By Dr. Thorsten Troge, of Taylor Wessing
Dark patterns have been a controversial topic in the digital world, as they involve manipulating users into making decisions that they may not otherwise have made. They can appear in many forms, including misleading wording, confusing design, and hidden options. AI can make these patterns even more effective. For example, AI algorithms can personalise dark patterns to individual users based on their browsing history, social media activity, or other data, making it even harder for users to recognise that they are being manipulated. AI can also be used to develop new and more sophisticated types of dark pattern, for example by generating fake reviews, endorsements, or testimonials that are more convincing and harder to detect than those created by humans.
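To make the personalisation risk concrete, the following is a minimal, purely hypothetical sketch (in Python) of how a simple epsilon-greedy bandit could learn which dark-pattern variant converts best for each user segment. All variant names, segments, and data structures are invented for illustration and do not describe any real system.

```python
# Purely illustrative sketch of the risk described above: an epsilon-greedy
# bandit that learns which (invented) dark-pattern variant converts best for
# each user segment. No real product or dataset is implied.
import random
from collections import defaultdict

VARIANTS = ["countdown_timer", "low_stock_badge", "preticked_addon"]

# Observed outcomes per (segment, variant), e.g. derived from browsing history.
stats = defaultdict(lambda: {"shown": 0, "converted": 0})

def choose_variant(segment: str, epsilon: float = 0.1) -> str:
    """Exploit the best-performing variant for this segment, exploring a
    random one with probability epsilon."""
    if random.random() < epsilon:
        return random.choice(VARIANTS)

    def conversion_rate(variant: str) -> float:
        s = stats[(segment, variant)]
        return s["converted"] / s["shown"] if s["shown"] else 0.0

    return max(VARIANTS, key=conversion_rate)

def record_outcome(segment: str, variant: str, converted: bool) -> None:
    """Feed the outcome back so the pattern adapts to each segment."""
    s = stats[(segment, variant)]
    s["shown"] += 1
    s["converted"] += int(converted)

# Usage: the more data accumulates, the more tailored (and harder to
# recognise) the manipulation becomes.
variant = choose_variant("late_night_mobile_shopper")
record_outcome("late_night_mobile_shopper", variant, converted=True)
```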
Overall, the role of AI in dark patterns is a concern because it can amplify the effectiveness of these manipulative tactics and create new and more sophisticated ways of deceiving and manipulating users. To protect users, the European Union has enacted legislation including the Digital Services Act (DSA), and is working on an AI Act. This legislation has significant implications for dark patterns in Europe, including in Germany, where advertising law plays a vital role in protecting consumers.
New EU rules covering dark patterns
The EU Commission declared its intention to tackle dark patterns in the Consumer Agenda 2020 and has done so in parts of the Unfair Commercial Practices Directive (UCPD), the DSA, and the Digital Markets Act (DMA).
The Digital Services Act contains provisions that explicitly address dark patterns. Becoming fully applicable on 17 February 2024, the DSA defines dark patterns as practices that materially distort or impair, either on purpose or in effect, the ability of recipients of the service to make autonomous and informed choices or decisions.
Article 25 of the DSA prohibits online platform providers from designing, organising, or operating their online interfaces in a way that deceives or manipulates users, or that otherwise materially distorts or impairs their ability to make free and informed decisions. This prohibition is complemented by the already existing UCPD, which generally addresses misleading or pressuring practices, including dark patterns. The DMA, in Article 5(2), prohibits “gatekeepers” from repeatedly soliciting consent through so-called nagging.
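The DMA's anti-nagging rule lends itself to a short illustration. The hypothetical sketch below shows one way a platform could rate-limit repeated consent requests for the same purpose to at most one repeat per rolling year; the data model and the exact counting logic are assumptions made for illustration, not a mechanism prescribed by the Regulation.

```python
# Hypothetical sketch: rate-limiting repeated consent requests per purpose,
# loosely modelled on the DMA rule that a refused consent request for the
# same purpose may not be repeated more than once within a year. The data
# model and counting logic are assumptions for illustration only.
from datetime import datetime, timedelta

ONE_YEAR = timedelta(days=365)

class ConsentRequestLog:
    def __init__(self) -> None:
        self._refused_requests: dict[str, list[datetime]] = {}

    def record_refusal(self, purpose: str, when: datetime) -> None:
        """Log a consent request for `purpose` that the user refused."""
        self._refused_requests.setdefault(purpose, []).append(when)

    def may_repeat_request(self, purpose: str, now: datetime) -> bool:
        """Allow at most one repeat request within a rolling year after a
        refusal (the assumed reading of 'not more than once within a year')."""
        recent = [t for t in self._refused_requests.get(purpose, [])
                  if now - t < ONE_YEAR]
        # One recent refusal: a single repeat is still allowed.
        # Two recent refusals: the repeat has been used up; wait a year.
        return len(recent) < 2

# Usage: after one refusal a repeat is allowed; after two it is not.
log = ConsentRequestLog()
now = datetime.now()
log.record_refusal("data_combination", now - timedelta(days=30))
print(log.may_repeat_request("data_combination", now))   # True
log.record_refusal("data_combination", now)
print(log.may_repeat_request("data_combination", now))   # False
```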
The current approach to regulating dark patterns is, consequently, somewhat fragmented, and none of this legislation addresses the additional dangers or lack of transparency resulting from the use of AI. This means all eyes are on the European Commission’s draft AI Act. The AI Act aims to establish a common regulatory framework for AI across the EU, with a particular focus on ensuring that AI is trustworthy and transparent. The draft Regulation currently defines AI as software that can perform tasks without being explicitly programmed to do so (although the precise wording of the definition is the subject of considerable debate). One of the key provisions of the AI Act is the prohibition of certain AI practices, including those that manipulate users through the use of dark patterns. Specifically, the Act prohibits the use of AI systems that “exploit vulnerabilities of specific groups of persons” or “use subliminal techniques beyond a person’s consciousness” to manipulate individuals into making decisions that they may not otherwise have made.
The prohibition of subliminal techniques beyond a person’s consciousness is particularly relevant to dark patterns. These techniques are often used in dark patterns to manipulate users into making decisions that they may not have otherwise made, without their conscious awareness of being manipulated. The AI Act seeks to prevent this type of manipulation.
Misleading dark patterns are unfair, but what about other manipulative designs?
Until the new EU legislation comes into force, dark patterns are regulated in Germany mainly by general unfair competition law (UWG) and telemedia law (TMG/MStV). This existing legislation already covers some aspects of dark patterns, especially misleading ones. While the ‘blacklist’ of the UCPD does not list any single dark pattern, many dark patterns use misleading or non-transparent designs which fall within the scope of misleading advertising law (see Section 5 UWG/Article 6 UCPD, which prohibits providing incorrect information about the main characteristics of a product if it is likely to cause the average consumer to take a transactional decision they would not otherwise have taken).
Examples are so-called ‘scarcity patterns’, which suggest scarce availability of goods or services through prompts on the user interface. However, many scarcity patterns fly under the radar of misleading advertising law because they do not contain untrue or inaccurate information. For example, stating that a product in a virtual shopping cart in an online shop is reserved for only 15 minutes is a fact; while accurate, the reservation period may have been deliberately chosen by the online shop operator to be short in order to induce a purchase decision. The reference to other users of a booking portal who are “currently looking” at a certain offer may also exert psychological pressure in certain situations. In such cases, however, the pressure is not caused by any deception in the design itself, and the psychological or emotional effect of the information on the user, together with its manipulative effect (if any), is much harder to capture as an unfair commercial practice.
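A hypothetical sketch of such a scarcity timer shows why it escapes the misleading-advertising rules: every statement it produces is literally true, yet the 15-minute window is an arbitrary choice by the operator. All names below are invented for illustration.

```python
# Hypothetical sketch of a scarcity timer: everything it displays is
# factually true, yet the 15-minute window is an arbitrary operator choice
# intended to pressure the user. Names are invented for illustration.
from datetime import datetime, timedelta

RESERVATION_WINDOW = timedelta(minutes=15)  # deliberately short, but real

def reservation_banner(added_at: datetime, now: datetime) -> str:
    remaining = RESERVATION_WINDOW - (now - added_at)
    if remaining <= timedelta(0):
        return "Your reservation has expired."
    mins, secs = divmod(int(remaining.total_seconds()), 60)
    # Literally accurate, so it escapes the misleading-information rules;
    # the pressure lies in the framing, not in any false statement.
    return f"Item reserved for you: {mins:02d}:{secs:02d} remaining"

now = datetime.now()
print(reservation_banner(added_at=now - timedelta(minutes=13), now=now))
# -> "Item reserved for you: 02:00 remaining"
```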
Some dark patterns are covered by the general prohibitions on aggressive commercial practices that limit the consumer’s freedom of choice or conduct with regard to the product, thereby distorting their economic behaviour (Section 4a UWG/Articles 8 and 9 UCPD). Aggressive acts are, in particular, harassment, coercion, and undue influence, and some pressurising dark patterns qualify as harassment. Nevertheless, the impairment needs to be “significant”. Nagging (repeated requests to perform a certain action) or click-fatigue patterns (a high number of clicks to reach certain options, such as unsubscribing) may fall under this prohibition, but no clear line has yet been set by the courts or authorities.
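To illustrate a click-fatigue pattern, the hypothetical sketch below simply counts the interactions in an invented unsubscribe flow; whether this many steps already amounts to a “significant” impairment of the consumer’s freedom of choice is precisely the question the courts have not yet answered.

```python
# Hypothetical sketch of a click-fatigue pattern: an unsubscribe flow that is
# technically available but buried behind repeated steps. Every step name is
# invented for illustration.
UNSUBSCRIBE_STEPS = [
    "Open account settings",
    "Find 'Manage subscription' (below the fold)",
    "Click 'Pause or cancel'",
    "Decline a discount offer",
    "Decline a plan downgrade",
    "Re-enter your password",
    "Confirm: 'Are you sure you want to lose your benefits?'",
    "Click the link in a final confirmation e-mail",
]

# Whether eight interactions already constitute a 'significant' impairment
# under Section 4a UWG is exactly the open legal question.
print(f"Interactions required to unsubscribe: {len(UNSUBSCRIBE_STEPS)}")
```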
AI-specific dark patterns
The AI Act will bring additional transparency to situations in which AI is being used or is communicating with the user. Given the advanced communication skills of recent AI chat and natural language understanding (NLU) bots, such as ChatGPT, users may often not even be aware that they are communicating with software rather than a human being. For so-called social bots, a labelling obligation already applies in Germany to providers of telemedia in social networks under Article 18(3) of the German Interstate Media Treaty. The European Commission is currently discussing further regulation: Article 52 of the draft AI Act provides for a labelling obligation for AI systems that are intended to interact with natural persons; such systems must be designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. In the event of an infringement, fines of up to EUR 20 million or up to 2% of total worldwide annual turnover for the preceding business year may be imposed.
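As a worked example of the cited fine ceiling (assuming, as in comparable EU instruments such as the GDPR, that the higher of the two caps applies; the final AI Act text may set this differently), the sketch below computes the maximum fine for a hypothetical turnover figure.

```python
# Worked example of the cited ceiling, assuming (as in comparable EU
# instruments such as the GDPR) that the higher of the two caps applies;
# the final AI Act text may set this differently.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """EUR 20 million or 2% of total worldwide annual turnover."""
    return max(20_000_000, 0.02 * worldwide_annual_turnover_eur)

# For a company with EUR 5 billion turnover, the 2% cap dominates:
print(f"EUR {max_fine_eur(5_000_000_000):,.0f}")  # EUR 100,000,000
```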
Will further regulation be needed?
The existing regulatory framework covers the most obvious unfair commercial practices around dark patterns, and further protections against AI-powered dark patterns can be expected when the European AI Act enters into force. However, the nuanced and advanced forms of influencing users with the help of AI will make it extremely difficult to enact rules that adequately address and regulate what will undoubtedly be increasingly sophisticated dark patterns.
It will be interesting to see whether, as the AI Act progresses, and given the issues caused by AI-powered dark patterns, the legislators will add provisions relating specifically to dark patterns. If not, an alternative way of regulating them in the context of advertising and e-commerce would be to include specific, commonly used dark patterns in the UCPD’s blacklist of unfair commercial practices.
One important factor will be how the market, users, and society react to future AI developments; this will also determine whether further regulation is called for.
This article first appeared in Taylor Wessing’s Interface.