Introduction

The European Parliament and the Council of the European Union have reached a provisional political agreement to amend the EU Artificial Intelligence Act (“AI Act”) as part of the broader “Digital Omnibus on AI” legislative initiative. One of the key developments under the proposed amendments is the prohibition of AI systems designed or used to generate non-consensual sexually explicit or intimate content, including so-called “nudifier” applications and AI-generated child sexual abuse material (“CSAM”). The proposed measures reflect the European Union’s continued commitment to safeguarding privacy, human dignity, bodily autonomy, and other fundamental rights in an increasingly AI-driven digital environment.

The Digital Omnibus on AI

The intent behind the nudification ban is twofold: first, to make it easier for businesses, particularly small and medium-sized enterprises (“SMEs”), to comply with the law; and second, to reaffirm the EU’s continuing commitment to protecting fundamental rights.

Additionally, the agreement sets out a detailed timeline for the implementation of requirements for high-risk AI systems, some of which will not take effect until 2027 or 2028, in order to allow time for the technical standards and support tools needed to establish compliance to be developed. The strengthening of these rules reflects the political consensus within the EU that certain uses of AI are at odds with EU values.

Reasoning behind the Ban

The substance of the ban is that any AI system that generates CSAM, or that depicts intimate body parts or sexually explicit acts of an identifiable individual without that individual’s consent, may not be placed on the EU market for such purposes.

The prohibition is intentionally technology-neutral: whether an image, video, or audio output is generated by a so-called “nudifier” app or by a general-purpose generative AI system with image-editing capabilities does not determine whether that output is prohibited under the AI Act. Accordingly, AI systems whose design, or lack of safeguards, allows for the systematic production of non-consensual intimate imagery or CSAM fall within the class of “prohibited practices” as defined in the AI Act.

Fundamental Rights and Gender‑Based Violence

The rationale for banning this practice rests on the need to protect the fundamental rights of all persons, including privacy, data protection, equality, and human dignity. The growing prevalence of sexualised deepfakes and nudified images is a manifestation of gender-based violence by means of technology, and such content perpetuates harmful misogynistic and sexually exploitative behaviours online.

The ban introduced by the AI Act operates in addition to the obligations of online platforms under the Digital Services Act (“DSA”) to take action against illegal content, including CSAM and certain non-consensual intimate imagery. The General Data Protection Regulation (“GDPR”) and national data-protection laws also apply, since nudification tools process biometric and other sensitive personal data in ways that are rarely compatible with the principles of consent, purpose limitation, or data minimisation. The AI Act, however, seeks to address the problem at its source by targeting the infrastructure that facilitates such abuse.

Obligations and Exposure for Platforms

The DSA and the AI Act impose obligations on large online service providers and hosting companies to ensure that their platforms do not become large-scale distribution channels for AI-generated sexual abuse material. As a result of the nudification ban, platforms that incorporate or provide access to AI tools may need to verify that embedded AI models comply with the AI Act’s prohibitions and cannot readily be misused to create prohibited content. The regulations may encourage the development and use of deepfake detectors, watermarking technologies, and provenance trackers, although experts and regulators caution that detection is often imperfect and may raise new concerns regarding privacy and surveillance.

Victims’ Rights and Practical Protection

From the victim’s perspective, the ban also formally recognises that non-consensual sexual deepfakes are serious violations of victims’ rights, rather than mere “online drama”. With this law, the EU ultimately aims to prohibit AI systems used to create non-consensual images, to diminish the availability of such images, and to give regulators clear authority to act before the harm occurs, instead of merely removing the offending content from the internet after the fact.

AMLEGALS Remarks

The proposed prohibition has been widely welcomed as a necessary step towards addressing AI-enabled sexual abuse and protecting individual dignity, but concerns have also been raised regarding its potential impact on legitimate research, innovation, and creative uses of generative AI. Critics argue that the absence of precise definitions for terms such as “non-consensual” and “sexually explicit” content may lead to broad or inconsistent interpretations, potentially creating a chilling effect on lawful experimentation, artistic expression, and technological development.

The proposed framework seeks to balance innovation with accountability by continuing to support regulatory sandboxes and excluding certain low-risk AI functionalities from the scope of prohibited or high-risk classifications. At the same time, the legislation emphasises that the development and deployment of AI systems must not come at the cost of privacy, human dignity, or other fundamental rights.

For any queries or feedback, feel free to connect with mridusha.guha@amlegals.com or Khilansha.mukhija@amlegals.com
