Introduction

Social platforms designed for interaction between autonomous Artificial Intelligence (“AI”) agents present a new category of legal and technical risk. Unlike conventional platforms where user conduct is the primary source of exposure, AI-agent environments involve automated generation, storage, and exchange of data without continuous human oversight. This increases the likelihood that technical misconfigurations may lead to large-scale disclosure of information. The risks can be illustrated through emerging AI-agent platforms such as Moltbook, where automated agents interact, store prompts, and access external services. Security discussions surrounding such platforms have highlighted concerns relating to rapid AI-assisted development practices that may bypass structured security review. This raises the key legal question of responsibility for disclosure when the immediate act of exposure is performed by an autonomous system.

Autonomous Systems and Liability Attribution

AI-agent platforms differ from traditional intermediaries because agents may independently generate content, retrieve external data, and execute automated workflows. These characteristics create additional risk vectors, including unauthorized data scraping, prompt manipulation, unintended disclosure through persistent memory, and exposure of integration credentials. Prompt injection represents a particularly relevant concern. Malicious instructions embedded within natural language inputs may cause an agent to reveal confidential information or perform unintended actions without exploitation of conventional software vulnerabilities. Where agents maintain persistent memory or tool access, the consequences of such manipulation may extend beyond a single interaction. Despite the autonomy of these systems, Indian law does not recognise AI agents as separate legal persons. Liability therefore continues to rest with the natural or juristic entity that deploys or controls the system, subject to principles of negligence, contractual responsibility, and statutory compliance.

Regulatory Exposure under the Digital Personal Data Protection Act, 2023

The Digital Personal Data Protection Act, 2023 (“DPDP Act”) provides the principal framework governing personal data exposure in such environments. Platforms operating AI-agent ecosystems may qualify as Data Fiduciaries and are required under Section 8 to implement reasonable security safeguards and prevent personal data breaches. The Act does not distinguish between sensitive and non-sensitive personal data for purposes of security obligations. Consequently, conversational records, identifiers, and behavioral metadata generated by AI agents fall within the same protection requirement as other personal data. The enforcement structure also differs from earlier regimes. While Section 43A of the Information Technology Act focused on compensation for wrongful loss, the DPDP Act empowers the Data Protection Board of India to impose significant administrative penalties, potentially extending to ₹250 crore depending on the nature and severity of the contravention.

Criminal Exposure and the Legislative Context

The Jan Vishwas (Amendment of Provisions) Act, 2023 introduced decriminalisation of certain minor offences across multiple statutes and emphasised monetary penalties for regulatory non-compliance. However, criminal liability under Section 72A of the Information Technology Act, 2000 (“IT Act”) for wrongful disclosure of personal information in breach of lawful contract remains in force. As a result, organisations operating AI-agent platforms may face parallel exposure, including administrative penalties under the DPDP Act, civil liability arising from contractual or tort claims, and criminal consequences in cases involving intentional or unlawful disclosure.

Incident Reporting and CERT-In Requirements

Breach reporting obligations are particularly relevant for AI-driven platforms. The DPDP Act requires notification of personal data breaches to the Data Protection Board without undue delay, although it does not prescribe a fixed statutory timeline. Separately, directions issued by the Indian Computer Emergency Response Team (CERT-In) mandate reporting of specified cybersecurity incidents within six hours of noticing such incidents. Credential compromise, unauthorized access, and data leakage fall within the scope of reportable events. Non-compliance may attract action under Section 70B (7) of the IT Act. Given the automated and high-volume nature of AI-agent interactions, timely detection of such incidents may require continuous monitoring and structured incident response processes.

Intermediary Obligations and Safe Harbour Considerations

Where AI-agent platforms qualify as intermediaries, Safe Harbour protection under Section 79 of the IT Act depends upon compliance with due diligence obligations under the Intermediary Guidelines. Indian law does not currently prescribe a universal three-hour takedown requirement for AI-generated content. However, failure to respond appropriately to unlawful content, delays in grievance redressal, or inadequate control over automated dissemination may affect the availability of intermediary immunity. Since AI agents lack legal personality, responsibility for unlawful automated content may attach to the platform operator where due diligence obligations are not satisfied.

AMLEGALS Remarks

AI-agent platforms demonstrate how technical configuration errors may quickly develop into regulatory and legal exposure. The combined effect of obligations under the DPDP Act, CERT-In reporting requirements, intermediary due diligence standards, and residual criminal provisions establishes a demanding compliance environment for organisations deploying autonomous systems. For such entities, effective risk management requires implementation of appropriate access controls, monitoring mechanisms, and incident response procedures alongside legal compliance measures. Aligning system design with statutory expectations is essential to reduce the likelihood that technical failures will result in significant legal consequences.

For any queries or feedback, feel free to connect with mridusha.guha@amlegals.com or Khilansha.mukhija@amlegals.com

Leave a Reply

Your email address will not be published. Required fields are marked *

 

Disclaimer & Confirmation

As per the rules of the Bar Council of India, law firms are not permitted to solicit work and advertise. By clicking on the “I AGREE” button below, user acknowledges the following:

    • there has been no advertisements, personal communication, solicitation, invitation or inducement of any sort whatsoever from us or any of our members to solicit any work through this website;
    • user wishes to gain more information about AMLEGALS and its attorneys for his/her own information and use;
  • the information about us is provided to the user on his/her specific request and any information obtained or materials downloaded from this website is completely at their own volition and any transmission, receipt or use of this site does not create any lawyer-client relationship; and that
  • We are not responsible for any reliance that a user places on such information and shall not be liable for any loss or damage caused due to any inaccuracy in or exclusion of any information, or its interpretation thereof.

However, the user is advised to confirm the veracity of the same from independent and expert sources.