MeitY released the “Report on AI Governance Guidelines Development” on 5th January, 2025 — just two days after the draft Digital Personal Data Protection Rules, 2025 (DPDP Rules) were released for public circulation and feedback.
This write-up examines the AI Advisory Report from the limited perspective of data privacy.
For better understanding, it has been divided into two parts.
Part – A
The report emphasizes the necessity of an ecosystem approach to AI governance, where multiple actors are involved across the lifecycle of an AI system.
1. AI Ecosystem
These actors include data principals, data providers, AI developers, AI deployers, and end-users, all of whom play interconnected roles in the AI ecosystem.
The interconnected roles of the various stakeholders can be described as follows:
- Data Principals: Individuals whose personal data is being processed. Their privacy rights are at the core of data protection considerations.
- Data Providers: Entities supplying data for AI training and operations. They bear significant responsibility for ensuring data quality, integrity, and compliance with privacy laws.
- AI Developers (including Model Builders): Responsible for designing and training AI systems. They must embed privacy-by-design principles and ensure compliance with data protection regulations throughout the development process.
- AI Deployers (including App Builders and Distributors): Tasked with implementing AI applications. They must ensure that deployed systems respect user privacy and comply with relevant laws.
- End-users (Businesses and Citizens): While primarily beneficiaries of AI systems, they also play a crucial role in providing feedback and identifying potential privacy risks or violations.
- Government and Regulatory Bodies: Responsible for creating and enforcing data privacy frameworks specific to AI. This includes bodies like the Ministry of Electronics and IT (MeitY) and sector-specific regulators.
- Academia and Research Institutions: Contribute to developing privacy-enhancing technologies and ethical frameworks for AI governance.
By adopting an ecosystem view, governance can become more holistic and effective, ensuring better distribution of responsibilities and liabilities among actors.
This approach enables collaboration, ensures accountability, and allows for the mitigation of risks across the entire AI lifecycle, ultimately leading to more robust and inclusive governance outcomes.
2. Definition and Scope of AI Systems
The AI Advisory Report has taken a nuanced approach to defining Artificial Intelligence (AI) and AI systems. Rather than providing a rigid definition, it acknowledges the evolving nature of AI technologies. The report describes AI as:
- A range of technologies capable of performing complex tasks often without active human control or supervision.
- Systems that can generate outputs that may be unexpected or difficult for humans to fully comprehend.
- Technologies driven by significant advancements in machine learning, access to large datasets, computational performance, natural language processing, and the widespread availability of connected devices.
This broad conceptualization allows for flexibility in governance approaches, focusing on the effects and potential harms of AI rather than being constrained by a narrow definition. This is particularly relevant for data privacy considerations, as it enables a more adaptive regulatory framework.
Part – B
With privacy at centre stage, the privacy-related substance of the AI Report can best be understood in the following manner:
1. Privacy as a Core Principle
- The report emphasizes that AI systems must be developed, deployed, and used in compliance with applicable data protection laws.
- This includes respecting users’ privacy and ensuring mechanisms for data quality, data integrity, and “security by design.”
- AI systems should incorporate privacy enhancing technologies (PETs) to minimize risks of data misuse or breaches.
2. Alignment with the Digital Personal Data Protection Act, 2023 (DPDPA)
- The DPDPA is highlighted as a key legal framework for ensuring data privacy in AI systems.
- The DPDPA mandates data fiduciaries to implement appropriate security safeguards to protect personal data against breaches.
AI systems must comply with the DPDPA’s principles, including:
- Purpose limitation: Data collected must be used only for the specific purpose for which it was gathered.
- Data minimization: Only the minimum amount of data necessary for the intended purpose should be collected.
- Consent management: AI systems must ensure that data principals provide informed consent for data usage.
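The purpose-limitation and consent principles above can be illustrated with a minimal sketch. The `ConsentRecord` type and `may_process` helper below are hypothetical illustrations, not part of the DPDPA or any real compliance library:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Hypothetical record of a data principal's informed consent."""
    principal_id: str
    permitted_purposes: set = field(default_factory=set)

def may_process(record: ConsentRecord, purpose: str) -> bool:
    """Purpose limitation: allow processing only for a consented purpose."""
    return purpose in record.permitted_purposes

# The principal consented only to model training, not ad targeting.
consent = ConsentRecord("user-42", {"model_training"})
print(may_process(consent, "model_training"))  # True
print(may_process(consent, "ad_targeting"))    # False
```

In a real system, such checks would be enforced at every data-access point and backed by an auditable consent log.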
3. Privacy Enhancing Technologies (PETs)
- The report encourages the adoption of Privacy Enhancing Technologies to protect user data during the development and deployment of AI systems.
Examples include:
- Synthetic data generation: Creating artificial datasets that mimic real data without exposing sensitive information.
- Machine unlearning: Techniques to remove specific data points from AI models to comply with user requests or legal requirements.
- Federated learning: Training AI models across decentralized data sources without transferring raw data to a central location.
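Federated learning, for instance, can be sketched in a few lines: each client trains on its own data locally and shares only model parameters with the server, never the raw records. The toy one-parameter linear model below is purely illustrative:

```python
# Minimal federated-averaging sketch (illustrative, not a real FL framework).
def local_update(weights, data, lr=0.1):
    # One pass of gradient steps for a 1-D linear model y = w * x,
    # computed entirely on the client's own data.
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def federated_average(global_w, client_datasets):
    # Clients compute updates locally; the server only averages parameters.
    updates = [local_update(global_w, d) for d in client_datasets]
    return sum(updates) / len(updates)

# Two clients whose private data both follow y = 2x; raw points never move.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = federated_average(w, clients)
print(round(w, 2))  # converges towards 2.0
```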
The report’s recommendations align with global best practices in AI governance and data privacy.
4. Data Governance and Traceability
- The report stresses the importance of data governance frameworks to ensure the traceability of data used in AI systems.
This could be implemented by:
- Tracking the lifecycle of data from collection to processing and usage.
- Ensuring transparency in how data is used to train AI models.
- Implementing mechanisms to audit and verify data quality and integrity.
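A lifecycle-traceability mechanism of this kind can be sketched as an append-only lineage log in which each entry hashes its predecessor, making tampering with the recorded history detectable. The `DataLineageLog` class below is a hypothetical illustration, not a prescribed design:

```python
import hashlib
import json

class DataLineageLog:
    """Append-only lineage log: each entry commits to the previous one's
    hash, so any retroactive edit breaks the chain (illustrative sketch)."""
    def __init__(self):
        self.entries = []

    def record(self, dataset_id, stage, detail):
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = {"dataset": dataset_id, "stage": stage,
                   "detail": detail, "prev": prev_hash}
        payload["hash"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        self.entries.append(payload)

    def verify(self):
        # Recompute every hash and check the chain links back to "genesis".
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev or e["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

# Trace a dataset from collection through processing to model usage.
log = DataLineageLog()
log.record("survey-2024", "collection", "consent obtained, purpose: training")
log.record("survey-2024", "processing", "de-identified, PII columns dropped")
log.record("survey-2024", "usage", "used to fine-tune model v1.2")
print(log.verify())  # True
```

An auditor can replay `verify()` at any time; altering any recorded stage after the fact invalidates the chain.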
5. Addressing Privacy Risks in AI Systems
The report identifies specific privacy risks associated with AI systems, including:
- Unauthorized data usage: AI systems using personal data without proper consent or for unintended purposes.
- Data breaches: Risks of sensitive data being exposed due to vulnerabilities in AI systems.
- Bias and discrimination: Privacy violations arising from biased datasets or discriminatory AI outputs.
To mitigate these risks, the report recommends:
- Regular audits of AI systems to ensure compliance with privacy laws.
- Transparency measures, such as publishing model cards and data usage reports.
6. Human Centered Values & Regulatory Gap
AI systems should be subject to human oversight, i.e. human-in-the-loop (HITL), to prevent undue reliance on automated decision-making that could infringe on privacy rights. The report also flags regulatory gaps:
- The lack of clarity on the use of personal data for training AI models.
- The need for guidelines on data anonymization and de-identification to ensure compliance with privacy laws.
- The absence of specific provisions for addressing privacy violations caused by AI systems.
- Recommendations include strengthening the DPDPA to address AI-specific privacy concerns.
- Advocating the introduction of new provisions under the proposed Digital India Act (DIA) to regulate AI systems’ data usage.
7. Voluntary Industry Commitments
The report encourages industry players to adopt voluntary commitments to enhance data privacy, such as:
- Publishing transparency reports on data usage and privacy safeguards.
- Conducting internal and external audits to ensure compliance with privacy standards.
- Implementing robust consent management systems to empower users.
8. Technological Measures for Privacy Protection
The report highlights the role of technology in enhancing data privacy, including:
- Watermarking and labelling: Techniques to track and identify data sources and ensure accountability.
- Content provenance tracking: Mechanisms to trace the origin and modifications of data used in AI systems.
- Automated compliance tools: Using AI to monitor and enforce data privacy regulations.
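Content provenance tracking can be sketched as a tagged manifest: the data producer attaches an authentication tag over the content hash and source, and any downstream party holding the shared key can verify both origin and integrity. The functions and the shared key below are hypothetical illustrations built on Python's standard `hmac` and `hashlib` modules:

```python
import hashlib
import hmac
import json

# Assumption for the sketch: producer and verifier share this key.
SECRET_KEY = b"demo-key"

def make_manifest(content: bytes, source: str) -> dict:
    """Producer side: bind a content hash and source under an HMAC tag."""
    manifest = {"source": source,
                "content_sha256": hashlib.sha256(content).hexdigest()}
    manifest["tag"] = hmac.new(SECRET_KEY, json.dumps(
        manifest, sort_keys=True).encode(), hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Verifier side: check the tag, then check the content still matches."""
    body = {k: v for k, v in manifest.items() if k != "tag"}
    expected = hmac.new(SECRET_KEY, json.dumps(
        body, sort_keys=True).encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["tag"])
            and body["content_sha256"] == hashlib.sha256(content).hexdigest())

m = make_manifest(b"training sample", "dataset-A")
print(verify_manifest(b"training sample", m))   # True
print(verify_manifest(b"tampered sample", m))   # False
```

Production provenance schemes typically use public-key signatures rather than a shared secret, so that verification does not require trusting the verifier with signing capability.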
9. AI Incident Database
- The report proposes the creation of an AI Incident Database to document privacy violations and other risks associated with AI systems.
- This database would serve as a repository of real-world incidents to inform governance measures.
- It focuses on harm mitigation rather than fault-finding, encouraging voluntary reporting by stakeholders.
10. Whole-of-Government Approach
- The report advocates a whole-of-government approach to addressing privacy concerns in AI systems.
- It emphasises coordinating efforts across regulators and government departments to ensure consistent enforcement of privacy laws.
- It calls for developing a common roadmap for AI governance that prioritizes data privacy and security.
11. Challenges and Future Directions
The report acknowledges several challenges in implementing robust data privacy measures in AI systems:
- Evolving Nature of AI: The rapid advancement of AI technologies necessitates flexible and adaptive privacy governance frameworks.
- Balancing Innovation and Protection: Striking the right balance between fostering AI innovation and ensuring stringent data privacy protections.
- Cross Sectoral Implications: Addressing the varying levels of regulatory oversight across different sectors while maintaining a consistent approach to data privacy in AI.
- Capacity Building: Enhancing the technical and regulatory capabilities of governance bodies to effectively oversee AI systems and their data privacy implications.
Conclusion
The report underscores the critical importance of data privacy in AI governance. It emphasizes compliance with existing laws like the DPDPA, the adoption of privacy-enhancing technologies, and the need for robust data governance frameworks.
By addressing privacy risks and legal gaps, the report aims to foster trust in AI systems while enabling innovation.
The final take rests on the FAT principle, i.e. Fairness, Accountability, and Transparency, in the first place.
Team AMLEGALS
For any queries or feedback, feel free to write to mridusha.guha@amlegals.com