Under the EU AI Act, AI risks are not to be assessed just once; rather, they must be continuously monitored and reassessed throughout the lifecycle of the AI system. The Act emphasizes a risk management approach that requires ongoing vigilance and updating of risk assessments as the system evolves and interacts with real-world environments.
Relevant Provisions of the EU AI Act
1. Article 9 – Risk Management System
The EU AI Act mandates the establishment of a risk management system that continuously evaluates and mitigates risks associated with the AI system. Risk management under the Act is not a one-time exercise but a continuous, iterative process designed to:
- Identify and analyze risks at each stage of development, deployment, and post-market use.
- Continuously monitor the AI system’s performance and assess new risks that may emerge during its operation.
- Regularly update the risk mitigation measures in response to real-world usage data and unforeseen challenges.
The core requirement of the risk management system is to ensure that reasonably foreseeable risks to health, safety, or fundamental rights are identified and reduced to an acceptable level when the AI system is used in accordance with its intended purpose.
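While the Act prescribes obligations rather than tooling, the following minimal sketch illustrates how a provider's engineering team might maintain a risk register that is re-evaluated at each lifecycle stage, in the spirit of Article 9. All class, field, and function names here are hypothetical, not terms drawn from the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class LifecycleStage(Enum):
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    POST_MARKET = "post-market"


@dataclass
class Risk:
    description: str
    severity: int        # illustrative scale, e.g. 1 (low) to 5 (critical)
    mitigation: str
    last_reviewed: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class RiskRegister:
    """Hypothetical register, reassessed at every lifecycle stage."""
    risks: list[Risk] = field(default_factory=list)

    def reassess(self, stage: LifecycleStage,
                 new_observations: list[str]) -> None:
        # Article 9 calls for risk identification at each stage, not only
        # at design time, so newly observed risks are recorded here.
        for obs in new_observations:
            self.risks.append(Risk(description=f"[{stage.value}] {obs}",
                                   severity=3,
                                   mitigation="to be determined"))
        # Every known risk receives a fresh review timestamp at each pass.
        now = datetime.now(timezone.utc)
        for risk in self.risks:
            risk.last_reviewed = now
```

In practice, the reassessment step would be driven by monitoring data and incident reports rather than hand-entered observations, but the structure above captures the iterative character of the obligation.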
2. Article 61 – Post-Market Monitoring
Article 61 highlights the requirement for post-market monitoring of high-risk AI systems. Providers of such AI systems are obligated to implement and maintain a post-market monitoring plan to gather data on system performance in real-world settings. This provision ensures that providers:
- Regularly assess how the AI system operates in practice under real-world conditions (see the sketch after this list).
- Detect new risks or changes in risk profiles once the AI system is deployed.
- Address and mitigate risks that were either not foreseeable during development or arose due to changes in the operating environment.
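Purely by way of illustration (the Act does not mandate any particular monitoring technique, and the baseline figure, threshold, and function names below are assumptions), a post-market monitoring job could compare live performance against the level recorded at conformity assessment and escalate when the risk profile appears to have shifted:

```python
import statistics

# Hypothetical baseline accuracy recorded at conformity assessment time.
BASELINE_ACCURACY = 0.92
ALERT_THRESHOLD = 0.05  # illustrative tolerance before escalation


def post_market_check(recent_outcomes: list[bool]) -> None:
    """Flag a potential change in the risk profile from live outcome data.

    recent_outcomes: True where the system's output was later confirmed
    correct, False otherwise; the source of this ground truth is
    use-case specific and assumed here.
    """
    live_accuracy = statistics.mean(recent_outcomes)
    if BASELINE_ACCURACY - live_accuracy > ALERT_THRESHOLD:
        # A real plan would route this into the provider's corrective-action
        # and, where applicable, serious-incident reporting procedures.
        raise RuntimeError(
            f"Accuracy fell from {BASELINE_ACCURACY:.2f} to "
            f"{live_accuracy:.2f}; reassess the risk profile.")
```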
3. Article 43 – Conformity Assessments
Under Article 43, providers of high-risk AI systems are required to undergo conformity assessments before placing the system on the market. However, the conformity assessment process doesn’t end there. The EU AI Act stresses the need for providers to:
- Maintain updated documentation and technical files, which reflect any modifications, updates, or risk-related issues that occur post-deployment.
- Reassess the AI system if there are significant changes in its design, use, or deployment that could alter the risk profile, as sketched below.
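As a hedged illustration only (what amounts to a significant change is determined by the Act and the provider's own legal assessment, not by code, and the criteria below are hypothetical), a release pipeline could gate deployments on whether a change may alter the risk profile and therefore warrants revisiting the conformity assessment:

```python
from dataclasses import dataclass


@dataclass
class Change:
    retrained_model: bool        # model weights or training data changed
    new_intended_purpose: bool   # the stated purpose of the system changed
    new_user_population: bool    # the system is exposed to new user groups


def requires_reassessment(change: Change) -> bool:
    """Illustrative gate: flag changes that may alter the risk profile."""
    return (change.retrained_model
            or change.new_intended_purpose
            or change.new_user_population)


# Example: a retrained model would be flagged for reassessment.
assert requires_reassessment(Change(True, False, False))
```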
4. Annex IV – Technical Documentation
Annex IV requires providers to maintain detailed technical documentation that describes the AI system’s risk management strategy and performance. This documentation must be kept up to date throughout the system’s lifecycle, reflecting any changes to the AI’s operational environment or updates based on the system’s actual performance in real-world scenarios.
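One simple way to keep such documentation current, sketched here for illustration only (Annex IV prescribes the content of the technical file, not its storage format; the file name and fields below are assumptions), is an append-only changelog entry recorded with every system update:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

DOC_LOG = Path("technical_documentation_log.jsonl")  # hypothetical file


def record_update(change_summary: str, risk_impact: str) -> None:
    """Append a dated entry so the technical file reflects every change."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "change": change_summary,
        "risk_impact": risk_impact,
    }
    with DOC_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


record_update("Retrained model on Q3 production data",
              "No new risks identified; monitoring thresholds unchanged")
```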
Conclusion
The EU AI Act imposes a continuous risk assessment obligation for AI systems, especially high-risk AI.
Providers must consistently monitor, reassess, and adapt their systems to new risks as they emerge, ensuring that safety, transparency, and accountability are maintained over time. Risk assessments under the Act are, therefore, part of a dynamic and ongoing process.
To discuss further or for any feedback, you can reach us at ai@amlegals.com