Description
The AIACT.IN Version 4, released in November 2024, serves as India's first privately proposed draft for regulating AI technologies. It offers a comprehensive framework that can provide valuable insights for the Artificial Intelligence Safety Institute (AISI), which is currently under discussion by the Ministry of Electronics and Information Technology (MeitY). The draft bill outlines various provisions that align with the goals of AI safety and governance, making it a useful reference for shaping the AISI's objectives.
Risk-Centric Classification: AIACT.IN emphasizes a risk-based approach to AI regulation by categorizing AI systems into narrow, medium, high, and unintended risk categories. The bill's focus on outcome and impact-based risks aligns with the AISI's goal of ensuring safe deployment of AI technologies across sectors.
Post-Deployment Monitoring: The draft bill advocates for continuous monitoring of AI systems post-deployment, especially high-risk systems. This aligns with the AISI's potential role in overseeing the lifecycle of AI technologies to ensure they remain safe and compliant with evolving standards after their release.
Ethics and Accountability: AIACT.IN introduces an "Ethics Code" for AI systems, which could serve as a foundational element for the AISI to develop ethical guidelines tailored to India's unique socio-economic landscape. The bill also emphasizes transparency and accountability in AI-related government initiatives, which could be crucial for the AISI when setting up frameworks for public-private partnerships.
National Registry of AI Use Cases: The draft proposes a National Registry of Artificial Intelligence Use Cases to standardize and certify AI applications across sectors. This registry could serve as a model for the AISI to track and assess AI implementations in real time, ensuring compliance with safety standards.
Content Provenance and IP Protections: The bill highlights content provenance mechanisms to trace the origins of AI-generated content, which could help mitigate risks related to misinformation and deepfakes, an area that may fall under the purview of the AISI's safety protocols.