In a significant move aimed at regulating generative AI companies like Google and OpenAI in India, the Ministry of Electronics and Information Technology (MeitY) has issued an advisory to firms operating such platforms. The advisory emphasizes that these platforms, including foundational models and wrappers, must ensure that their services do not produce responses that violate Indian laws or jeopardize the integrity of the electoral process.
Platforms offering AI systems or large language models (LLMs) that are still in the testing phase or are deemed unreliable must seek explicit permission from the Centre before making them available to Indian users. Additionally, they are required to clearly label the potential fallibility or unreliability of the generated output.
The advisory comes in the wake of recent controversies surrounding Google’s AI platform Gemini, which faced scrutiny for generating responses related to Prime Minister Narendra Modi. MeitY had reportedly planned to issue a show-cause notice to Google over the matter. The directive also covers other platforms, such as Ola’s beta generative AI offering Krutrim, which has drawn criticism for hallucinations.
Minister of State for Electronics and IT, Rajeev Chandrasekhar, stated that the advisory serves as a precursor to future legislation regulating generative AI platforms in India. He said the requirement to seek government permission would effectively create a sandbox environment for these companies, and that they may be asked to provide a demo of their platforms and the consent architecture they employ.
The advisory, sent to intermediaries including Google and OpenAI, also extends to platforms facilitating the creation of deepfakes, including Adobe. Companies are instructed to submit an action taken report within 15 days. The advisory stresses the need for transparency regarding the fallibility of AI models and prohibits any bias or discrimination that could threaten the integrity of the electoral process.
Chandrasekhar underscored the significance of safeguarding the electoral process, particularly with the Lok Sabha elections due later this year. He acknowledged the potential misuse of misinformation and deepfakes to influence election outcomes, emphasizing the need for proactive measures to combat such threats.
The advisory aligns with the government’s commitment to ensuring the responsible and ethical use of AI technologies, especially in sensitive areas like electoral processes. By seeking government permission and adhering to transparency guidelines, AI companies are expected to contribute to a more trustworthy and secure digital environment in India.