Rajeev Chandrasekhar, Minister of State for Electronics and IT, clarified that the recent advisory by the IT Ministry regarding generative artificial intelligence (AI) systems does not require startups to seek government permission before deploying their technology. The advisory, which requires permission for deploying “untested” AI systems, primarily targets major technology corporations rather than startups, Chandrasekhar emphasized.
In a statement shared on the social media platform X, Chandrasekhar underscored that the advisory’s objective is to regulate significant platforms and does not extend to startups. He emphasized that seeking permission, labeling platforms under testing, and obtaining consent from users serve as protective measures for platforms, safeguarding them against potential consumer lawsuits.
While the advisory aims to mitigate the impact of content from major platforms on the upcoming Lok Sabha elections, its text does not explicitly exempt startups; the exemption rests on Chandrasekhar’s clarification alone. That carve-out is notable because startups are also capable of generating misleading information: concerns were recently raised about hallucinations in Krutrim, Ola’s beta generative AI offering, underscoring the relevance of such regulations across the AI domain.
As India braces for the upcoming Lok Sabha elections, the IT Ministry issued an advisory to AI companies, including Google and OpenAI, cautioning against generating illegal responses or compromising the electoral process’s integrity. Platforms offering “under-testing/unreliable” AI systems to Indian users are required to seek explicit permission from the government and appropriately label the potential fallibility of their output.
Moreover, the government aims to introduce traceability requirements, ensuring that content generated by these platforms can be traced back to its source. Recent controversies surrounding Google’s AI platform Gemini, particularly its responses regarding Prime Minister Narendra Modi, underscore the necessity for stringent regulations in the AI landscape.
The advisory emphasizes the need for unique metadata or identifiers embedded in synthesized content to identify the originator, thereby curbing the dissemination of misinformation or deepfakes. Given that AI-generated outputs are influenced by various factors, including training data and algorithmic filters, regulatory measures are essential to address potential errors or misinformation generated by these platforms.
In conclusion, while the advisory primarily targets major AI platforms, it signifies a broader effort by the government to regulate AI technologies and ensure their responsible deployment, particularly in sensitive contexts such as elections.