Govt Working On Voluntary Codes Of Conduct For AI Companies
by Pooja Yadav · Inc42
SUMMARY
- The Ministry of Electronics and Information Technology is reportedly working on voluntary codes of conduct and ethics for companies working with AI and GenAI
- The voluntary code of conduct is expected to be released early next year
- This code is likely to include broad principles outlining measures companies can adopt during the training, deployment, and commercial sale of their LLMs and AI platforms
The Ministry of Electronics and Information Technology is reportedly working on voluntary codes of conduct and ethics for companies working with AI and GenAI.
As per an ET report, these guidelines will serve as “informal directive principles,” targeting companies that create large language models (LLMs) or utilise data for training AI and machine learning models.
The voluntary code of conduct is expected to be released early next year.
“A law on AI is still some time away. We are talking to all stakeholders right now to see what can be included and trying to get the industry onboard on a common set of principles and guidelines,” an official told ET.
This code is likely to include broad principles outlining measures companies can adopt during the training, deployment, and commercial sale of their LLMs and AI platforms. It will also emphasise identifying and addressing potential instances of misuse of these technologies, according to a second official.
“The G7 members have developed an 11-point code of conduct for companies which work in the AI and gen-AI space. Though what we are trying to develop will be completely different, the idea will be the same,” the source added.
Earlier this year, the IT ministry issued an advisory directing platforms to ensure that their computational resources do not enable bias or discrimination, or threaten the integrity of the electoral process, through the use of AI, generative AI, LLMs, or similar algorithms.
The advisory also said that any AI models, large language models (LLMs), generative AI software, or algorithms that are under testing, in beta stages, or otherwise unreliable must obtain “explicit permission from the Government of India” before being deployed for users on the Indian internet.
The advisory was later withdrawn, and the requirement for companies to register their AI models or LLMs before deployment was also dropped.
This comes at a time when the government has been actively pushing for rapid adoption of AI, with an eye on streamlining access to government services and digital public goods.
Additionally, Minister of State (MoS) for Electronics and Information Technology Jitin Prasada recently said that the government has constituted an advisory group to formulate a framework to regulate AI.
The Centre has also earmarked an outlay of INR 10,372 Cr under the India AI Mission to fuel India’s AI ecosystem. In August, it sought bids for the empanelment of entities to offer AI services on cloud under the mission.