NEW DELHI: European Union (EU) lawmakers have agreed on the world’s first comprehensive regulation for artificial intelligence, called the AI Act. The rules, however, won’t take effect until 2025 at the earliest, leaving room for considerable technological evolution in the meantime.
“Europe has positioned itself as a pioneer, understanding the importance of its role as global standard setter,” Thierry Breton, the European commissioner who helped negotiate the deal, said in a statement.
The new law still needs to be approved by the European Parliament, though that is expected to be a formality.
What the EU’s AI Act says
- European policymakers focused on AI’s riskiest uses by companies and governments, including those for law enforcement and the operation of crucial services like water and energy.
- Makers of the largest general-purpose AI systems will face transparency requirements.
- Chatbots and software that creates manipulated images such as “deepfakes” would have to make clear that what people were seeing was generated by AI, according to a report by the New York Times.
- Use of facial recognition software by police and governments would be restricted outside of certain safety and national security exemptions. Companies that violated the regulations could face fines of up to 7% of global sales.
- Policymakers agreed to what they called a “risk-based approach” to regulating AI, under which a defined set of applications faces the most oversight and restrictions. Companies making AI tools with the greatest potential for harm to individuals and society, such as in hiring and education, would need to provide regulators with proof of risk assessments, breakdowns of the data used to train the systems and assurances that the software does not cause harm, such as perpetuating racial biases. Human oversight would also be required in creating and deploying the systems, the NYT report said.
- Some practices, such as the indiscriminate scraping of images from the internet to create a facial recognition database, would be banned outright.
The new regulations will be closely watched globally. They will affect not only major AI developers but also other businesses expected to use the technology in areas such as education, health care and banking.
The law sets a global benchmark for countries seeking to harness the potential benefits of the technology, while trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security.
But India appears to have decided on a different approach to AI.
India not interested in law on AI
In the US, the Joe Biden administration recently issued an executive order focused in part on AI’s national security effects. Britain, Japan and other nations have taken a more hands-off approach, while China has imposed some restrictions on data use and recommendation algorithms.
Currently, India has no specific laws regulating AI.
In fact, IT minister Ashwini Vaishnaw recently informed Parliament that the Centre is not planning to regulate AI’s growth or enact any laws governing it.
“The government is not considering bringing a law or regulating the growth of artificial intelligence in the country,” Vaishnaw said. He acknowledged, however, that AI carries ethical concerns and risks, and said the government has already begun efforts to standardise responsible AI and promote the adoption of best practices.
Officials said the Digital Personal Data Protection Act, 2023 will apply to developers who build and facilitate AI technologies. Since AI developers collect and use massive amounts of data to train their algorithms and enhance their AI solutions, they may be classified as data fiduciaries and held responsible for how personal data is used.
India looking to harness AI
The Centre has taken a proactive stance on technology, particularly AI, intending to position India as a global leader in the field.
The BJP-led government sees AI as a ‘kinetic enabler’ that can be harnessed for better governance, and it believes that stringent regulations could stifle innovation.
Prime Minister Narendra Modi has repeatedly stressed the importance of AI in today’s world and recently said India is looking to “take a giant leap in AI to empower its citizens and is poised to be an active contributor to its evolution”.
India is set to host the Global Partnership on Artificial Intelligence (GPAI) Summit 2023 in New Delhi from December 12-14. India is a co-founder of GPAI, which brings together 28 member countries and the EU to guide the responsible development and use of AI.
Regulation without law
Many experts said that instead of bringing in laws to govern AI, India may opt for market mechanisms, such as principles-based accreditation, to manage the technology’s risks.
The Ministry of Electronics and Information Technology (MeitY) is the executive agency for AI-related strategies and has constituted committees to draw up a policy framework for AI. Niti Aayog, meanwhile, has developed a set of seven responsible AI principles: safety and reliability; equality; inclusivity and non-discrimination; privacy and security; transparency; accountability; and the protection and reinforcement of positive human values.