Indian startups and companies conducting business in the EU or catering to EU clientele will need to adhere to the standards set forth in the Act
With the European Union approving its law on artificial intelligence (AI), it has become the first jurisdiction to set a precedent for other countries looking to frame laws governing AI. Experts in India are calling it landmark legislation that establishes a clear regulatory framework.
The Act lays down rules and guidelines for specific risks associated with the use of AI in areas such as biometric authentication, facial recognition, high-risk domains like healthcare, and deepfakes.
Indian startups and companies conducting business in the EU or catering to EU clientele will need to adhere to the standards set forth in the Act.
Further, experts say the Act will increase these companies' cost and compliance burden.
“The regulation will require Indian companies to adjust their AI systems to meet the prescribed standards, undergoing conformity assessments, and implementing risk management measures if they are in the higher risk categorization. The compliance costs and regulatory burden could be significant, especially for smaller firms,” said Somshubhro Pal Choudhary, Co-founder, Bharat Innovation Fund (BIF) – a deep tech-focused venture capital firm.
Though the act will require companies to assess their AI models to determine their risk classification, it also allows sufficient time for compliance, explained Jameela Sahiba, Senior Programme Manager-AI Vertical, The Dialogue.
“The act allows time for compliance, as it will come into force twenty days after its publication in the official journal and will be fully applicable 24 months thereafter,” she said.
Further, the Act’s support for innovation through regulatory sandboxes can be leveraged by Indian startups to develop and test responsible AI solutions before market entry, she added.
Meanwhile, experts are of the opinion that while the risk-based approach may suit the EU, each country will assess its own requirements.
When asked if this would be the case for India as well, Sahiba said, “While it will definitely offer lessons to India, it is important to note that India’s diverse socio-economic context, technological infrastructure, and regulatory framework differ significantly from those of the EU. In conversations so far around potential AI regulation, the Indian government has stressed a ‘user harms perspective’ to AI regulation.”
She further added, “This emphasis on risk categorization establishes a clear regulatory framework. High-risk AI systems are set to face stringent regulations, including rigorous risk assessments, human oversight, and explainability requirements to ensure user trust.”
Further, experts said the law's tough restrictions on high-risk AI systems will help build user trust in the technology.
The regulation defines high-risk systems as those that can potentially harm health, safety, fundamental rights, the environment, democracy, and the rule of law.
“The EU AI Act is a landmark legislation since it is the first real regulation brought out in AI; so far, countries have only been talking of it. It extends the GDPR risk framework and puts the onus of obligations on the providers or developers of high-risk systems irrespective of where these providers are located,” said Jaspreet Bindra, Founder, TechWhisperer.
Experts also noted that the Government of India has emphasised that regulation should not stifle innovation.
The Indian government has also been looking at ways to regulate the harms of AI. It has issued advisories under the IT rules to curb deepfakes and the biases arising from under-tested AI models.
One challenge in the EU's approach, experts said, lies in categorising risk levels under the regulation.
“While the risk-based approach targets a more proportional approach, avoiding broad and stifling regulations, categorising the risk levels can be challenging and subjective, potentially leading to disputes,” said Choudhary of BIF.