With consent placed at the core of the Digital Personal Data Protection (DPDP) Rules, companies will need to rethink how they use customer data to train internal artificial intelligence (AI) models, including removing any training data where consent is not granted.
Retraining AI models may involve significant cost and effort. However, where consent is lacking, the data used for already trained models can be deleted from storage repositories without affecting the model’s efficiency, said industry experts.
This comes as companies processing user data — known as data fiduciaries — must clearly explain to users, or data principals, how their personal data will be used. Data fiduciaries must also provide an easy way for them to withdraw consent, according to the rules.
“The real problem is less about the trained models and more about the data behind them. Even if the model’s weights stay, user data stored in training repositories must be deleted on time, as it often remains in version-controlled systems used during training. It increases the governance of data,” said Ashok Hariharan, chief executive officer (CEO), IDfy.
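In practice, what Hariharan describes amounts to purging a user’s rows from the training repository while the model’s weights stay put. A minimal Python sketch of such a deletion follows; all names (TRAINING_REPO, purge_user_data) are hypothetical rather than drawn from any company’s actual stack.

TRAINING_REPO = {
    "user_123": [{"text": "order history ..."}],
    "user_456": [{"text": "support chat ..."}],
}

def purge_user_data(repo: dict, user_id: str) -> int:
    # Remove every training record held for a user who has revoked
    # consent; return the count so the deletion can be logged.
    records = repo.pop(user_id, [])
    return len(records)

deleted = purge_user_data(TRAINING_REPO, "user_123")
print(f"Deleted {deleted} record(s); the trained weights are untouched.")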
The new rules require informed consent along with easy provisions to revoke it for any personal data processing.
Experts believe that when users revoke consent altogether, even after fiduciaries explain the potential risks involved in data usage, models may need to be retrained, spiking costs.
“Revoking consent means deleting data and retraining, which is a nightmare for large models (could spike costs by 20-40 per cent, say some recent industry estimates). DPDP requires granular consent upfront — separate for training versus other uses — with clear, multilingual notices explaining risks,” said Salman Waris, managing partner at Indian law firm TechLegis.
He added that AI model retraining may undergo a shakeup in India, since datasets are pulled from personal sources such as public records, social media or user interactions.
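The “granular consent upfront” Waris refers to, with training permission kept separate from other uses, maps onto a per-purpose consent record. A minimal sketch under that assumption, with all field and purpose names hypothetical:

from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    # One record per user; each processing purpose is consented to
    # separately, so revoking "ai_training" leaves other uses intact.
    user_id: str
    purposes: dict = field(default_factory=dict)

    def grant(self, purpose: str) -> None:
        self.purposes[purpose] = True

    def revoke(self, purpose: str) -> None:
        self.purposes[purpose] = False

    def allows(self, purpose: str) -> bool:
        return self.purposes.get(purpose, False)

consent = ConsentRecord("user_123")
consent.grant("service_delivery")
consent.grant("ai_training")
consent.revoke("ai_training")              # withdraw only the training use
print(consent.allows("ai_training"))       # False
print(consent.allows("service_delivery"))  # True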
The rules further require entities such as ecommerce companies, online gaming platforms and social media intermediaries, to delete a user’s data if the individual has not logged in or used their services for three consecutive years.
Intermediaries are required to provide individuals with a 48-hour notice that their personal data will be deleted unless they log in to the service provider’s platform within this period.
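Taken together, the three-year inactivity threshold and the 48-hour notice describe a simple retention check. A sketch of how such a job might decide what to do per account; the function name and thresholds below illustrate the rule as reported, not any mandated implementation.

from datetime import datetime, timedelta, timezone

INACTIVITY_LIMIT = timedelta(days=3 * 365)   # three years of no logins
NOTICE_WINDOW = timedelta(hours=48)          # the 48-hour deletion notice

def retention_action(last_login, notice_sent, now):
    # Decide what the retention job should do for a single account.
    if now - last_login < INACTIVITY_LIMIT:
        return "keep"                    # user is still active
    if notice_sent is None:
        return "send_48h_notice"         # start the 48-hour clock
    if now - notice_sent >= NOTICE_WINDOW:
        return "delete_personal_data"    # window expired without a login
    return "await_login"                 # notice sent, clock still running

now = datetime.now(timezone.utc)
print(retention_action(now - timedelta(days=1200), None, now))
# -> "send_48h_notice": 1,200 days idle, so the user gets 48 hours to log in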
Companies will need to update and adapt their internal AI roadmaps to comply with the new rules.
“Since one needs data to train models, what should they do if they don’t [have it] after three months? Companies will need to find a way to look at training the models,” said Sandeep Raghuwanshi, head of DevOps & InfoSec, Bureau.
The industry also believes challenges may arise where third parties collect and process user data.
Moreover, with an unbundled consent framework now in place, it remains to be seen how companies training their own models will adapt, given that model training depends on large amounts of data.
“Generally, the more data a model processes, the more effective it becomes. However, [the] rules adopt an unbundled consent-based framework that does not fully account for the complex and large-scale data processing inherent in AI systems, which often involves additional ‘reasonable purposes’ beyond direct consent,” said Kazim Rizvi, founding director, The Dialogue, a public policy advocacy body.
Firms may continue to work on AI-related innovation with the right technology, consent and data protection architecture in place.
“The DPDP framework does not prohibit the use of personal data for AI training, but it does expect companies to do it with clear purpose, legal basis and accountability. With the right consent architecture and data-governance pipelines, firms can continue to innovate at scale while giving users meaningful control. This is an opportunity for industry to build trust in AI systems from the ground up,” said Aman Taneja, partner, Ikigai Law.
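The “consent architecture and data-governance pipelines” Taneja mentions could, in the simplest case, be a filter that gates every training run on a live consent check. A minimal illustrative sketch, with all names hypothetical:

def build_training_set(records, consents):
    # Keep only records whose owner currently consents to AI training.
    return [
        r for r in records
        if consents.get(r["user_id"], {}).get("ai_training", False)
    ]

records = [
    {"user_id": "user_123", "text": "order history ..."},
    {"user_id": "user_456", "text": "support chat ..."},
]
consents = {
    "user_123": {"ai_training": True},
    "user_456": {"ai_training": False},   # consent withdrawn
}

print(build_training_set(records, consents))
# Only user_123's record reaches the training run.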