Source: https://www.business-standard.com/economy/analysis/ai-adoption-dpdpa-privacy-trust-india-125122400617_1.html
India’s Digital Personal Data Protection Act is reshaping how organisations design and deploy AI, embedding consent, governance and privacy safeguards as the foundation for trust-driven adoption
India’s Digital Personal Data Protection Act (DPDPA), enacted in August 2023, marks a pivotal moment for artificial intelligence (AI) adoption across organisations. By embedding privacy into regulatory DNA, it reshapes how institutions design, deploy, and scale AI systems. It also carves the path for building and enhancing the trust ecosystem for AI adoption.
Key implications for AI adoption
Consent-centric data practices
Under the DPDPA, organisations (now “data fiduciaries”) must secure explicit, purpose-specific consent before collecting personal data—a tough challenge for AI models trained on large, diverse datasets. Securing granular consent for each data point dramatically increases operational complexity. It triggers a need to have AI-ready datasets to support the development, training, and adoption of AI models.
Purpose limitation and data minimisation
The law mandates that data collection be limited to necessary and declared purposes. AI’s traditional model of relying on broad, multi-use datasets now requires careful alignment with declared, limited purposes.
Rights of data principals
Individuals gain rights to access, correct, and erase their data. This “right to be forgotten” poses technical hurdles for AI models trained on personal data, which may need to be retrained to fully exclude erased records. Where models must instead be developed and trained on anonymised data, the underlying algorithms may need to change as well.
Governance for significant data fiduciaries (SDFs)
Entities handling high-risk or sensitive data are classified as SDFs and must conduct Data Protection Impact Assessments (DPIAs), independent audits, and establish robust technical safeguards. These structures introduce disciplined governance frameworks that AI developers must integrate.
Data localisation and cross-border rules
The DPDPA may require certain datasets to remain within Indian boundaries, restricting global data flows essential for multi-national AI models. Organisations must structure data localisation strategies or risk non-compliance.
Privacy considerations for AI models
Anonymisation and pseudonymisation
To comply with minimisation and purpose limitations, AI systems must employ strong anonymisation techniques. However, incomplete practices risk re-identification, calling for state-of-the-art de-identification methods and rigorous assessment.
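As a rough illustration of pseudonymisation (one of the simpler de-identification techniques referred to above), a direct identifier can be replaced with a keyed hash before the record enters a training dataset. This is a minimal sketch under stated assumptions; the function names and record fields are hypothetical, and the salt would in practice live in an access-controlled key store. Note that pseudonymisation alone does not guarantee anonymity, which is why the article stresses re-identification risk.

```python
import hashlib
import secrets

# Illustrative salt; in practice this secret would be managed in a key
# vault and stored separately from the pseudonymised dataset.
SALT = secrets.token_bytes(16)

def pseudonymise(identifier: str, salt: bytes = SALT) -> str:
    """Replace a direct identifier with a stable, non-reversible pseudonym."""
    return hashlib.sha256(salt + identifier.encode("utf-8")).hexdigest()

# Hypothetical record: the name is pseudonymised, other fields are kept.
record = {"name": "Asha Rao", "city": "Pune", "purchase": 1499}
safe_record = {**record, "name": pseudonymise(record["name"])}

# The pseudonym is stable (same input, same token), so records can still
# be joined for analytics or training without exposing the raw identifier.
assert pseudonymise("Asha Rao") == safe_record["name"]
```

Because the hash is salted, an attacker holding only the dataset cannot recover names by brute-forcing common values; but quasi-identifiers such as city and purchase amount remain, which is exactly the residual re-identification risk the assessment step must address.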
Consent management systems
Integrating standardised consent management frameworks, defined under DPDPA rules, is essential—especially for tracking permissions and ensuring efficient rights execution.
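In concrete terms, purpose-specific consent tracking means every processing decision is gated on a recorded, per-purpose grant that the data principal can withdraw. The sketch below is hypothetical: the class and method names are illustrative, not from any DPDPA-mandated API, and a real consent manager would also persist the records and produce audit trails.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentLedger:
    """Minimal illustrative ledger: (principal, purpose) -> (granted?, timestamp)."""
    records: dict = field(default_factory=dict)

    def grant(self, principal: str, purpose: str) -> None:
        self.records[(principal, purpose)] = (True, datetime.now(timezone.utc))

    def withdraw(self, principal: str, purpose: str) -> None:
        # Withdrawal is recorded, not deleted, so the change is auditable.
        self.records[(principal, purpose)] = (False, datetime.now(timezone.utc))

    def may_process(self, principal: str, purpose: str) -> bool:
        granted, _ = self.records.get((principal, purpose), (False, None))
        return granted

ledger = ConsentLedger()
ledger.grant("user-42", "model-training")
assert ledger.may_process("user-42", "model-training")
# Purpose limitation: consent to training does not imply consent to marketing.
assert not ledger.may_process("user-42", "marketing")
ledger.withdraw("user-42", "model-training")
assert not ledger.may_process("user-42", "model-training")
```

The key design point is that the default answer is "no": absent an explicit, purpose-matched grant, processing is refused, which mirrors the Act's consent-first posture.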
Privacy-enhancing technologies (PETs)
Techniques like differential privacy, federated learning, and secure enclaves become fundamental to respect data governance and privacy requirements while maintaining AI effectiveness.
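To make one of these techniques concrete, differential privacy is often implemented by adding calibrated noise to aggregate answers so that no single individual's record can be inferred. Below is a minimal sketch of the Laplace mechanism on a count query, with noise scaled to sensitivity divided by the privacy budget epsilon; the function name and parameters are illustrative, not a production library.

```python
import random

def laplace_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Return a differentially private count via the Laplace mechanism.

    A count query has sensitivity 1 (one person changes the count by at
    most 1); smaller epsilon means a stronger privacy guarantee and more
    noise. Laplace noise is sampled as the difference of two exponentials.
    """
    scale = sensitivity / epsilon
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

# E.g. releasing how many users in a dataset opted in, with epsilon = 0.5.
noisy = laplace_count(true_count=1_000, epsilon=0.5)
```

The noise is unbiased, so aggregate utility is largely preserved while any individual answer is plausibly deniable, which is the trade-off that lets AI effectiveness coexist with the Act's privacy requirements.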
Building trust through compliance
Compliant AI systems foster transparency by showing users how their data is collected, used, and protected. Governance structures such as DPIAs, audits, clear privacy notices, and accessible consent options enhance accountability. When users retain control—able to access, correct, or erase their data—their trust increases. In turn, organisations can confidently deploy AI innovations, opening paths for ethical growth and scalable adoption.
By embedding privacy into AI development, the DPDPA not only mitigates legal risk but also cultivates a foundation of trust—an invaluable asset for organisations aiming to sustain responsible, high-impact AI in India’s evolving data ecosystem.
(Nitin Shah, Partner – Digital Trust and Head – Cyber Security, Resilience and Privacy GRC, KPMG in India)
(Shikha Kamboj, Partner – Digital Trust and National Leader – Data Privacy and Ethics, KPMG in India)
(Disclaimer: These are the personal opinions of the writer. They do not reflect the views of www.business-standard.com or the Business Standard newspaper)