The Rise of Privacy-Enhancing AI: Key Regulatory Changes You Need to Track

AI’s appetite for data is relentless. But the rules around that data are hardening fast. From the EU’s GDPR to India’s new DPDP Act, regulators across the globe are drawing bright red lines around how personal data is collected, shared, and used. For enterprises hoping to scale AI in sensitive or regulated sectors, these changes are more than compliance checklists. They’re architectural constraints and strategic opportunities.

Let’s start with the basics: modern AI depends on high-quality data, often drawn from distributed, siloed, or third-party environments. But the very data that makes AI smart is increasingly the data you’re not allowed to touch. That’s why regulatory pressure is colliding with AI ambition and driving the rise of Privacy-Enhancing Technologies (PETs) as essential infrastructure.

Here’s what’s changed and what it means for AI teams working with sensitive data:

1. GDPR Enforcement Has Teeth Now

The EU’s General Data Protection Regulation (GDPR) isn’t new, but enforcement is entering a new phase. Fines are getting larger, more frequent, and more specific to AI use cases. The €1.2B Meta fine in 2023 was just the start. Authorities now scrutinize:

  • AI systems trained on personal data without proper consent
  • Data transfers to non-EU jurisdictions without adequate safeguards
  • Profiling or automated decision-making without transparency or opt-out

If your AI system uses EU personal data, privacy must be enforced in computation, not just in storage. PETs like federated learning, confidential computing, and homomorphic encryption are among the few tools that meet these conditions by design.

2. The California Privacy Rights Act (CPRA) Targets AI Directly

As of 2023, the CPRA expands on CCPA and gives consumers the right to:

  • Know if they’re subject to automated decision-making
  • Opt out of it
  • Access “meaningful information” about the logic involved

This pushes AI systems, especially those in finance, healthcare, and HR, squarely into explainability and auditability territory. If your model uses customer data in a black-box way, you’re now exposed. Federated analytics and PET-based model training can provide transparency without compromising data control.

3. India’s DPDP Act Introduces Cross-Border Red Tape

India’s new Digital Personal Data Protection (DPDP) Act 2023 bans cross-border data transfers to non-trusted countries, enforces consent-first data use, and imposes strict fiduciary duties on data processors. With India’s population and tech sector, this isn’t a niche concern.

If you’re training or deploying AI in India, especially on financial or health data, you’ll need localized, privacy-preserving computation. That means no centralizing raw data. PETs like federated learning, confidential computing, and homomorphic encryption are now essential.

4. The EU AI Act is About to Redefine “High-Risk” AI

Expected to pass soon, the EU AI Act classifies AI systems into four risk levels. “High-risk” systems (think: biometric identification, critical infrastructure, credit scoring) will require:

  • Robust data governance and quality controls
  • Privacy-by-design mechanisms
  • Logging, traceability, and post-deployment monitoring

The act essentially mandates that sensitive AI systems be auditable and provably compliant. PETs aren’t just useful here; they’re one of the few toolsets that allow secure, collaborative development with full traceability.

Each of these regulations reflects a global shift: data use must now be provably private. Traditional controls like access management, tokenization, or data masking aren’t enough. What’s needed is a way to compute on sensitive data without ever exposing it.

That’s where Privacy-Enhancing Technologies step in. PETs enable enterprises to build AI systems that are compliant by design. For instance:

  • Homomorphic encryption allows analysts to run queries on encrypted data, which is ideal for regulated investigations where even the query itself must stay hidden (a toy sketch follows this list).
  • Federated learning trains models across decentralized datasets without moving or pooling data. This is key for healthcare, banking, or defense environments where data sharing is blocked by law or policy (a minimal training sketch also follows below).
  • Trusted execution environments (TEEs) offer hardware-protected infrastructure for sensitive workloads, from cross-border cancer research to financial fraud detection.
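
To make “computing on encrypted data” concrete, here is a minimal, purely illustrative sketch of additively homomorphic encryption in the style of the Paillier cryptosystem. The hard-coded primes are far too small to be secure, and this is not Duality’s FHE implementation; it only demonstrates the core property that encrypted-query systems build on: you can add numbers you cannot read.

  # Toy Paillier-style additively homomorphic encryption (illustrative only).
  # Tiny hard-coded primes, no key management, not secure for real use.
  import math
  import random

  p, q = 1789, 1861                     # demo primes (far too small in practice)
  n, n_sq = p * q, (p * q) ** 2
  g = n + 1                             # standard simplification for the generator
  lam = math.lcm(p - 1, q - 1)
  mu = pow(lam, -1, n)                  # valid because g = n + 1

  def encrypt(m):
      while True:
          r = random.randrange(1, n)    # fresh randomness per ciphertext
          if math.gcd(r, n) == 1:
              break
      return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

  def decrypt(c):
      L = (pow(c, lam, n_sq) - 1) // n
      return (L * mu) % n

  # Homomorphic property: multiplying ciphertexts adds the plaintexts.
  a, b = encrypt(120), encrypt(37)
  assert decrypt((a * b) % n_sq) == 157

Fully homomorphic schemes go further and support arbitrary computation on ciphertexts, which is what makes compliant, encrypted queries across jurisdictions possible.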

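Federated learning applies the same “move the computation, not the data” idea to model training. Below is a minimal federated-averaging sketch on hypothetical data with a simple linear model; it is not any vendor’s implementation, but it shows that only model parameters, never raw records, leave each site.

  # Minimal federated averaging (FedAvg) sketch with a linear model.
  # Hypothetical data; raw records never leave their "site".
  import numpy as np

  rng = np.random.default_rng(0)

  def local_update(weights, X, y, lr=0.1, epochs=5):
      # One site's contribution: gradient steps on its own data only.
      w = weights.copy()
      for _ in range(epochs):
          grad = 2 * X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
          w -= lr * grad
      return w                                    # only parameters are shared

  # Three sites hold private samples of the same underlying relationship.
  true_w = np.array([2.0, -1.0])
  sites = []
  for _ in range(3):
      X = rng.normal(size=(100, 2))
      y = X @ true_w + rng.normal(scale=0.1, size=100)
      sites.append((X, y))

  # The coordinator only ever sees model weights, never X or y.
  global_w = np.zeros(2)
  for _ in range(20):
      updates = [local_update(global_w, X, y) for X, y in sites]
      global_w = np.mean(updates, axis=0)         # federated averaging

  print(global_w)   # converges toward [2.0, -1.0] without pooling any raw data
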
These technologies aren’t hypothetical. They’re already in production. Duality, for example, enables cross-institutional cancer model training in healthcare, secure intelligence investigation sharing in national security, and secure fraud detection in financial services, all without exposing sensitive data or violating jurisdictional boundaries.

The regulatory message is clear: if your AI strategy relies on pooling sensitive data, you’re on borrowed time. The future belongs to architectures that protect data in use, not just for compliance, but for trust, innovation, and long-term resilience.

If you’re building AI on regulated data, ask yourself:

  • Can your models operate without seeing the raw data?
  • Can you prove your workflows meet evolving regulatory thresholds?
  • Can you collaborate with partners across borders—without violating sovereignty?

The Bottom Line: Privacy Is Now a Compute Problem

Across all these jurisdictions, one pattern holds: regulations increasingly target how data is used, not just how it’s stored. That means AI systems must enforce privacy during model training, analytics, and inference.

At Duality, we build for exactly this reality:

  • Secure Query (FHE): Uses fully homomorphic encryption to enable encrypted, compliant data access across jurisdictions without revealing the query target.
  • Secure Collaborative AI: Combines federated learning, TEEs, and differential privacy (DP) to enable joint model building without sharing raw data (a clipping-and-noise DP sketch follows just below).
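
To illustrate how the differential privacy piece fits, the sketch below clips a site’s model update and adds Gaussian noise before the update is shared, the standard DP recipe for bounding what any single record can reveal. Function and parameter names here are hypothetical, not a product API.

  # Illustrative differential-privacy step for shared model updates:
  # clip the update's norm, then add calibrated Gaussian noise.
  import numpy as np

  def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
      rng = rng or np.random.default_rng()
      norm = np.linalg.norm(update)
      clipped = update * min(1.0, clip_norm / (norm + 1e-12))   # bound sensitivity
      noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
      return clipped + noise        # this, not the raw update, leaves the site

  # Example: privatize one site's local model delta before aggregation.
  local_delta = np.array([0.8, -2.3, 0.5])
  print(privatize_update(local_delta))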

If you’re still centralizing sensitive data to train models, the compliance clock is ticking. Privacy-preserving AI isn’t a feature; it’s a regulatory requirement.

Need to future-proof your AI workflows? We’ll show you how PETs can get you there compliantly, securely, and globally.
