The UK has a rare window of opportunity. Every major economy is racing to define what safe, sovereign, high-value AI looks like. And while the conversation often gravitates to model training, compute, or foundation models, the real bottleneck is something far more fundamental: our ability to use sensitive data and deploy AI systems with confidence and clarity.
That’s why the UK government’s proposal for a cross-economy AI Growth Lab matters. If designed correctly, it could unlock the most economically valuable and socially important AI applications in the UK – many of which remain stalled today, not because the technology isn’t ready, but because regulation, risk, and organisational hesitation prevent them from ever getting off the ground.
But achieving that impact requires more than another sandbox. It requires rethinking how regulation, data access, and secure infrastructure come together. And it requires learning from the many sandboxes around the world that have failed to scale beyond interesting experiments.
Here’s what the UK must get right.
The most common failure mode we’ve seen in single-regulator sandboxes worldwide is fragmentation. One regulator approves a pilot’s outcomes, another disagrees. Organisations are left navigating contradictory guidance, facing legal risk simply for innovating.
A cross-economy AI Growth Lab is the only model that can solve this. Not by adding more layers of bureaucracy, but by providing a single, coordinated decision at the end of a pilot:
“You can.”
“You cannot.”
Or: “You can, if…”
Without unified regulatory clarity, no organisation can take a use case from pilot to production, no matter how promising the technology. For this to work, the effort must be centrally owned and coordinated across relevant stakeholders – including regulators and possibly even legislators.
The biggest breakthroughs will not come from isolated projects. They will come from AI systems that rely on multi-party, cross-sector datasets.
Use cases of this kind are stressed under today’s regulatory fragmentation. A cross-economy Lab can remove those structural blockers by giving organisations one environment, one process, and one set of guardrails to test what’s possible safely.
Technology is not the limiting factor. Data access is.
Across every sector we work with – public, private, regulated, or otherwise – organisations hesitate to provide access to sensitive datasets because the interpretation of privacy law makes sharing seem unsafe, or even impossible. Even when collaboration is permitted, the processes are slow, manual, and prohibitively expensive.
The result is predictable: UK AI innovators can’t access the data needed to build the very systems that would benefit the UK the most.
The AI Growth Lab can directly address this by embedding privacy-enhancing technologies (PETs) and other secure infrastructure as default components of every pilot.
A privacy-enhanced environment would provide technical guarantees around data use, enabling real-world testing without exposing people or organisations to unnecessary risk.
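PETs span a family of techniques – secure enclaves, federated analytics, differential privacy, and more. As a minimal, hypothetical sketch (not a description of the Lab’s actual tooling), differential privacy illustrates what a “technical guarantee” looks like in practice: an aggregate statistic can be released from a sensitive dataset with calibrated noise that masks any individual’s contribution.

```python
import math
import random


def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count of records matching `predicate`.

    Adds Laplace noise scaled to the query's sensitivity (1 for a count:
    adding or removing one record changes the result by at most 1), so no
    individual's presence can be inferred from the released figure.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace(0, 1/epsilon) noise via the inverse-CDF method.
    u = random.random() - 0.5
    scale = 1.0 / epsilon
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise


# Hypothetical shared dataset: how many patients are over 65?
patients = [{"age": a} for a in (34, 71, 68, 52, 80, 45, 67)]
released = dp_count(patients, lambda p: p["age"] > 65, epsilon=0.5)
```

The released figure is close to the true count (4 here) but noisy enough that no single record’s inclusion can be confirmed – a simple instance of the trade-off a privacy-enhanced pilot environment would manage on participants’ behalf.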
We’ve participated in sandboxes from the UK to Singapore. The lesson is consistent: if a sandbox becomes a collection of academic exercises, industry disengages. If it produces repeatable, regulator-approved blueprints, the market accelerates.
The AI Growth Lab must prioritise use cases that can scale: pilots whose outcomes become repeatable, regulator-approved blueprints rather than one-off experiments.
Public outputs matter. The UK can establish regulatory patterns that hundreds of organisations can adopt without re-running the same pilot over and over again.
Trust is everything, especially when pilots involve sensitive data, models, and cross-organisation interaction. The Lab will need:
Secure infrastructure should be not optional but mandatory: every participant should use controlled infrastructure designed to enforce privacy, security, and non-modification of red-line regulations – and should continue to use the same infrastructure in the “real world” afterwards.
This reduces risk for participants and regulators, and prevents surprises at the end of a pilot.
The Lab will also need clear data-handling rules, including data destruction or return-to-owner requirements, and named individuals responsible for compliance.
If the UK wants consistent regulatory guidance across health, finance, national security and defence, privacy, competition, and more, the Lab cannot be owned by a single regulator.
Only a central government model can mandate coordination and ensure the end result of each pilot is a clear, unified regulatory position.
Anything less will recreate the very barriers the Lab aims to remove.
The UK has a real opportunity to define how responsible, high-value, sovereign AI should be developed. But this won’t happen automatically. It depends on building an AI Growth Lab that is centrally coordinated, built on secure, privacy-enhancing infrastructure, and focused on use cases that scale.
If we get this right, the UK can establish a global standard for how nations responsibly deploy AI on sensitive data, unlocking innovation that is currently impossible, and doing so in a way that protects people, organisations, and public trust.
The technology is ready. The use cases are clear. Now we need the regulatory infrastructure to match.