We’ve all watched GenAI explode, and with it the same old problem: the best models crave the most sensitive data. For most of our customers, that data sits behind strict privacy, compliance, and sovereignty walls. They can’t move it. They can’t expose it. Which meant that, until now, GenAI couldn’t reach it.
That changes today.
We’re excited to share a new collaboration with NVIDIA and Google Cloud, bringing secure, confidential GenAI to enterprise and government environments that were previously locked out. Together, we’ve built a privacy-preserving GenAI stack that lets you fine-tune models, run inference, and perform retrieval-augmented generation (RAG) on sensitive data, safely and compliantly.
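To make that pattern concrete, here is a minimal sketch of what a confidential RAG call can look like. Everything in it is an assumption for illustration, not the actual product API: the AttestationReport type, the toy keyword retriever, and the generate() stand-in exist only to show the shape of the workflow, in which sensitive data is decrypted and queried exclusively inside an attested trusted execution environment (TEE).

```python
from dataclasses import dataclass

# Illustrative only: these types and functions are hypothetical stand-ins,
# not the real SDK. They sketch the pattern of running RAG inside a TEE.

@dataclass
class AttestationReport:
    tee_type: str          # e.g. a confidential VM with GPU support
    measurements_ok: bool  # firmware/driver measurements matched expectations

def verify_attestation(report: AttestationReport) -> bool:
    """Proceed only if the remote environment proved it is a genuine,
    correctly configured trusted execution environment."""
    return report.tee_type == "confidential-vm-gpu" and report.measurements_ok

def retrieve(query: str, corpus: list[str]) -> list[str]:
    """Toy keyword retrieval standing in for a real vector search over the
    sensitive corpus, which is decrypted only inside the TEE."""
    terms = query.lower().split()
    return [doc for doc in corpus if any(t in doc.lower() for t in terms)]

def generate(query: str, docs: list[str]) -> str:
    """Stand-in for model inference served inside the same TEE."""
    return f"Answer to {query!r}, grounded in {len(docs)} retrieved document(s)."

def run_confidential_rag(query: str, corpus: list[str],
                         report: AttestationReport) -> str:
    if not verify_attestation(report):
        # Fail closed: sensitive data is never decrypted outside the TEE.
        raise PermissionError("environment failed attestation")
    return generate(query, retrieve(query, corpus))

if __name__ == "__main__":
    report = AttestationReport(tee_type="confidential-vm-gpu", measurements_ok=True)
    corpus = ["Fraud case summaries for Q3", "Patient cohort statistics"]
    print(run_confidential_rag("fraud patterns", corpus, report))
```

The key design point the sketch captures is the fail-closed attestation gate: nothing downstream runs unless the environment first proves what it is.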
This isn’t just another integration. It’s a full-stack capability that brings together trusted execution, GPU acceleration, and policy-based controls.
As our CEO, Alon Kaufman, put it:
“This collaboration makes secure, end-to-end GenAI workflows possible across multiple stakeholders, data locations, and clouds without compromising speed, trust, or compliance.”
The idea came straight from real-world needs:
Banks trying to fight fraud without sharing customer data.
Hospitals collaborating on cancer models while protecting patient privacy.
Government agencies running AI-driven analysis without revealing sources or queries.
These aren’t proofs of concept; they’re live projects that have been stuck behind technical and regulatory barriers. Until now.
By combining trusted execution on GPUs with policy-based controls, we’re enabling GenAI that respects boundaries: jurisdictional, organizational, and ethical. Whether your data sits across countries, partners, or clouds, you can now run AI where it matters most, without giving up control.
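As a rough illustration of what “policy-based controls” can mean in practice, the sketch below gates a workload on the data owner’s rules before anything is decrypted. The Policy and JobRequest types and the authorize() check are hypothetical names chosen for this example; a real deployment would enforce equivalent rules inside the trusted environment itself.

```python
from dataclasses import dataclass

# Hypothetical policy gate: the names here are illustrative, not a real API.

@dataclass(frozen=True)
class Policy:
    allowed_regions: frozenset   # jurisdictions where compute may run
    allowed_purposes: frozenset  # approved uses, e.g. "fraud-detection"
    require_attested_tee: bool = True

@dataclass(frozen=True)
class JobRequest:
    region: str         # where the job would execute
    purpose: str        # declared purpose of the analysis
    tee_attested: bool  # whether the environment passed attestation

def authorize(job: JobRequest, policy: Policy) -> bool:
    """Check every job against the data owner's policy before any sensitive
    data is decrypted; a failed check means no access at all."""
    if policy.require_attested_tee and not job.tee_attested:
        return False
    return job.region in policy.allowed_regions and job.purpose in policy.allowed_purposes

if __name__ == "__main__":
    policy = Policy(allowed_regions=frozenset({"eu-west"}),
                    allowed_purposes=frozenset({"fraud-detection"}))
    ok = authorize(JobRequest("eu-west", "fraud-detection", tee_attested=True), policy)
    blocked = authorize(JobRequest("us-east", "marketing", tee_attested=True), policy)
    print(ok, blocked)  # True False
```

The point is that jurisdictional and organizational boundaries become explicit, machine-checked rules rather than contractual promises.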
This is the shift we’ve been pushing toward: not just bigger models, but smarter systems, ones that enforce privacy by design, unlock the value of sensitive data, and work in the real world.
If you’re building GenAI in a high-trust, high-stakes environment, this is your new foundation.
Let’s get to work.