AI TRiSM (Trust, Risk, and Security Management) is a framework introduced by Gartner to ensure the security, privacy, and governance of AI applications throughout their lifecycle. It aims to address the key risks and compliance challenges associated with deploying artificial intelligence at scale. In simple terms, think of AI TRiSM as a safety net for AI: it makes sure AI behaves properly, is safe to use, and follows the rules, much like a well-guarded playground.
The AI TRiSM framework is structured as a hierarchical security stack, much like a pyramid, where each layer builds upon the foundational security established by the layers beneath it. This layered structure allows for secure and accountable operations from the ground up.
As data-driven operations continue to expand, Gartner has identified key market trends that are shaping how organizations secure, govern, and manage their digital assets. These trends illustrate the growing emphasis on unified platforms, strategic budgeting, and the expansion of runtime security measures.
Teams Shift — AI engineering teams are becoming responsible for privacy and security aspects, directly managing TRiSM strategies.
Budget Authority Changes — The budget for AI privacy, security, and risk initiatives is increasingly under the CTO or CIO, emphasizing strategic alignment with IT governance.
Unified Runtime Inspection Systems — Integrated platforms are emerging to centralize runtime inspection and policy enforcement.
AI Hosting Providers Expand TRiSM Services — Hosting providers are embedding TRiSM functionalities to secure AI models.
Market Consolidation — Mergers and acquisitions are accelerating as AI governance and runtime inspection converge.
Traditional Technology Protection encompasses foundational security measures to safeguard data-driven operations from unauthorized access and threats. These measures are designed to ensure data confidentiality, integrity, and availability across its lifecycle.
Examples:
Duality does not provide Traditional Technology Protection solutions directly. Instead, it relies on industry-standard security mechanisms as part of its platform integration. For example, Duality enforces secure communication, role-based access control (RBAC), and secure API interactions as best practices, leveraging existing technologies from cloud providers and cybersecurity platforms.
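To make this concrete, the sketch below shows the kind of RBAC check this layer depends on: a role-to-permission lookup that gates an API operation. The role names, permission strings, and `handle_request` helper are illustrative assumptions, not part of Duality's platform or any specific cloud provider's API.

```python
# Minimal, hypothetical sketch of role-based access control (RBAC)
# guarding an API operation. Roles and permissions are illustrative only.

ROLE_PERMISSIONS = {
    "data_owner":   {"dataset:read", "dataset:share", "job:approve"},
    "data_analyst": {"dataset:read", "job:submit"},
    "auditor":      {"audit:read"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Return True if the given role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

def handle_request(role: str, permission: str, action):
    """Run `action` only if the caller's role carries the required permission."""
    if not is_authorized(role, permission):
        raise PermissionError(f"role '{role}' lacks '{permission}'")
    return action()

# Example: an analyst may submit a job, but sharing a dataset would be refused.
handle_request("data_analyst", "job:submit", lambda: "job queued")
```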
Infrastructure and Stack encompasses the foundational hardware and software components that support data-driven operations. This includes computing resources, storage systems, networking, and the software frameworks that facilitate data processing and analysis. A robust infrastructure ensures scalability, reliability, and security for applications handling sensitive data.
Examples:
Duality does not itself build or manage infrastructure solutions. Instead, it leverages best practices and technologies from leading cloud and cybersecurity providers. Duality integrates with Confidential Computing environments and enables secure AI workloads through Trusted Execution Environments (TEEs) and cryptographic methods, directly aligning with the infrastructure needs of secure AI systems.
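As a rough illustration of how TEE-backed infrastructure is typically used, the sketch below gates the release of a data key on a verified attestation report: sensitive data is only handed to a workload whose measurement matches an approved value. The report fields, expected measurement, and `release_data_key` function are hypothetical placeholders, not Duality or hardware-vendor APIs.

```python
# Hypothetical sketch of the attestation-gating pattern used with Trusted
# Execution Environments (TEEs). Report format and measurement are illustrative.

import hmac
from dataclasses import dataclass

@dataclass
class AttestationReport:
    enclave_measurement: str   # hash of the code/config running in the TEE
    signer_ok: bool            # whether the hardware signature verified

EXPECTED_MEASUREMENT = "a3f1e9"  # placeholder hash of the approved workload

def release_data_key(report: AttestationReport, wrapped_key: bytes) -> bytes:
    """Release the data key only to an attested, approved enclave."""
    if not report.signer_ok:
        raise ValueError("attestation signature did not verify")
    if not hmac.compare_digest(report.enclave_measurement, EXPECTED_MEASUREMENT):
        raise ValueError("enclave is not running the approved workload")
    return wrapped_key  # in practice the key would be unwrapped for the enclave

report = AttestationReport(enclave_measurement="a3f1e9", signer_ok=True)
key = release_data_key(report, wrapped_key=b"\x00" * 32)
```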
Information Governance (IG) refers to the overarching framework that manages the lifecycle of information within an organization, ensuring its accuracy, accessibility, and compliance with relevant regulations. This encompasses policies, procedures, and technologies that oversee data creation, storage, usage, and disposal, aiming to mitigate risks and enhance the value of information assets.
Examples:
Duality excels in this layer by leveraging Privacy-Enhancing Technologies (PETs) to protect data during model training, inference, and federated analytics. Aligned with GDPR and other privacy regulations, Duality enables computation on sensitive assets from multiple entities.
Duality also utilizes data governance policies to determine exactly who can access and perform computations on data, ensuring clear boundaries and compliance with privacy standards.
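A minimal sketch of such a governance check is shown below: before any computation runs, the request is matched against the policies attached to the dataset. The policy fields, dataset names, and operation labels are hypothetical examples, not Duality's actual policy model.

```python
# Hypothetical sketch of a data-governance policy check: an organization may
# only run the operations that the dataset's policy explicitly allows.

from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    dataset: str
    allowed_orgs: frozenset
    allowed_ops: frozenset   # e.g. aggregate-only analytics, federated training

POLICIES = [
    Policy("hospital_records",
           allowed_orgs=frozenset({"research_lab"}),
           allowed_ops=frozenset({"federated_training", "aggregate_stats"})),
]

def is_permitted(dataset: str, org: str, op: str) -> bool:
    """True if some policy on the dataset allows this org to run this operation."""
    return any(
        p.dataset == dataset and org in p.allowed_orgs and op in p.allowed_ops
        for p in POLICIES
    )

# The lab may train a federated model, but may not export raw records.
assert is_permitted("hospital_records", "research_lab", "federated_training")
assert not is_permitted("hospital_records", "research_lab", "export_raw")
```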
AI Runtime Inspection and Enforcement involves the real-time monitoring and regulation of AI systems during their operation. This ensures that AI models behave as intended, adhere to established policies, and do not produce unintended or harmful outcomes. By continuously inspecting AI outputs and enforcing rules, organizations can promptly detect anomalies, prevent policy violations, and maintain trust in AI applications. Importantly, it also safeguards against potential threats posed by malicious insiders in partner organizations, preventing unauthorized exposure or manipulation of sensitive data during collaborative AI operations.
Examples:
Duality supports runtime enforcement by enabling secure inference and training execution, providing runtime assurance by design: data is always protected, and policies ensure execution stays within predefined, approved rules. This is achieved by combining multiple PETs, such as Fully Homomorphic Encryption (FHE), Federated Learning (FL), Trusted Execution Environments (TEEs), and Differential Privacy (DP), to ensure data is processed securely and without exposure to unauthorized parties, even during cross-institutional collaborations.
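As one simplified illustration of runtime enforcement, the sketch below only answers queries on an approved list and releases differentially private aggregates, so collaborators never see raw values. The query allow-list, epsilon value, and `dp_release` function are illustrative assumptions and are not intended to represent Duality's implementation.

```python
# Illustrative sketch of one runtime control: release only differentially
# private aggregates for approved queries. Parameters are illustrative.

import random

APPROVED_QUERIES = {"count", "sum"}

def dp_release(query: str, values: list, epsilon: float = 1.0,
               sensitivity: float = 1.0) -> float:
    """Run an approved aggregate and add Laplace noise scaled to sensitivity/epsilon."""
    if query not in APPROVED_QUERIES:
        raise PermissionError(f"query '{query}' is not in the approved set")
    true_answer = len(values) if query == "count" else sum(values)
    scale = sensitivity / epsilon
    # Laplace noise sampled as the difference of two exponentials.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_answer + noise

print(dp_release("count", [3, 5, 8]))    # noisy count, safe to share
# dp_release("raw_dump", [3, 5, 8])      # would raise PermissionError
```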
AI Governance encompasses the set of policies, procedures, and ethical guidelines that direct the development, deployment, and management of AI technologies. It aims to ensure that AI systems are transparent, accountable, and aligned with organizational values and societal norms. Effective AI governance mitigates risks associated with AI, such as bias or unintended consequences, and promotes responsible innovation.
Examples:
Duality provides policy-based access control and governance capabilities that ensure models are used ethically and securely. Built-in auditing and traceability features make it easier to validate model usage and compliance.
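To illustrate the traceability idea, the sketch below appends each model invocation to a hash-chained audit log, so any later tampering with a record breaks the chain. The record fields and `record_model_use` helper are hypothetical, not Duality's auditing API.

```python
# Hypothetical sketch of tamper-evident audit logging for model usage:
# each entry includes the hash of the previous entry.

import hashlib, json, time

audit_log = []  # append-only list of records

def record_model_use(user: str, model: str, purpose: str) -> dict:
    """Append a tamper-evident record of a model invocation."""
    prev_hash = audit_log[-1]["hash"] if audit_log else "0" * 64
    body = {"ts": time.time(), "user": user, "model": model,
            "purpose": purpose, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    audit_log.append(body)
    return body

record_model_use("analyst_a", "risk_model_v2", "quarterly credit review")
record_model_use("auditor_b", "risk_model_v2", "compliance spot check")
```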
The AI TRiSM framework represents a multi-layered approach to managing the security, trust, and governance of data-driven operations. By addressing both traditional security measures and emerging AI-specific risks, it provides a structured method to safeguard sensitive information across its lifecycle and usage.
Duality fits naturally within Gartner’s TRiSM framework by providing crucial building blocks that address key areas of data protection, runtime assurance, and governance. While Duality does not cover every aspect of the framework, its innovative technologies, such as PETs and Confidential Computing, enable secure, privacy-preserving data operations that align with TRiSM’s core principles.
Below is a breakdown of the key layers of AI TRiSM, their definitions, and how Duality fits within each of them:
| Layer | Definition | Duality’s Fit |
|---|---|---|
| Traditional Technology Protection | Foundational security for data-driven operations, including encryption, RBAC, and secure APIs | Leverages best practices in secure communication, RBAC, and API protection. |
| Infrastructure and Stack | Hardware and software environments for data processing and AI workloads | Integrates with Confidential Computing environments and uses TEEs for secure processing. |
| Information Governance | Managing data privacy, compliance, and lifecycle security | Strong fit with federated analytics, PETs, and governance policies to define data access. |
| AI Runtime Inspection and Enforcement | Real-time monitoring and policy enforcement for data operations | Supports secure inference and execution through PETs (FHE, TEE, DP) with policy-based access control. |
| AI Governance | Policies and ethical guidelines for AI operations | Strong fit with auditing, traceability, and ethical use enforcement. |