Gartner AI TRiSM & Duality

What is AI TRiSM?

AI TRiSM (Trust, Risk, and Security Management) is a framework introduced by Gartner to ensure the security, privacy, and governance of AI applications throughout their lifecycle. It aims to address the key risks and compliance challenges associated with deploying artificial intelligence at scale. In simple terms, think of AI TRiSM as a safety net for AI: it makes sure AI behaves properly, is safe to use, and follows the rules, much like a well-guarded playground.

The Layers of AI TRiSM:

The AI TRiSM framework is structured as a hierarchical security stack, much like a pyramid, where each layer builds upon the foundational security established by the layers beneath it. This layered structure allows for secure and accountable operations from the ground up.

  1. Traditional Technology Protection — Focuses on foundational security measures for operations that use data, including encryption at rest and in transit, secure APIs, and access control to prevent unauthorized access.
  2. Infrastructure and Stack — This involves the hardware, cloud environments, and software layers that host and run AI workloads, ensuring they are protected from threats.
  3. Information Governance — Manages the lifecycle of data used in AI, including data classification, privacy and access controls, and compliance with regulations like GDPR or the EU AI Act.
  4. AI Runtime Inspection and Enforcement — Ensures that AI models are monitored during runtime to detect anomalies, enforce policies, and mitigate risks proactively.
  5. AI Governance — Aims to govern AI activities, ensuring alignment with business policies, accountability, transparency, and ethical use of AI.

Market Trends According to Gartner

As data-driven operations continue to expand, Gartner has identified key market trends that are shaping how organizations secure, govern, and manage their digital assets. These trends illustrate the growing emphasis on unified platforms, strategic budgeting, and the expansion of runtime security measures.

Teams Shift — AI engineering teams are becoming responsible for privacy and security aspects, directly managing TRiSM strategies.

  • Example: Similar to how DevSecOps emerged as a shift in responsibility for security within development teams, AI engineers are now tasked with integrating security measures into AI development lifecycles.

Budget Authority Changes — The budget for AI privacy, security, and risk initiatives is increasingly under the CTO or CIO, emphasizing strategic alignment with IT governance.

  • Example: This is similar to how cyber risk budgets moved to the CISO’s control, reflecting its strategic importance.

Unified Runtime Inspection Systems — Integrated platforms are emerging to centralize runtime inspection and policy enforcement.

  • Example: In cybersecurity, SOC (Security Operations Centers) centralize monitoring and enforcement across distributed systems, much like AI TRiSM aims to centralize oversight for AI processes.

AI Hosting Providers Expand TRiSM Services — Hosting providers are embedding TRiSM functionalities to secure AI models.

  • Example: Cloud providers like AWS and Azure now offer embedded security services, mirroring the growth of TRiSM services for AI models.

Market Consolidation — Mergers and acquisitions are accelerating as AI governance and runtime inspection converge.

  • Example: Much like how endpoint security vendors consolidated into larger cyber platforms, AI governance and runtime platforms are merging to provide unified solutions.

Traditional Technology Protection

What It Means

Traditional Technology Protection encompasses foundational security measures to safeguard data-driven operations from unauthorized access and threats. These measures are designed to ensure data confidentiality, integrity, and availability across its lifecycle.

Examples:

  1. Encryption at Rest and in Transit: Ensures that sensitive data remains encrypted both when stored and during transmission to prevent unauthorized access.
  2. Role-Based Access Control (RBAC): Allows organizations to define permissions based on user roles, minimizing unauthorized access to critical data.
  3. Secure APIs: Protects data exchanges between applications by enforcing authentication and authorization checks.
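
The RBAC example above can be illustrated with a minimal sketch. The role names and permission sets below are illustrative assumptions, not taken from any particular product:

```python
# Minimal role-based access control (RBAC) sketch.
# Roles and permissions here are invented for illustration.

ROLE_PERMISSIONS = {
    "admin":   {"read", "write", "delete"},
    "analyst": {"read"},
    "auditor": {"read"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role is permitted to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# An analyst may read data but not delete it; unknown roles get nothing.
print(is_allowed("analyst", "read"))    # True
print(is_allowed("analyst", "delete"))  # False
```

In practice the permission table would live in an identity provider or policy store rather than in code, but the check itself stays this simple: deny by default, allow only what the role grants.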

How Duality Fits

Duality does not provide Traditional Technology Protection solutions directly. Instead, it builds on industry-standard security mechanisms as part of its platform integration. For example, Duality enforces secure communication, role-based access control (RBAC), and secure API interactions as best practices, leveraging existing technologies from cloud providers and cybersecurity platforms.

Infrastructure and Stack

What It Means

Infrastructure and Stack encompasses the foundational hardware and software components that support data-driven operations. This includes computing resources, storage systems, networking, and the software frameworks that facilitate data processing and analysis. A robust infrastructure ensures scalability, reliability, and security for applications handling sensitive data.

Examples:

  1. Cloud Computing Platforms: Services like AWS, Azure, and Google Cloud provide scalable and secure environments for data storage and processing.
  2. Containerization Technologies: Tools such as Docker and Kubernetes enable efficient deployment and management of applications across diverse computing environments.
  3. Network Security Appliances: Firewalls and intrusion detection systems protect the infrastructure from unauthorized access and cyber threats.

How Duality Fits

Duality does not build or manage infrastructure itself. Instead, it leverages best practices and technologies from leading cloud and cybersecurity providers. Duality integrates with Confidential Computing environments and enables secure AI workloads through Trusted Execution Environments (TEEs) and cryptographic methods, directly aligning with the infrastructure needs of secure AI systems.

Information Governance

What It Means

Information Governance (IG) refers to the overarching framework that manages the lifecycle of information within an organization, ensuring its accuracy, accessibility, and compliance with relevant regulations. This encompasses policies, procedures, and technologies that oversee data creation, storage, usage, and disposal, aiming to mitigate risks and enhance the value of information assets.

Examples:

  1. Compliance Monitoring: Regular audits to verify adherence to regulations such as GDPR or HIPAA, safeguarding data security and privacy.
  2. Data Classification: Assigning sensitivity levels to data to determine appropriate handling procedures.
  3. Incident Response Plans: Preparing protocols for responding to data breaches or other information-related incidents.
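
Data classification in particular lends itself to automation. The following is a toy sketch; the field names and sensitivity levels are invented for illustration:

```python
# Toy data-classification helper: assigns a sensitivity level to a record
# based on which fields it contains. Fields and levels are illustrative.

SENSITIVE_FIELDS = {
    "ssn": "restricted",
    "medical_history": "restricted",
    "email": "confidential",
    "name": "internal",
}

LEVEL_RANK = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def classify(record: dict) -> str:
    """Return the highest sensitivity level among the record's fields."""
    level = "public"
    for field in record:
        candidate = SENSITIVE_FIELDS.get(field, "public")
        if LEVEL_RANK[candidate] > LEVEL_RANK[level]:
            level = candidate
    return level

print(classify({"name": "Ada", "email": "ada@example.com"}))  # confidential
print(classify({"name": "Ada", "ssn": "000-00-0000"}))        # restricted
```

The classification a record receives would then drive its handling procedures, such as which encryption, retention, and access rules apply.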

How Duality Fits

Duality excels in this layer by leveraging privacy-enhancing technologies (PETs) to protect data during model training, inference, and federated analytics. Aligned with GDPR and other privacy regulations, Duality enables computation on sensitive assets from multiple entities.

Duality also utilizes data governance policies to determine exactly who can access and perform computations on data, ensuring clear boundaries and compliance with privacy standards.

AI Runtime Inspection and Enforcement

What It Means

AI Runtime Inspection and Enforcement involves the real-time monitoring and regulation of AI systems during their operation. This ensures that AI models behave as intended, adhere to established policies, and do not produce unintended or harmful outcomes. By continuously inspecting AI outputs and enforcing rules, organizations can promptly detect anomalies, prevent policy violations, and maintain trust in AI applications. Importantly, it also safeguards against potential threats posed by malicious insiders in partner organizations, preventing unauthorized exposure or manipulation of sensitive data during collaborative AI operations.

Examples:

  1. Policy Enforcement: Applying predefined rules to restrict AI behaviors that could lead to compliance breaches.
  2. Anomaly Detection: Identifying unusual patterns in AI outputs that may indicate errors or malicious activity.
  3. Real-time Alerts: Notifying administrators immediately when AI systems deviate from expected behaviors.
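
Anomaly detection of this kind is often built on simple statistical baselines. Here is a minimal sketch using a z-score test; the 2.5-sigma threshold and the sample scores are illustrative assumptions:

```python
import statistics

# Toy runtime monitor: flags model output scores that deviate sharply
# from the batch baseline. The 2.5-sigma threshold is an example choice.

def find_anomalies(scores, threshold=2.5):
    """Return indices of scores more than `threshold` std devs from the mean."""
    mean = statistics.fmean(scores)
    stdev = statistics.pstdev(scores)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [i for i, s in enumerate(scores) if abs(s - mean) / stdev > threshold]

# Eight ordinary scores around 0.5, plus one wildly out-of-range value.
batch = [0.51, 0.49, 0.50, 0.52, 0.48, 0.50, 0.51, 0.49, 9.00]
print(find_anomalies(batch))  # [8]
```

A production monitor would maintain a rolling baseline and feed flagged indices into the alerting pipeline, but the core detect-and-escalate loop looks much like this.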

How Duality Fits

Duality supports runtime enforcement by enabling secure inference and training execution, providing runtime assurance by design. Data is always protected, and policies ensure execution stays within predefined, approved rules. This is achieved by combining multiple PETs, such as fully homomorphic encryption (FHE), federated learning (FL), trusted execution environments (TEEs), and differential privacy (DP), to ensure data is processed securely and without exposure to unauthorized parties, even during cross-institutional collaborations.
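
To make one of these PETs concrete, here is a toy differential-privacy sketch: a count query released with Laplace noise calibrated to sensitivity/epsilon. The dataset, predicate, and epsilon value are invented for illustration and do not reflect Duality's actual implementation:

```python
import math
import random

# Illustrative epsilon-DP count query. A counting query has sensitivity 1,
# so Laplace noise with scale 1/epsilon suffices. Values are examples only.

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(records, predicate, epsilon=1.0):
    """Release a count with epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(0)  # for reproducibility of this demo
patients = [{"age": a} for a in (34, 61, 47, 72, 29, 68)]
result = noisy_count(patients, lambda r: r["age"] > 50)
print(result)  # close to the true count of 3, perturbed by noise
```

Smaller epsilon values add more noise and thus stronger privacy; the analyst sees a useful aggregate while no single record's presence is revealed.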

AI Governance

What It Means

AI Governance encompasses the set of policies, procedures, and ethical guidelines that direct the development, deployment, and management of AI technologies. It aims to ensure that AI systems are transparent, accountable, and aligned with organizational values and societal norms. Effective AI governance mitigates risks associated with AI, such as bias or unintended consequences, and promotes responsible innovation.

Examples:

  1. Bias Audits: Regularly evaluating AI models to detect and correct discriminatory patterns.
  2. Transparency Reports: Documenting AI decision-making processes to enhance understanding and trust.
  3. Regulatory Compliance: Ensuring AI systems meet legal standards and industry regulations.
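
A basic bias audit can be as simple as comparing positive-outcome rates across groups. Below is a minimal demographic-parity sketch; the group data and the 0.1 tolerance are illustrative assumptions:

```python
# Toy bias audit: demographic-parity gap between two groups' approval rates.
# Group outcomes and the 0.1 tolerance are invented for illustration.

def positive_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def parity_gap(group_a, group_b):
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# 1 = model approved, 0 = model denied
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 62.5% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 25% approved
gap = parity_gap(group_a, group_b)
print(f"parity gap: {gap:.3f}")                 # parity gap: 0.375
print("audit flag" if gap > 0.1 else "ok")      # audit flag
```

Real audits use richer fairness metrics (equalized odds, calibration) and statistical significance tests, but all start from this kind of per-group comparison.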

How Duality Fits

Duality provides policy-based access control and governance capabilities that ensure models are used ethically and securely. Built-in auditing and traceability features make it easier to validate model usage and compliance.
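
Tamper-evident auditing is commonly built on hash chaining, where each log entry commits to the one before it. The following sketch shows that general technique, not Duality's actual implementation:

```python
import hashlib
import json

# Illustrative tamper-evident audit trail: each entry stores a hash of the
# previous entry, so any retroactive edit breaks the chain.

def append_entry(log: list, event: dict) -> None:
    """Append an event, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(log: list) -> bool:
    """Recompute the chain; return False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps({"event": entry["event"], "prev": prev_hash},
                             sort_keys=True)
        if (entry["prev"] != prev_hash or
                hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]):
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"user": "analyst1", "action": "run_inference"})
append_entry(log, {"user": "admin", "action": "export_report"})
print(verify(log))  # True
log[0]["event"]["action"] = "delete_data"  # tamper with history
print(verify(log))  # False
```

Because each hash covers the previous one, an auditor only needs the latest hash to detect edits anywhere earlier in the trail.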

Summary

The AI TRiSM framework represents a multi-layered approach to managing the security, trust, and governance of data-driven operations. By addressing both traditional security measures and emerging AI-specific risks, it provides a structured method to safeguard sensitive information across its lifecycle and usage.

Duality fits perfectly within Gartner’s TRiSM framework by providing crucial building blocks that address key areas of data protection, runtime assurance, and governance. While Duality does not address all aspects of the framework, its innovative technologies, such as PETs and Confidential Computing, enable secure, privacy-preserving data operations that align with TRiSM’s core principles.

Below is a breakdown of the key layers of AI TRiSM, their definitions, and how Duality fits within each of them:

| Layer | Definition | Duality's Fit |
| --- | --- | --- |
| Traditional Technology Protection | Foundational security for data-driven operations, including encryption, RBAC, and secure APIs | Leverages best practices in secure communication, RBAC, and API protection. |
| Infrastructure and Stack | Hardware and software environments for data processing and AI workloads | Integrates with Confidential Computing environments and uses TEEs for secure processing. |
| Information Governance | Managing data privacy, compliance, and lifecycle security | Strong fit with federated analytics, PETs, and governance policies to define data access. |
| AI Runtime Inspection and Enforcement | Real-time monitoring and policy enforcement for data operations | Supports secure inference and execution through PETs (FHE, TEE, DP) with policy-based access control. |
| AI Governance | Policies and ethical guidelines for AI operations | Strong fit with auditing, traceability, and ethical use enforcement. |
