
Was Your AI Agent Hacked? Or Made a Decision You Can’t Explain?

How CISOs Can Secure Agentic AI Using OWASP, FINOS, and NIST Together


The question is no longer “Is our AI secure?”

The real question is “Do we understand what our agents are doing right now, why they are doing it, and can we stop them if they go wrong?”


Agentic AI systems plan, act, invoke tools, manage memory, and delegate work to other agents. When something fails, it is often not a classic vulnerability exploit. It is an autonomous decision that crossed a boundary no one instrumented, governed, or measured.


This is where traditional application security and even early AI governance programs fall short.


Three authoritative publications now define how organizations should approach this problem, each from a different and complementary angle:


  • OWASP Top 10 for Agentic Applications (2026) identifies agent-specific failure modes and provides technical mitigation guidance.

  • FINOS AI Governance Framework (AIR v2) translates AI risk into enterprise-grade controls, ownership, and auditability.

  • NIST AI Risk Management Framework (AI RMF 1.0) provides the management model for governing and measuring AI risk over time.


Used together, they form a complete and defensible approach to securing agentic AI in production.


What each framework is designed to do


OWASP Top 10 for Agentic Applications (2026): threats and technical mitigations


The OWASP Top 10 for Agentic Applications is a security threat taxonomy with mitigation guidance. It focuses on how agentic systems fail in practice and what defenders can do to reduce those risks.


It introduces agent-specific risks such as:

  • ASI-01 Agent Goal Hijack

  • ASI-02 Tool Misuse and Exploitation

  • ASI-03 Identity and Privilege Abuse

  • ASI-06 Memory and Context Poisoning

  • ASI-07 Insecure Inter-Agent Communication

  • ASI-08 Cascading Failures

  • ASI-10 Rogue Agents


For each category, OWASP includes examples, impact analysis, and technical mitigations such as isolation, sandboxing, least-privilege execution, approval gates, and logging.
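

To make one of those mitigations concrete, here is a minimal Python sketch of isolated, time-bounded tool execution. It is illustrative only, not taken from the OWASP document; the command shown, the timeout, and the empty-environment choice are assumptions.

```python
import subprocess

def run_tool_isolated(command: list[str], timeout_seconds: int = 10) -> str:
    """Run a tool in a child process with no inherited environment and a hard timeout.

    A crude form of isolation: the tool cannot read the agent's environment
    variables (API keys, tokens) and cannot run longer than the budget allows.
    """
    result = subprocess.run(
        command,
        capture_output=True,
        text=True,
        timeout=timeout_seconds,   # hard execution limit
        env={},                    # empty environment: no inherited secrets
        check=False,
    )
    if result.returncode != 0:
        raise RuntimeError(f"tool failed: {result.stderr.strip()}")
    return result.stdout

# Hypothetical usage: a read-only lookup tool invoked by the agent
# output = run_tool_isolated(["/usr/bin/whois", "example.com"], timeout_seconds=5)
```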


OWASP’s strength is showing what breaks and how to defend against it at the system and architecture level.


Source: OWASP Top 10 for Agentic Applications (2026), OWASP GenAI Project



FINOS AI Governance Framework (AIR v2): controls and accountability


The FINOS AI Governance Framework is an enterprise governance and risk framework. It focuses on how organizations should approve, operate, and oversee AI systems across operational, security, and regulatory dimensions.


Key characteristics:

  • Explicit AI risk catalog spanning operational, cybersecurity, data, and regulatory risk

  • Defined mitigations mapped to those risks

  • Emphasis on observability, accountability, auditability, and resilience

  • Designed to meet regulated-industry expectations, especially financial services


FINOS mitigations are governance-grade. They define what controls must exist, who owns them, and how they support assurance and compliance.


Source: FINOS AI Governance Framework, AIR v2



NIST AI Risk Management Framework (AI RMF 1.0): risk management and measurement


NIST AI RMF provides a cross-sector, outcome-based risk management model for AI systems. It is voluntary, regulator-neutral, and designed to integrate with existing enterprise risk and cybersecurity programs.


The framework is organized around four core functions:

  • Govern – establish accountability, oversight, and policies

  • Map – understand context, purpose, and risk exposure

  • Measure – assess and track risk outcomes

  • Manage – prioritize and treat risk over time


NIST does not define agent-specific threats or prescribe technical controls. Its value is ensuring organizations define acceptable risk, measure whether controls are effective, and continuously adjust as systems evolve.


Source: NIST AI Risk Management Framework 1.0



Where OWASP, FINOS, and NIST align on agentic AI risk


Autonomy increases blast radius and must be constrained


OWASP identifies autonomy as a core risk multiplier and provides mitigations such as execution limits, approval gates, and constrained planning.


FINOS complements this with governance controls like:

  • Agent authority least-privilege frameworks

  • Defined escalation and approval paths

  • Clear operational boundaries for AI systems


NIST reinforces this by requiring organizations to map autonomy to risk tolerance and measure whether outcomes remain acceptable over time.


FINOS reference: Mitigation MI-18, Agent authority least privilege framework
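

Here is a minimal sketch of what execution limits and an approval gate can look like inside an agent loop, in Python. The step budget, the "high-impact" flag, and the approval callback are hypothetical placeholders, not constructs defined by any of the three frameworks.

```python
from typing import Callable

MAX_STEPS = 20  # hard execution limit per task (hypothetical budget)

def run_agent(plan_next_action: Callable[[], dict],
              execute: Callable[[dict], str],
              require_approval: Callable[[dict], bool]) -> list[str]:
    """Run an agent loop bounded by a step budget and a human approval gate."""
    results = []
    for step in range(MAX_STEPS):
        action = plan_next_action()
        if action.get("type") == "done":
            break
        # Approval gate: high-impact actions pause for a human decision.
        if action.get("impact") == "high" and not require_approval(action):
            results.append(f"step {step}: action denied by approver")
            continue
        results.append(execute(action))
    else:
        # The for/else fires only if the budget ran out without finishing.
        raise RuntimeError("execution limit reached without completing the task")
    return results
```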



Tools and integrations are a primary attack surface


OWASP elevates tool misuse and integration abuse as a top agentic risk and recommends mitigations such as allowlists, isolation, and execution controls.


FINOS provides enterprise-level mitigations for:

  • Tool chain validation and sanitization

  • Governance of third-party services and dependencies

  • Control of agent permissions at runtime


NIST adds the expectation that these risks are not just approved once, but measured and managed continuously.


FINOS reference: Mitigation MI-19, Tool chain validation and sanitization
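

A minimal sketch of a tool allowlist with argument validation follows. The tool names, argument schemas, and limits are assumptions made for illustration, not part of either framework.

```python
ALLOWED_TOOLS = {
    # tool name -> allowed argument keys, each with a simple validator
    "search_docs": {"query": lambda v: isinstance(v, str) and len(v) < 500},
    "get_ticket":  {"ticket_id": lambda v: isinstance(v, str) and v.isalnum()},
}

def validate_tool_call(tool: str, args: dict) -> dict:
    """Reject calls to tools outside the allowlist or with unexpected or invalid arguments."""
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not allowlisted: {tool}")
    schema = ALLOWED_TOOLS[tool]
    unexpected = set(args) - set(schema)
    if unexpected:
        raise ValueError(f"unexpected arguments: {unexpected}")
    for key, check in schema.items():
        if key not in args or not check(args[key]):
            raise ValueError(f"invalid or missing argument: {key}")
    return args

# Hypothetical usage before dispatching an agent's tool request:
# validate_tool_call("search_docs", {"query": "incident runbook"})
```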



Identity, privilege, and secrets are central to agent security


OWASP explicitly calls out identity and privilege abuse in agentic systems and provides technical mitigation guidance.


FINOS treats agents as privileged non-human actors and defines mitigations to:

  • Protect credentials and secrets

  • Enforce scoped and revocable permissions

  • Prevent credential discovery or exfiltration


NIST frames this as an accountability and security outcome that must be monitored throughout the system lifecycle.


FINOS reference: Mitigation MI-23, Agentic system credential protection framework
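

One way to read "scoped and revocable permissions" in code is to issue each agent a short-lived credential bound to explicit scopes, then check expiry and a revocation list on every use. The sketch below is simplified and hypothetical, not a real secrets-management API; the scope names and TTL are assumptions.

```python
import secrets
import time
from dataclasses import dataclass, field

REVOKED: set[str] = set()  # revocation list (in practice, a shared store)

@dataclass
class AgentCredential:
    agent_id: str
    scopes: frozenset[str]
    expires_at: float
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

def issue(agent_id: str, scopes: set[str], ttl_seconds: int = 900) -> AgentCredential:
    """Issue a short-lived credential scoped to the permissions the agent actually needs."""
    return AgentCredential(agent_id, frozenset(scopes), time.time() + ttl_seconds)

def authorize(cred: AgentCredential, required_scope: str) -> None:
    """Fail closed if the credential is revoked, expired, or lacks the required scope."""
    if cred.token in REVOKED:
        raise PermissionError("credential revoked")
    if time.time() > cred.expires_at:
        raise PermissionError("credential expired")
    if required_scope not in cred.scopes:
        raise PermissionError(f"missing scope: {required_scope}")

# Hypothetical usage:
# cred = issue("billing-agent", {"invoices:read"})
# authorize(cred, "invoices:read")
# REVOKED.add(cred.token)  # revocation takes effect on the next check
```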



Observability is required to make mitigations effective


OWASP highlights insufficient logging and observability as a systemic failure in agentic systems and recommends detailed runtime visibility.


FINOS formalizes this through explicit mitigations:

  • AI system observability

  • Agent decision audit and explainability


NIST reinforces that without observability, organizations cannot measure risk or demonstrate control effectiveness.


FINOS references: MI-4, AI system observability; MI-21, Agent decision audit and explainability
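

Here is a minimal sketch of the kind of structured decision record that makes agent actions auditable and explainable after the fact. The field names are assumptions for illustration, not an OWASP or FINOS schema.

```python
import json
import logging
import time
import uuid

logger = logging.getLogger("agent.audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_decision(agent_id: str, goal: str, action: str,
                 inputs: dict, rationale: str, outcome: str) -> str:
    """Emit one structured, append-only record per agent decision."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,
        "goal": goal,            # what the agent was trying to achieve
        "action": action,        # the tool or step it chose
        "inputs": inputs,        # what it acted on
        "rationale": rationale,  # why it chose this action, if available
        "outcome": outcome,      # success, denied, error, escalated
    }
    logger.info(json.dumps(record))
    return record["event_id"]

# Hypothetical usage:
# log_decision("support-agent", "close ticket 4821", "update_ticket",
#              {"ticket_id": "4821"}, "customer confirmed resolution", "success")
```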



Multi-agent systems fail through cascades


OWASP introduces risks related to insecure inter-agent communication and cascading failures, with mitigations focused on isolation and containment.


FINOS explicitly models multi-agent trust boundary violations and provides governance-grade mitigations for segmentation and isolation.


NIST frames cascading failures as systemic risk that must be understood, measured, and governed at the organizational level.


FINOS references: Risk RI-28, Multi-agent trust boundary violations; Mitigation MI-22, Multi-agent isolation and segmentation
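

A minimal sketch of a trust boundary between agents: every inter-agent message is checked against an explicit routing policy and dropped if it crosses a boundary it should not. The agent names, message types, and size limit below are hypothetical.

```python
# Which agent may send which message types to which agent (hypothetical policy).
ROUTING_POLICY = {
    ("research-agent", "writer-agent"): {"summary"},
    ("writer-agent", "review-agent"): {"draft"},
}

def deliver(sender: str, receiver: str, message_type: str, payload: str) -> str:
    """Enforce a trust boundary: only explicitly permitted sender/receiver/type combinations pass."""
    allowed_types = ROUTING_POLICY.get((sender, receiver), set())
    if message_type not in allowed_types:
        raise PermissionError(
            f"blocked: {sender} may not send '{message_type}' to {receiver}"
        )
    if len(payload) > 10_000:  # crude containment: cap message size to limit cascade impact
        raise ValueError("payload exceeds size limit")
    return f"delivered {message_type} from {sender} to {receiver}"

# Hypothetical usage:
# deliver("research-agent", "writer-agent", "summary", "key findings ...")
# deliver("research-agent", "review-agent", "summary", "...")  # raises PermissionError
```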



How security teams should use all three frameworks together


A practical operating model looks like this:


  1. Use OWASP to identify agentic threats and apply technical mitigations at the system and architecture level.

  2. Use FINOS to ensure those mitigations become governed, owned, auditable controls.

  3. Use NIST to measure effectiveness, manage residual risk, and adapt decisions as agent autonomy and scope evolve.


Put simply:


  • OWASP defines threats and technical mitigations

  • FINOS defines enterprise controls and accountability

  • NIST defines risk governance and measurement



Final takeaway


OWASP shows how agentic AI fails and how to mitigate those failures technically.

FINOS ensures those mitigations are governed, owned, and auditable.

NIST ensures leadership can measure and manage AI risk over time.


For organizations deploying agentic AI in production, security requires all three.


Anything less leaves a blind spot: at runtime, in governance, or at the board level.

 
 
 
