OWASP AI Vulnerability Scoring System integrates AIUC-1

AIUC-1 x OWASP Announcement

Emil Lassen & OWASP AIVSS leadership — Feb 27, 2026 — 5 min read

The partnership enables organizations to quickly identify risks associated with AI agent deployments, quantify those risks, and prioritize mitigations with AIUC-1

With agentic AI dominating headlines – from OpenClaw and Moltbook to Meta’s acquisition of Manus – enterprises are racing to deploy autonomous agents as digital workers. But the properties that make agents valuable – such as autonomy, persistence, and broad tool access – are exactly what make vulnerabilities in them so much harder to contain.

The same SQL injection vulnerability creates radically different risk profiles depending on deployment context. In a read-only reporting agent, it’s a data leak. In an autonomous trading agent with broad tool access and persistent memory, it becomes a systemic threat capable of cascading failures across connected systems.

Today, organizations have no standardized way to quantify these differences. Without scoring mechanisms that account for agent-specific factors amplifying the risk surface - such as autonomy, tool access, and memory persistence - security teams struggle to prioritize mitigations or communicate risk severity to stakeholders.

Introducing the OWASP AI Vulnerability Scoring System

The OWASP AI Vulnerability Scoring System (AIVSS) was developed specifically to solve this problem. It provides a comprehensive risk-scoring methodology that extends the Common Vulnerability Scoring System (CVSS) by quantifying how agentic capabilities amplify security risks.

AIVSS produces standardized numerical scores (0-10) that combine technical vulnerability severity with agent-specific characteristics. This lets organizations quickly identify their top risks and prioritize mitigation resources, while also serving as a concrete, exec-level risk reporting tool.

AIVSS scoring is built on ten core security risks unique to autonomous agents:

  1. Agentic AI Tool Misuse - Compromised tool selection, insecure invocation, or lack of oversight
  2. Agent Access Control Violation - Permission escalation, credential mismanagement, or role inheritance exploitation
  3. Agent Cascading Failures - Cross-system exploitation where one compromised agent propagates damage
  4. Agent Orchestration and Multi-Agent Exploitation - Attacks targeting coordination mechanisms between agents
  5. Agent Identity Impersonation - Spoofing of agent or human identities through deepfakes or credential theft
  6. Agent Memory and Context Manipulation - Poisoning persistent memory or exploiting context drift
  7. Insecure Agent Critical Systems Interaction - Unauthorized manipulation of infrastructure, IoT, or operational technology
  8. Agent Supply Chain and Dependency Risk - Compromised models, libraries, or third-party tools
  9. Agent Untraceability - Inability to audit agent decision chains or attribute actions
  10. Agent Goal and Instruction Manipulation - Prompt injection and semantic hijacking of agent objectives

Highlighting amplification factors for AI agent risk

AIVSS recognizes that agentic capabilities don’t just add new risks - they act as force multipliers that expand the blast radius of existing vulnerabilities.

The framework quantifies ten amplification factors on a standardized scale:

  • Execution Autonomy
  • External Tool Control Surface
  • Natural Language Interface
  • Contextual Awareness
  • Behavioral Non-Determinism
  • Opacity & Reflexivity
  • Persistent State Retention
  • Dynamic Identity
  • Multi-Agent Interactions
  • Self-Modification

Each factor receives a score (0.0, 0.5, or 1.0) based on the agent’s actual implementation. These scores feed into a mathematical model that calculates the Agentic AI Risk Score (AARS), which combines with the technical CVSS base score to produce the final AIVSS rating.
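As an illustrative sketch of this pipeline: the ten factor names below are from the list above, but the simple averaging and the equal CVSS/AARS weighting are assumptions for illustration, not the official AIVSS formula (which is defined in the AIVSS specification).

```python
# Illustrative sketch only: the averaging and the equal CVSS/AARS weighting
# below are ASSUMPTIONS, not the official AIVSS combination formula.

ALLOWED_FACTOR_VALUES = {0.0, 0.5, 1.0}

def aars(factors: dict[str, float]) -> float:
    """Agentic AI Risk Score: mean of the ten amplification factors, scaled to 0-10."""
    assert len(factors) == 10, "AIVSS defines ten amplification factors"
    assert all(v in ALLOWED_FACTOR_VALUES for v in factors.values())
    return 10 * sum(factors.values()) / len(factors)

def aivss(cvss_base: float, aars_score: float) -> float:
    """Combine the technical CVSS base score with the AARS (equal weights assumed)."""
    return (cvss_base + aars_score) / 2

factors = {
    "Execution Autonomy": 1.0,
    "External Tool Control Surface": 1.0,
    "Natural Language Interface": 0.5,
    "Contextual Awareness": 0.5,
    "Behavioral Non-Determinism": 0.5,
    "Opacity & Reflexivity": 0.5,
    "Persistent State Retention": 1.0,
    "Dynamic Identity": 0.0,
    "Multi-Agent Interactions": 0.5,
    "Self-Modification": 0.0,
}

# A highly autonomous agent lifts the context around a CVSS 8.8 finding.
print(aivss(cvss_base=8.8, aars_score=aars(factors)))
```

The point of the sketch is the shape of the calculation, not the exact weights: the same technical vulnerability lands at a different final rating depending on how the agent is actually implemented.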

This transforms agent security from qualitative descriptions into quantifiable metrics that CISOs can defend to the board and integrate into existing risk management frameworks.

Integrating AIUC-1 to proactively mitigate risks

AIUC-1 - the standard for AI agent safety, security, and reliability - offers a comprehensive set of controls, enabling security leaders to move quickly from risk identification to risk mitigation. Because both AIUC-1 and AIVSS focus specifically on AI agents, the risks map directly to controls and don't require an additional layer of translation.

The power of AIUC-1 lies in its specificity. Rather than broad control objectives like “implement access controls,” AIUC-1 requires organizations to demonstrate concrete practices: technical controls that prevent tool calls in AI systems from executing unauthorized actions, groundedness filtering to mitigate hallucinations, and monitoring and logging to ensure agent actions are traceable.

The complete mapping between AIVSS and AIUC-1 creates a direct workflow from risk identification to control implementation. When AIVSS scoring reveals high-severity risks in specific categories - for example, Agent Access Control Violation (AIVSS score 9.7) - the mapping points security teams to the precise AIUC-1 requirements that mitigate those risks.
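A mapping like this can be consumed programmatically. In the sketch below, the risk names come from the AIVSS core-risk list, but the control entries are hypothetical placeholders, not real AIUC-1 requirement identifiers; the authoritative mapping is the crosswalk dashboard.

```python
# Sketch of a risk-to-control lookup. The risk names are from the AIVSS core
# risk list; the control entries are HYPOTHETICAL placeholders, not actual
# AIUC-1 requirement IDs. The authoritative mapping is the crosswalk dashboard.

CROSSWALK: dict[str, list[str]] = {
    "Agent Access Control Violation": [
        "example: scoped agent credentials",
        "example: permission boundaries on tool calls",
    ],
    "Agentic AI Tool Misuse": ["example: pre-execution authorization checks"],
    "Agent Untraceability": ["example: tamper-evident logging of agent actions"],
}

def mitigations(risk: str) -> list[str]:
    """Return candidate controls for a scored AIVSS risk category (empty if unmapped)."""
    return CROSSWALK.get(risk, [])
```

In practice, a security team would replace the placeholder entries with the actual AIUC-1 requirements surfaced by the dashboard.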

Explore the AIUC-1 to AIVSS Crosswalk Dashboard

Browse the interactive mapping between AIUC-1 principles and AIVSS Core Risks. Filter by risk category, review coverage heatmaps, and export data.

A new approach for proactive AI agent risk management

With this partnership, organizations can now follow a structured workflow:

Step 1: Quantify risk using AIVSS. Score agent deployments against the ten core risks, incorporating system-specific amplification factors to generate numerical severity ratings.

Step 2: Identify relevant AIUC-1 requirements. Use the mapping to translate high-severity AIVSS findings into specific AIUC-1 requirements that address root causes.

Step 3: Obtain AIUC-1 certification. Pursue full AIUC-1 certification, including required technical testing, to gain third-party validation that controls have been implemented and test results demonstrating that those controls work when agents are deployed in the real world.
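Taken together, the first two steps reduce to a simple triage loop. In the sketch below, the 7.0 severity threshold and the example scores are assumptions for illustration, not AIVSS guidance.

```python
# Hedged sketch of the workflow: score deployments with AIVSS, keep findings
# above a severity threshold, and hand each to the AIUC-1 mapping. The 7.0
# threshold and the example scores are assumptions, not AIVSS guidance.

SEVERITY_THRESHOLD = 7.0

findings = {  # AIVSS scores for one hypothetical agent deployment
    "Agent Access Control Violation": 9.7,
    "Agent Memory and Context Manipulation": 7.4,
    "Agent Untraceability": 4.2,
}

# Highest-severity findings first, ready for control lookup in the crosswalk.
priorities = sorted(
    (risk for risk, score in findings.items() if score >= SEVERITY_THRESHOLD),
    key=lambda risk: findings[risk],
    reverse=True,
)
print(priorities)
```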

This workflow transforms AI agent security from reactive incident response into proactive risk management. Security teams gain the language and metrics to justify budget allocation. Auditors gain clear evaluation criteria. Executives gain confidence that agent deployments meet institutional risk tolerances.

The AIVSS-AIUC-1 integration is available now via the AIVSS website and as part of AIUC-1 Crosswalks.


About AIUC-1: The standard for AI agent safety, security, and reliability. Built with Technical Contributors from trusted institutions including MIT, Stanford, Google Cloud, Cisco, Orrick, and the Cloud Security Alliance. Read more at aiuc-1.com.

About OWASP AIVSS: The OWASP AI Vulnerability Scoring System (AIVSS) is a standardized framework for assessing and quantifying security risks in AI systems, with a specific focus on agentic AI architectures. Read more at aivss.owasp.org.


Authors: Vineeth Sai Narajala (AIVSS), Ken Huang (AIVSS), Emil Bender Lassen (AIUC-1), Abby Shen (AIUC-1)


Leadership & Founding Members

Project Leadership

Current Leaders

Ken Huang - Project Lead

Michael Bargury - Project Lead

Vineeth Sai Narajala - Project Lead

Bhavya Gupta - Project Lead

Founding Members

Names are listed alphabetically by last name.

The OWASP AIVSS project was established through the collaborative efforts of security experts and AI specialists who recognized the need for a standardized vulnerability scoring system for AI systems. We are grateful to the following founding members for their contributions:

Sunil Agrawal
Chief Information Security Officer
Glean

David Ames
Partner
PwC

Michael Bargury
Founder and CTO
Zenity

Joshua Beck
Application Security Architect
SAS

Manish Bhatt
Security Researcher
Amazon Kuiper Security

Mark Breitenbach
Security Engineer
Dropbox

Anat Bremler-Barr
Professor of Computer Science
Tel Aviv University

Siah Burke
HIPAA Security Officer
Siah.ai

David Campbell
AI Security
Scale AI

Ying-Jung Chen
AI Safety Researcher, PhD
Georgia Institute of Technology

Anton Chuvakin
Security Solution Strategy
Google

Jason Clinton
CISO
Anthropic

Adam Dawson
Staff AI Security Researcher
Dreadnode

Ron F. Del Rosario
VP, Head of AI Security
SAP

Leon Derczynski
Principal Research Scientist
NVIDIA

Walker Lee Dimon
AI Security Researcher
MITRE

Marissa Dotter
AI Security Researcher
MITRE

Dan Goldberg
ISO Market Lead
Omnicom

David Haber
CEO
Lakera

Idan Habler
Staff AI/ML Security Researcher
Intuit

Jason Haddix
Founder
Arcanum Information Security

Keith Hoodlet
Director of AI/ML & AppSec
Trail of Bits

Ken Huang
AIVSS Project Lead
OWASP

Chris Hughes
CEO
Aquia

Charles Iheagwara
AI/ML Security Leader
AstraZeneca

Krystal Jackson
Researcher
Center for Long-Term Cybersecurity, UC Berkeley

Sushmitha Janapareddy
Director, Security Integrations
American Express

Rob Joyce
Former Cybersecurity Director of NSA, Advisor to PwC
PwC

Diana Kelley
CISO
Noma Security

Prashant Kulkarni
Lead AI Security Research Engineer
Google Cloud

Mahesh Lambe
Founder
MIT, Unify Dynamics

Edward Lee
Vice President, Lead AI Security
JP Morgan

Nate Lee
CEO
Cloudsec.ai

Vishwas Manral
CEO
Precize.ai

Daniela Muhaj
Executive-in-Residence for Research & Development
AI 2030

Vineeth Sai Narajala
Application Security
AWS

Om Narayan
AI Security Researcher
AWS

Advait Patel
Senior Site Reliability Engineer (DevSecOps + Cloud + AIOps)
Broadcom, IEEE

Alex Polyakov
CEO
adversa.ai

Ramesh Raskar
Professor & Director
MIT Media Lab

Tal Shapira
Co-Founder & CTO
Reco AI

Akram Sheriff
Senior AI/ML Software Engineering Leader
Cisco

Samantha Siau
Security and Compliance
Anthropic

Kevin Simmonds
Partner on AI Offensive Security
PwC

Martin Stanley
NIST AI RMF Lead
Independent

Omar A. Turner
General Manager of Security
Microsoft

Apostol Vassilev
AI Research Team Supervisor
NIST

Matthew Versaggi
AI Fellow
White House Presidential Innovation Fellow

David Webb
Agency Cybersecurity Officer
Cybersecurity and Infrastructure Security Agency

Dennis Xu
Research VP, AI
Gartner

Xiaochen Zhang
Executive Director and Chief Responsible AI Officer
AI 2030

Recognition

We extend our gratitude to all founding members who have contributed to establishing this crucial framework for AI security assessment. Their vision and dedication have been instrumental in shaping the AIVSS project.

Get Involved

Interested in contributing to the AIVSS project? We welcome new contributors and leaders. Please see our Contribution Guidelines for more information on how to get involved.



AIVSS Calculator Demo

Try the AIVSS Calculator

Experience the AIVSS scoring system in action with our interactive calculator. This demo allows you to:

  • Calculate vulnerability scores for AI systems
  • Understand the impact of different security factors
  • Generate detailed reports based on your inputs

Distinguished Review Board

The OWASP AIVSS Distinguished Review Board comprises world-renowned cybersecurity leaders, former government officials, and industry pioneers who provide strategic guidance and expert oversight for the AI Vulnerability Scoring System framework.

Rob Joyce
Advisor to PwC and OpenAI
Former Special Assistant to the President and Cybersecurity Coordinator

Jason Clinton
Deputy CISO
Anthropic

Amy R. Steagall
Chief Information Security Officer
Stanford University

Martin Stanley
AI Risk Management Framework Lead
NIST

Apostol Vassilev
Research Supervisor
NIST

Andrew Coyne
CISO, Banner Health
Former CISO, Mayo Clinic

Kevin Rocque
Managing Director/Executive Vice President
Global Technology Risk Officer, TD Bank

Jeff Williams
Former Global OWASP Chair
Founder and CTO, Contrast Security

Michael Tran Duff
University Chief Information Security and Data Privacy Officer
Harvard University


The Distinguished Review Board provides expert guidance to ensure the AIVSS framework meets the highest standards of rigor and practical applicability in AI security assessment.


Announcements



OWASP AIVSS: The Kickoff Meeting

OWASP AIVSS Project December 2026 Update & Important Deadlines


Publications

AIVSS Scoring System For OWASP Agentic AI Core Security Risks v0.5


📄 Download PDF: AIVSS v0.5

Overview

This foundational document introduces the OWASP AI Vulnerability Scoring System (AIVSS), a standardized framework for assessing and quantifying security risks in AI systems, with a specific focus on agentic AI architectures. Version 0.5 represents the initial release of our comprehensive scoring methodology.

Key Features

  • Standardized Risk Assessment: Provides a consistent methodology for evaluating AI vulnerability severity across different systems and contexts
  • Agentic AI Focus: Tailored specifically for the unique challenges and risk vectors present in autonomous AI agents
  • Industry Integration: Designed to complement existing security frameworks while addressing AI-specific vulnerabilities
  • Practical Implementation: Includes actionable guidelines and scoring criteria for security professionals

What’s Inside

  • Scoring Framework: Detailed methodology for calculating AIVSS scores based on multiple risk factors
  • Risk Categories: Comprehensive coverage of AI-specific vulnerabilities including model manipulation, data poisoning, and agent misalignment
  • Assessment Guidelines: Step-by-step instructions for conducting AIVSS evaluations
  • Case Studies: Real-world examples demonstrating the application of AIVSS in various scenarios
  • Integration Guidance: Best practices for incorporating AIVSS into existing security workflows

Target Audience

This document is designed for security professionals, AI developers, risk assessors, and organizations seeking to implement robust security measures for their AI systems, particularly those involving autonomous agents.


This publication is actively maintained by the OWASP AIVSS project team. For updates, contributions, or questions, please visit our project repository.