Viswanath Chirravuri

Who's Securing AI/ML? A Guide to the Global Landscape

AI/ML security is no longer a niche concern — it's an industry-wide effort. Dozens of organizations across the community, government, and private sectors are publishing frameworks, threat taxonomies, guidelines, and tools to help practitioners defend AI systems. This post maps the key players and what they're actually working on.

The organizations below fall into three categories: community and open-source bodies that publish freely available standards and guidance; government and regulatory agencies establishing policy and national frameworks; and private-sector initiatives driving industry-led standards and tooling.

Community & Open-Source Organizations

Vendor-neutral bodies producing openly available frameworks, threat taxonomies, and guidance that practitioners can use today.

OWASP AI Exchange

The OWASP AI Exchange is a continuously updated, community-driven knowledge base covering AI security threats, controls, and governance. It maps directly to ISO/IEC 42001, NIST AI RMF, and the EU AI Act — making it a practical bridge between frameworks and implementation.

Threat Modeling · Controls Catalog · Governance Mapping · Open Standard

Feeds into the OWASP Top 10 for LLM Applications and is actively maintained by the global OWASP community.

MITRE ATLAS

ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is MITRE's adversarial ML knowledge base, modeled after ATT&CK. It catalogs real-world adversary tactics, techniques, and case studies targeting ML systems — from data poisoning and model evasion to supply chain attacks.

Adversarial ML · Tactics & Techniques · Case Studies · ATT&CK-style

Invaluable for red-teaming AI systems. Actively expanded with new techniques and real incident case studies.

CSA AI Safety Initiative

The Cloud Security Alliance's AI Safety Initiative produces research, guidance, and tools specifically for AI safety and security in cloud environments. Topics include AI model security, responsible AI, AI governance, and LLM application security.

Cloud AI Security · Responsible AI · LLM Security · AI Governance

Secure-ML Framework (Thales Group)

Secure-ML is an open-source framework from Thales Group covering end-to-end security for the ML lifecycle, from data collection and model training through deployment and monitoring. It includes a threat taxonomy and 40+ curated open-source security tools mapped to lifecycle stages, and it was presented at OWASP LASCON 2024.

ML Lifecycle Security · Threat Taxonomy · Open-Source Tools · Pipeline Hardening

Full disclosure: I was the project leader and key contributor at Thales. The framework is freely available on GitHub.

MLCommons — AI Risk & Reliability

MLCommons runs the AI Risk & Reliability working group, which develops benchmarks and evaluation frameworks for measuring safety and robustness in ML models. This work underpins standardized safety evaluation across the industry; MLCommons is also the organization behind the widely cited MLPerf performance benchmarks.

Safety Benchmarks · Model Evaluation · Reliability · Standardization

Particularly relevant for organizations needing quantifiable, reproducible safety metrics for AI models.

Government & Regulatory Bodies

National and intergovernmental agencies setting AI policy, publishing safety standards, and conducting frontier AI research.

UK AI Security Institute (AISI)

The UK's AI Security Institute is one of the first government bodies dedicated exclusively to AI safety research. It conducts evaluations of frontier AI models for dangerous capabilities, develops safety testing methodologies, and publishes findings to inform global policy. The AISI sits within the UK Department for Science, Innovation and Technology.

Frontier AI Evaluation · Safety Testing · Policy · Dangerous Capabilities

The AISI has participated in pre-deployment evaluations of major frontier models from OpenAI, Anthropic, Google DeepMind, and Meta.

NIST AI Risk Management Framework

NIST's AI RMF provides a voluntary framework for managing AI risks across the full lifecycle, from design through deployment. It defines four core functions: Govern, Map, Measure, and Manage. NIST also publishes NIST AI 600-1, a companion profile specifically addressing generative AI risks.

Risk Management · AI Lifecycle · GenAI Profile · Federal Guidance

NIST AI RMF is the de facto standard referenced by US federal agencies and widely adopted by enterprises globally.
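As an illustration of how the four core functions can structure day-to-day work, here is a minimal sketch. This is my own simplification, not from NIST's materials, and the activity entries are hypothetical examples:

```python
# Illustrative sketch only: a minimal AI risk register organized by the
# four NIST AI RMF core functions. Activity entries are hypothetical.
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

risk_register = {fn: [] for fn in RMF_FUNCTIONS}

def log_activity(function: str, activity: str) -> None:
    """Record an activity under one of the four RMF core functions."""
    if function not in RMF_FUNCTIONS:
        raise ValueError(f"Unknown RMF function: {function}")
    risk_register[function].append(activity)

# Hypothetical entries for a team adopting the framework:
log_activity("Map", "Inventory all models exposed to untrusted input")
log_activity("Measure", "Benchmark model robustness against evasion attacks")
log_activity("Manage", "Define rollback procedure for a compromised model")
```

Even a structure this simple makes gaps visible: an empty "Govern" bucket is a signal that policy and accountability work hasn't started.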

EU AI Act

The European Union's AI Act is the world's first comprehensive legal framework for AI. It classifies AI systems by risk level (unacceptable, high, limited, minimal) and imposes conformity requirements on high-risk systems, including mandatory security testing, transparency obligations, and human oversight.

Regulation · Risk Classification · Compliance · Transparency

The Act entered into force in 2024, with obligations phasing in over the following years. High-risk AI systems must meet conformity requirements before EU market access.
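To make the tiering concrete, here is a hedged sketch of how a team might tag an internal AI inventory with the Act's four risk levels. The tier assignments below are my own hypothetical examples, not legal guidance:

```python
# Simplified illustration of the EU AI Act's four risk tiers.
# Tier assignments below are hypothetical examples, not legal advice.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # conformity requirements before market access
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical internal inventory mapping systems to tiers.
system_inventory = {
    "resume-screening-model": RiskTier.HIGH,
    "customer-support-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

def needs_conformity_review(system: str) -> bool:
    """High-risk systems require review before EU market placement."""
    return system_inventory[system] is RiskTier.HIGH
```

The point of the sketch: once every system carries a tier, "which of our models need security testing and human oversight?" becomes a query rather than a debate.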

CISA AI Security Guidance

The US Cybersecurity and Infrastructure Security Agency publishes AI security guidelines for critical infrastructure operators. CISA co-authored joint advisories with international partners on secure AI development and deployment, and maintains resources on AI supply chain security.

Critical Infrastructure · Joint Advisories · Supply Chain · US Federal

Private-Sector Initiatives

Industry coalitions, vendor-backed alliances, and practitioner programs driving standards, tooling, and training.

CoSAI — Coalition for Secure AI

CoSAI is an OASIS Open industry consortium founded by Google, IBM, Microsoft, NVIDIA, PayPal, and others to develop AI security standards and open-source tooling. Its workstreams cover software supply chain security for AI, preparing defenders for AI-powered attacks, and AI security governance for enterprises.

Industry Standards · Open-Source Tools · Supply Chain · AI Governance

CoSAI's work is published as OASIS Open Standards — royalty-free and publicly available.

SANS AI Security Blueprint

SANS Institute's AI Security Blueprint organizes the AI security practitioner's responsibilities into three pillars: Protect AI (securing the AI pipeline and models), Utilize AI (safely integrating AI tools in security workflows), and Govern AI (policy, compliance, and risk management for AI systems). It maps to existing SANS training courses and certifications.

Protect AI · Utilize AI · Govern AI · Practitioner Training

Useful for security teams building an AI security practice — maps directly to skill gaps and training paths.

Google SAIF — Secure AI Framework

Google's Secure AI Framework (SAIF) is a conceptual framework for securing AI systems throughout their lifecycle. It defines six core elements: expanding strong security foundations to AI, extending detection and response to bring AI into the security ecosystem, automating defenses with AI, harmonizing platform-level controls, adapting controls to address AI-specific risks, and contextualizing AI risks in surrounding business processes.

AI Security Framework · Risk Management · Secure-by-Default · Industry Guidance

SAIF is complementary to NIST AI RMF and OWASP AI Exchange — it's Google's opinionated take on operationalizing AI security at scale.

Frontier Safety Framework (Google DeepMind)

Google DeepMind's Frontier Safety Framework defines evaluation criteria for identifying critical capability thresholds in frontier models — particularly around dangerous capabilities like bioweapons assistance or autonomous cyberattacks. It establishes mitigation protocols when models approach those thresholds.

Frontier Models · Dangerous Capabilities · Evaluation · Safety Protocols

Responsible Scaling Policy (Anthropic)

Anthropic's Responsible Scaling Policy commits the company to pausing or slowing training runs if evaluations reveal that a model has crossed defined safety thresholds (AI Safety Levels, or ASLs). It also mandates pre- and post-deployment evaluations and transparency reporting, and it has influenced how other frontier labs approach safety commitments.

Safety Levels (ASL) · Pre-deployment Evaluation · Transparency · Frontier AI

The Bigger Picture

What strikes me most about this landscape is the degree of convergence. OWASP AI Exchange explicitly maps to NIST AI RMF and the EU AI Act. MITRE ATLAS integrates with ATT&CK, which security teams already use. CoSAI references OWASP and NIST. SANS maps its training paths to real practitioner workflows.

This cross-referencing is intentional and valuable — it means you don't have to pick one framework and ignore the rest. A practical AI security program can start with OWASP AI Exchange for threat coverage, MITRE ATLAS for adversarial techniques, NIST AI RMF for governance structure, and CoSAI's tooling for supply chain controls.
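As a sketch of what that layering could look like in practice, consider a single threat viewed through several frameworks at once. The threat name and the role assigned to each framework below are my own illustrative picks, not an official crosswalk:

```python
# Hypothetical crosswalk: one threat, several frameworks.
# All mappings here are illustrative, not authoritative.
threat_crosswalk = {
    "data_poisoning": {
        "owasp_ai_exchange": "threat description and suggested controls",
        "mitre_atlas": "adversarial technique and real-world case studies",
        "nist_ai_rmf": "where it sits in Govern/Map/Measure/Manage",
        "cosai": "supply chain controls for training data provenance",
    },
}

def coverage(threat: str) -> list[str]:
    """List which frameworks a team has consulted for a given threat."""
    return sorted(threat_crosswalk.get(threat, {}))

print(coverage("data_poisoning"))
# → ['cosai', 'mitre_atlas', 'nist_ai_rmf', 'owasp_ai_exchange']
```

A threat with an empty or one-entry coverage list is exactly the kind of blind spot this convergence is meant to eliminate.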

The government layer (AISI, NIST, EU AI Act, CISA) sets the floor. The community layer (OWASP, MITRE, MLCommons) provides the technical depth. The private sector (CoSAI, SANS, frontier lab policies) drives implementation and operational tooling. Together, they form a reasonably coherent — if still evolving — ecosystem.

If you're building or securing AI systems and haven't engaged with at least two or three of these resources, now is the time.