> cat resources/index.md
Free Resource Library
Everything here is free. No account, no paywall. Knowledge organized across three pillars covering the full scope of AI security and application security work.
Open access — always free
AI for AppSec
Leverage AI to make your AppSec work faster and more effective
Using AI and large language models to enhance application security workflows — from AI-assisted code review and automated threat modeling to smarter vulnerability triage, SAST tuning, and AI-augmented DevSecOps pipelines.
AI-Assisted Secure Code Review
Using LLMs to identify security vulnerabilities in code, automate code review at scale, and give developers real-time secure coding feedback.
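As a minimal sketch of this idea, a diff can be wrapped in a structured review prompt before being sent to an LLM. The template, vulnerability categories, and size limit below are illustrative assumptions, not a recommended standard.

```python
# Sketch: wrap a code diff in a structured security-review prompt.
# Template wording and the max_chars cutoff are illustrative only.

REVIEW_PROMPT = """You are a security code reviewer.
Analyze the diff below for vulnerabilities (injection, broken access
control, hardcoded secrets, unsafe deserialization). For each finding,
report: file, line, CWE ID, severity, and a suggested fix.

<diff>
{diff}
</diff>"""

def build_review_prompt(diff: str, max_chars: int = 12000) -> str:
    """Truncate oversized diffs so the request stays within context limits."""
    if len(diff) > max_chars:
        diff = diff[:max_chars] + "\n[diff truncated]"
    return REVIEW_PROMPT.format(diff=diff)
```

Keeping the instructions and the untrusted diff clearly separated (here with `<diff>` tags) also makes downstream injection filtering easier.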
AI-Powered Threat Modeling
Augmenting threat modeling with LLMs — including ThreatModelingGPT, automated STRIDE analysis, and AI-driven attack surface enumeration.
Vulnerability Triage & Prioritization
AI-driven approaches to SAST/DAST output analysis, false positive reduction, and risk-based vulnerability prioritization.
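One simple form of risk-based prioritization is to adjust a finding's base severity with context signals before sorting. The weights and field names below are arbitrary placeholders for illustration, not a published scoring model.

```python
# Illustrative risk scoring for SAST/DAST findings: combine base CVSS
# with reachability and exposure signals, then sort. Weights are
# placeholder assumptions.

def risk_score(finding: dict) -> float:
    score = finding.get("cvss", 0.0)          # base CVSS score (0-10)
    if finding.get("reachable"):              # vulnerable code path is reachable
        score *= 1.5
    if finding.get("exploit_available"):      # public exploit exists
        score *= 1.3
    if finding.get("internet_facing"):        # exposed attack surface
        score *= 1.2
    return round(score, 2)

def triage(findings: list[dict]) -> list[dict]:
    """Highest-risk findings first."""
    return sorted(findings, key=risk_score, reverse=True)
```

In this scheme a reachable, exploitable medium can outrank an unreachable critical, which is the point of context-aware triage.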
AI in DevSecOps Pipelines
Integrating AI security tools into CI/CD, using LLMs as security gates, and automating security policy enforcement.
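A security gate in CI can be as simple as a script that fails the build when scan output contains findings at or above a severity threshold. The flat JSON shape below is a simplification for illustration; real SARIF output is more deeply nested.

```python
# Sketch of a CI security gate: fail the pipeline when scan results
# contain findings at or above a threshold. Result shape is simplified.

import json

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(results_json: str, fail_at: str = "high") -> bool:
    """Return True if the build should pass, False if it should fail."""
    findings = json.loads(results_json)
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings
                if SEVERITY_RANK.get(f.get("severity", "low"), 1) >= threshold]
    return len(blocking) == 0
```

In a pipeline step, the script would exit nonzero when `gate(...)` returns False, blocking the merge.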
Vibecoding & Secure AI Development
Security implications of AI-generated code, guardrails for vibe coding workflows, and building secure products with AI coding assistants.
Custom Security GPTs & Agents
Building purpose-built AI tools for security work — threat modeling GPTs, red team assistants, and security policy chatbots.
Securing AI
Defend AI systems against adversarial threats and misuse
Protecting AI systems and ML pipelines from adversarial attacks, exploitation, and misuse — covering OWASP LLM Top 10, prompt injection, RAG security architecture, model supply chain risks, agentic AI vulnerabilities, and end-to-end MLSecOps.
LLM Application Security (OWASP Top 10)
Prompt injection, insecure output handling, training data poisoning, model denial-of-service, supply chain vulnerabilities, and the full OWASP LLM Top 10.
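A first line of defense against prompt injection is a heuristic screen for instruction-override phrases. The patterns below are common examples, not an exhaustive list; pattern matching alone cannot catch all injections and should only be one layer.

```python
# Heuristic prompt-injection screen: flag inputs containing common
# instruction-override phrases before they reach the model.

import re

INJECTION_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"disregard (the|your) (system|previous) prompt",
    r"you are now (in )?developer mode",
    r"reveal (your|the) system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```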
Agentic AI Security
Securing autonomous AI agents — tool-use exploitation, privilege escalation, multi-agent trust boundaries, memory poisoning, and safe agent design patterns.
RAG Security & Architecture
Securing Retrieval-Augmented Generation systems — document poisoning, indirect prompt injection, access control for knowledge bases, and secure RAG design.
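Access control for knowledge bases can be sketched as filtering retrieved chunks by the caller's entitlements before they ever enter the prompt, so the model never sees documents the user could not read directly. The chunk shape and group fields below are hypothetical.

```python
# Sketch of access-controlled retrieval for RAG: drop candidate chunks
# the calling user is not entitled to see. Field names are hypothetical.

def filter_retrieved(chunks: list[dict], user_groups: set[str]) -> list[dict]:
    """Keep only chunks whose ACL intersects the user's groups."""
    return [c for c in chunks
            if set(c.get("allowed_groups", [])) & user_groups]
```

Filtering at retrieval time (rather than asking the model to withhold content) keeps the trust boundary outside the prompt.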
ML Pipeline Defense (Secure-ML)
End-to-end security for ML pipelines — data poisoning, model tampering, supply chain attacks on ML dependencies, and inference-time defenses.
AI Red Teaming
Adversarial testing methodologies for AI systems — jailbreaking, adversarial examples, model extraction, and structured red team exercises.
AI Observability & Guardrails
Monitoring LLM applications in production — guardrails frameworks, input/output validation, anomaly detection, and safety classifiers.
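Output validation, one of the guardrails mentioned above, can be sketched as requiring the model's response to parse as JSON with an expected set of keys before downstream code consumes it. The key names used in the test are hypothetical.

```python
# Output-validation sketch: reject model responses that are not valid
# JSON objects or that lack required keys.

import json

def validate_llm_output(raw: str, required_keys: set[str]) -> dict:
    """Parse model output as JSON and reject missing keys."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError("model output is not valid JSON") from exc
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data
```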
Securely Using Vendor AI
Adopt commercial AI services safely and compliantly
Guidance for organizations adopting commercial AI services — OpenAI, Anthropic Claude, Google Gemini, Microsoft Copilot, and others. Covering data governance, API security, access control, audit logging, and compliance considerations.
API Key Management & Access Control
Secure API key storage, rotation, scoping, and least-privilege access for AI service integrations.
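The storage side of this can be sketched as loading keys from the environment at runtime rather than hardcoding them, failing fast with a clear error when a key is absent. The variable name is an example, not a vendor requirement.

```python
# Sketch: resolve an AI API key from the environment at runtime and
# fail fast when it is missing. Env var name is an example.

import os

def get_api_key(env_var: str = "AI_API_KEY") -> str:
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set; inject it from a secrets manager "
            "or CI secret store, never commit it to source control"
        )
    return key
```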
Data Privacy & Governance
What data goes to vendor AI APIs, data residency, opt-out configurations, PII scrubbing, and organizational data governance policies.
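PII scrubbing before text leaves for a vendor API can be sketched with a few replacement regexes. These patterns cover only obvious cases (emails, US-style SSNs, 16-digit card numbers); production scrubbing needs a dedicated DLP tool.

```python
# Illustrative PII scrubber applied before text is sent to a vendor API.
# Patterns are deliberately narrow examples, not production coverage.

import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scrub(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```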
Security Review of AI Integrations
Threat modeling vendor AI integrations, reviewing third-party AI plugins, and assessing supply chain risk in AI-powered products.
Copilot & AI Coding Tool Security
Security implications of GitHub Copilot, Cursor, and similar tools — secret leakage in prompts, insecure code suggestions, and safe usage policies.
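A pre-prompt secret screen for coding assistants can be sketched as blocking any context that contains credential-shaped strings before it leaves the machine. The patterns below are examples (AWS access key ID prefix, PEM private key header, a generic key assignment).

```python
# Sketch of a pre-prompt secret screen: block context containing
# credential-shaped strings before it is sent to an AI coding tool.

import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S{16,}"),    # generic key assignment
]

def contains_secret(snippet: str) -> bool:
    return any(p.search(snippet) for p in SECRET_PATTERNS)
```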
Audit Logging & Monitoring
Logging AI API calls, monitoring for abuse or data exfiltration, and building observability into AI-powered application layers.
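One way to log AI API calls without creating a second copy of sensitive data is to record a hash of the prompt rather than the prompt itself. The record fields below are illustrative.

```python
# Audit-logging sketch: record each AI API call with a prompt hash
# instead of raw prompt text. Field names are illustrative.

import hashlib
import json
import logging
import time

audit_log = logging.getLogger("ai.audit")

def log_ai_call(user: str, model: str, prompt: str, response_tokens: int) -> dict:
    record = {
        "ts": time.time(),
        "user": user,
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
        "response_tokens": response_tokens,
    }
    audit_log.info(json.dumps(record))
    return record
```

Hashing still lets investigators match a suspect prompt against the log after the fact, without retaining its contents.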
Compliance & Policy Frameworks
Using AI services under SOC 2, ISO 27001, HIPAA, and GDPR constraints — vendor agreements, data processing addenda, and AI usage policies.
> ./contribute --open-source
More content is always being added
Follow on LinkedIn or watch the GitHub repos to get notified when new articles, tools, and resources are published.