FAQ
Questions about AI AppSec Academy, the three knowledge pillars, content, and how to get involved. Can't find what you're looking for? Reach out on LinkedIn.
About This Site
What is AI AppSec Academy?
AI AppSec Academy is a free knowledge hub for security professionals working at the intersection of AI and application security. It publishes articles, open-source tools, videos, and curated resources — all free, no account needed.
Who creates the content?
All content is created by Viswanath Srinivasan Chirravuri — GSE #335, CISSP, PMP, SANS Associate Instructor (SEC545: GenAI and LLM Application Security), CompTIA Subject Matter Expert, RSA Conference speaker (2024 & 2026), and D.Eng. Cybersecurity Analytics candidate at The George Washington University (expected August 2026).
Is everything really free?
Yes. There are no paywalls, no subscriptions, no accounts required. All articles, open-source projects, tools, videos, and resources are freely accessible to everyone.
What are the three knowledge pillars?
AI AppSec Academy organizes all content under three pillars: (1) AI for AppSec — using AI to enhance security work like code review, threat modeling, and DevSecOps; (2) Securing AI — defending AI systems against adversarial threats, prompt injection, and supply chain risks; (3) Securely Using Vendor AI — safe adoption of OpenAI, Anthropic, Google Gemini, Copilot, and other commercial AI services.
Content & Resources
What topics does the blog cover?
The blog covers LLM security, RAG architecture security, agentic AI design patterns and risks, AI guardrails and validators, ML pipeline security, AI-assisted threat modeling, product security skills for the AI era, and the broader AI security landscape.
What open-source projects are available?
Several open-source projects are published and maintained, including: Secure-ML Framework (ML pipeline security), OWASP Secure Coding Practices (markdown rules), Agentic AI Design Patterns, ML RAG Strategies, and a Claude Code Token Consumption Dashboard. All are on GitHub and free to use.
Are there any videos or recorded talks?
Yes. The Videos page includes recorded SANS webinars on AI/ML security topics, including sessions on RAG strategies, fine-tuning, and GenAI application security. RSA Conference talk recordings are also linked when available.
Are there any custom GPT tools?
Yes. ThreatModelingGPT is a free Custom GPT for AI-assisted threat modeling, available on ChatGPT. ViswanathSChirravuri_GPT is an AI-powered profile assistant for learning about the author's professional background. Both are freely accessible with a ChatGPT account.
How often is new content published?
New articles, open-source updates, and resources are published on a regular basis as research, tools, and talks are completed. Follow on LinkedIn or watch the GitHub repositories for updates.
AI for AppSec
How can AI be used in application security work?
AI can enhance AppSec across multiple workflows: automated secure code review using LLMs, AI-assisted threat modeling and attack surface analysis, intelligent SAST/DAST output triage, AI-driven vulnerability prioritization, and embedding AI security gates into CI/CD pipelines.
What is vibecoding and why does it matter for security?
Vibecoding refers to building software primarily through AI-generated code (e.g., GitHub Copilot, Cursor, Claude). It raises significant security concerns — AI models can generate insecure code, expose secrets in prompts, and create vulnerabilities that automated tools may miss. AI AppSec Academy covers secure practices for AI-assisted development workflows.
What is ThreatModelingGPT?
ThreatModelingGPT is a free Custom GPT that helps security teams perform structured threat modeling using AI. It guides users through STRIDE analysis, generates attack trees, and identifies security requirements based on system descriptions. It is free to use on ChatGPT.
Securing AI
What is the OWASP LLM Top 10?
The OWASP Top 10 for Large Language Model Applications is a list of the most critical security risks in LLM-powered systems. It covers prompt injection, insecure output handling, training data poisoning, model denial-of-service, supply chain vulnerabilities, sensitive information disclosure, insecure plugin design, excessive agency, overreliance, and model theft.
What is prompt injection?
Prompt injection is an attack where malicious content in the input manipulates an LLM into performing unintended actions — bypassing instructions, leaking data, or executing harmful behaviors. It is one of the most critical risks in LLM applications and a primary focus of research published here.
What are agentic AI security risks?
Agentic AI systems (autonomous AI agents that can take actions) introduce new risks including tool-use exploitation, privilege escalation, multi-agent trust boundary violations, memory poisoning, and over-permissive agent scopes. These risks are explored in depth through the Agentic AI Design Patterns project and related blog posts.
What is RAG security?
RAG (Retrieval-Augmented Generation) security covers protecting the knowledge retrieval layer of AI systems — preventing document poisoning, indirect prompt injection through retrieved content, unauthorized access to knowledge bases, and designing secure RAG pipelines.
Securely Using Vendor AI
What are the main risks of using commercial AI APIs?
Key risks include: sending sensitive or PII data to third-party AI providers, poor API key management, lack of audit logging for AI interactions, vendor data retention policies, and over-permissive integration scopes. The Vendor AI Security pillar covers these in detail.
How should organizations govern the use of AI tools like Copilot or ChatGPT?
Organizations should establish AI usage policies covering: what data can be shared with AI tools, approved tools and their configurations, access control and authentication requirements, audit logging requirements, and employee training on safe AI usage.
Which vendor AI services are covered?
Coverage includes OpenAI (ChatGPT, GPT-4, API), Anthropic Claude, Google Gemini, Microsoft Copilot, GitHub Copilot, and the general patterns applicable to any commercial AI service. The focus is on security architecture, not product comparisons.
Contact & Collaboration
How can I contact the author?
Reach out via email at vis@aiappsecacademy.com or connect on LinkedIn at linkedin.com/in/vchirrav. For speaking engagements, collaboration inquiries, or content suggestions, LinkedIn is the best channel.
Can I contribute to the open-source projects?
Yes. All open-source projects are hosted on GitHub and accept contributions via issues and pull requests. See the individual repository READMEs for contribution guidelines.
Can I share or reference this content?
Yes. All content is intended to be shared with the security community. Please attribute the source when referencing articles or tools. For substantial reproduction or commercial use, contact the author first.
Still Have Questions?
Reach out directly via LinkedIn for personalized answers or to suggest new topics.
> Contact on LinkedIn