Back to Blog
Viswanath Chirravuri · SANS Institute Webinar

ML RAG, Fine-tuning and Security

A 1-hour technical webinar tracing the evolution of LLM context techniques, from context-based prompting through Retrieval-Augmented Generation (RAG) and Agentic AI to fine-tuning, along with the security risks at every layer. Presented live at SANS Institute on February 18, 2026.

Date

February 18, 2026

Time

3:00 PM – 4:00 PM EST (1 hour)

Format

Live Technical Presentation

Watch the Recording · Download Slides

Free registration required via SANS portal

Learning Objectives

Architectural Evolution

Understand the progression from basic context-based prompting through Retrieval-Augmented Generation to full model fine-tuning — and why each step exists.

Security Risks at Each Layer

Recognize the distinct technical and security risks introduced at the prompting, RAG, and fine-tuning layers — from prompt injection to training data poisoning.

Secure Fine-Tuning in Practice

Gain practical insights into implementing secure fine-tuning pipelines, including data integrity controls, model supply chain considerations, and deployment hardening.

Webinar Overview

AI systems no longer rely on a single technique to answer questions or perform tasks. Modern production deployments chain together context-based prompting, Retrieval-Augmented Generation (RAG), and fine-tuning — each layer adding capability, and each layer introducing new attack surfaces.

This webinar takes a ground-up approach: starting with why naive prompting is insufficient, progressing through the RAG architectural spectrum (naive, advanced, modular, and agentic), and arriving at fine-tuning — where the model itself is modified. At each stage, we examine not just the technical mechanics but the security threat model that security engineers and ML practitioners need to understand.

The session draws on real-world RAG architectures, threat modeling frameworks for ML systems, and practical hardening techniques applicable to enterprise deployments today.

Topics Covered

  • Context-based prompting fundamentals and limitations
  • Retrieval-Augmented Generation (RAG) architectures and patterns
  • Naive RAG vs. Advanced RAG vs. Modular RAG
  • Agentic AI and autonomous retrieval decisions
  • RAG security threats: document poisoning, indirect prompt injection, data leakage
  • Vector database attack surfaces and hardening strategies
  • Fine-tuning approaches: full fine-tuning, PEFT, LoRA, QLoRA
  • Training data integrity and supply chain risks for fine-tuned models
  • Security controls for MLOps pipelines and model artifact management
  • Threat modeling across the ML context stack

The ML Context Stack

The webinar frames the discussion around three architectural layers, each building on the previous:

01

Context-Based Prompting

The baseline: providing context directly in the prompt. Simple and effective for contained scenarios, but constrained by context window limits and vulnerable to direct prompt injection. No persistent knowledge beyond what is manually included.
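The context-window constraint described above can be sketched in a few lines. This is a minimal illustration, not the webinar's code: the character-based budget and the `build_prompt` helper are hypothetical (real systems count tokens with a model-specific tokenizer), but the failure mode is the same, since anything that does not fit the window is silently dropped.

```python
def build_prompt(question: str, context_docs: list[str], max_chars: int = 4000) -> str:
    """Stuff documents into the prompt until a crude size budget is exhausted.

    Production systems budget in tokens, not characters; characters are used
    here only to keep the sketch dependency-free. Documents that do not fit
    are silently dropped -- the core limitation of pure context-based prompting.
    """
    included, used = [], 0
    for doc in context_docs:
        if used + len(doc) > max_chars:
            break  # context window exhausted; remaining knowledge is lost
        included.append(doc)
        used += len(doc)
    context = "\n---\n".join(included)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```

Note that the prompt also concatenates untrusted document text directly next to the instruction, which is exactly where direct prompt injection enters.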

02

Retrieval-Augmented Generation (RAG)

Dynamic context injection at inference time via vector retrieval. Scales to large corpora and keeps knowledge current without retraining. Introduces new attack surfaces: document poisoning, indirect prompt injection via retrieved content, vector database access control failures, and data leakage across retrieval boundaries. Agentic RAG adds autonomous retrieval decisions, amplifying both capability and risk.
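The retrieval step at the heart of this layer can be reduced to a toy sketch: rank stored documents by embedding similarity to the query and inject the top hits into the prompt. The vectors below are hand-made stand-ins (a real pipeline uses an embedding model and a vector database), but the cosine-similarity ranking is the standard mechanism, and it is also why a poisoned document that embeds close to likely queries gets pulled into context.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec: list[float], index: list[tuple[str, list[float]]], top_k: int = 2) -> list[str]:
    """Return the top_k document texts most similar to the query embedding.

    index is a list of (doc_text, embedding) pairs -- a stand-in for a
    vector store. Whatever ranks highest lands in the model's context,
    trusted or not.
    """
    scored = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc for doc, _ in scored[:top_k]]
```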

03

Fine-Tuning

Adapting model weights on domain-specific data using techniques like PEFT, LoRA, and QLoRA. Encodes knowledge directly into the model rather than retrieving it at runtime. Security risks shift to the training pipeline: data poisoning, model backdoors, supply chain integrity of base models, and artifact provenance. Requires a robust MLSecOps posture.
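The low-rank idea behind LoRA can be shown as a weight-merge sketch: the frozen base weight W is adapted by a trainable product B·A of two small matrices. The function name and the NumPy framing are illustrative only (actual training happens inside a deep-learning framework), but the arithmetic is the published LoRA update, and it makes the supply-chain point concrete: tampering with a tiny adapter file silently changes the effective weights of the whole layer.

```python
import numpy as np

def lora_effective_weight(W: np.ndarray, A: np.ndarray, B: np.ndarray, alpha: float = 16.0) -> np.ndarray:
    """Merge a LoRA adapter into a frozen base weight.

    W: (d_out, d_in) frozen base weight
    A: (r, d_in) and B: (d_out, r) trainable low-rank factors, with r << d_in.
    The effective weight is W + (alpha / r) * B @ A, so only
    r * (d_in + d_out) parameters are trained instead of d_out * d_in.
    """
    r = A.shape[0]
    return W + (alpha / r) * (B @ A)
```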

RAG Security: Key Threat Areas

A significant portion of the webinar focuses on RAG security, examining threats across the full retrieval pipeline:

Document Poisoning

Malicious content injected into the knowledge base to manipulate retrieved context and influence model outputs.

Indirect Prompt Injection

Attack instructions embedded in retrieved documents that hijack the model's behavior without direct user access.

Data Leakage Across Retrieval Boundaries

Unauthorized disclosure of sensitive documents when access controls are not enforced at the retrieval layer.
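The simplest control for this threat is to filter retrieval results against the caller's entitlements before anything reaches the prompt. The document schema below (`text`, `allowed_groups`) is a hypothetical stand-in for whatever metadata your vector store attaches, and this shows post-filtering only; many vector databases also support pre-filtering inside the similarity query itself.

```python
def retrieve_with_acl(query_results: list[dict], user_groups: set[str]) -> list[dict]:
    """Drop documents the caller is not entitled to see BEFORE they reach the prompt.

    query_results: hits from similarity search, each a dict carrying an
    "allowed_groups" set in its metadata. A document survives only if it
    shares at least one group with the requesting user. Enforcing this at
    the retrieval layer -- not in the prompt -- is what prevents leakage.
    """
    return [doc for doc in query_results if user_groups & doc["allowed_groups"]]
```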

Vector Database Attacks

Exploiting embedding similarity thresholds, index poisoning, or misconfigured access policies in vector stores.

Training Data Poisoning

Corrupting fine-tuning datasets to embed backdoors or degrade model behavior on specific inputs.

Model Supply Chain Risks

Integrity failures in base model provenance, adapter weight tampering, and insecure model artifact registries.
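One baseline control against the last two threat areas is digest pinning: verify every model artifact against a known-good hash before loading it. The helper below is a generic sketch (the function name and 8 KiB chunk size are arbitrary choices, not from the webinar); real pipelines layer signing and provenance attestation on top, but a pinned SHA-256 check already defeats silent artifact swaps in a registry.

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Check a downloaded model artifact against a pinned digest before loading.

    Streams the file in chunks so multi-gigabyte checkpoints do not need to
    fit in memory. Returns True only on an exact digest match.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```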

Access the Recording & Slides

The full recording and slide deck are available on the SANS Institute webcasts portal. Free registration is required to access the content.

SANS Institute Webcasts Portal

The recording, slides (PDF download), and session transcript are hosted on the SANS webcasts portal. Create a free SANS account to access all webcast content.


Need Help Securing Your ML Pipeline?

Whether you're hardening a RAG deployment, designing a secure fine-tuning pipeline, or building an ML threat model, I offer hands-on consulting sessions tailored to your architecture.

About Viswanath Chirravuri

GSE #335, CISSP, and CompTIA SME specializing in AI/ML Security, Application Security, and DevSecOps. SANS Associate Instructor (SEC545: GenAI and LLM Application Security) and Webinar Presenter. Currently pursuing D.Eng. in Cybersecurity Analytics at George Washington University (expected August 2026). RSA Conference 2024 & 2026 Speaker.

Learn more →