> ls ~/open-source --contributions
Open-Source Contributions
Building security frameworks, tools, and research in the open. Contributing to the community that secures AI/ML systems and applications at scale.
Secure-ML Framework
thalesgroup/secure-ml
A comprehensive framework for securing machine learning systems across the entire ML lifecycle. Built as a reference for industry practitioners, it provides security policies, threat models, privacy-preserving techniques, and a curated catalog of 40+ open-source security tools for ML.
- >ML Security Policy framework covering datasets, models, platforms, and compliance
- >Privacy-preserving techniques: Differential Privacy, Federated Learning, Homomorphic Encryption, SMPC
- >40+ curated open-source tools for adversarial security, LLM security, bias/fairness, and monitoring
- >ML threat taxonomy covering training data, model, and inference attack surfaces
- >Agentic AI threat comparison (CSA vs OWASP frameworks)
- >Presented at OWASP LASCON 2024
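Of the privacy-preserving techniques listed above, differential privacy is the simplest to sketch. The snippet below is a minimal pure-Python illustration of the Laplace mechanism for releasing a private count; it is not taken from the secure-ml repo, and the function names are my own:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy (Laplace mechanism)."""
    scale = sensitivity / epsilon  # noise grows as the privacy budget shrinks
    return true_count + laplace_noise(scale)

random.seed(0)
noisy = private_count(1000, epsilon=0.5)
```

Smaller `epsilon` means a tighter privacy guarantee and therefore more noise; the released value is unbiased, so averages over many queries still converge to the truth.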
OWASP Secure Coding Practices (Markdown)
vchirrav/owasp-secure-coding-md
A machine-readable, Markdown-optimized implementation of the OWASP Secure Coding Practices Quick Reference Guide (v2.1), extended with modern security domains. Designed specifically for AI agents (Claude Code, GitHub Copilot) and LLMs to enable token-efficient, context-aware security audits and code generation.
- >22 modular rule files covering OWASP, API Security, Cloud/K8s, CI/CD, Supply Chain, IaC, and Secrets Management
- >Each rule follows a consistent pattern: Identity, Rule, Rationale, Implementation, Verification, Examples
- >Optimized for Just-In-Time context injection into LLM workflows without exhausting token budgets
- >Seamless integration with Claude Code via CLAUDE.md persona configuration
- >Supports checklist-mode auditing with Rule ID citations (e.g., INPUT-01, DOCKER-05)
- >Covers Dockerfile security, software supply chain (SBOM, signing, provenance), and memory management
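The Just-In-Time injection idea above can be sketched in a few lines: pull only the rules relevant to the code under audit into the prompt, rather than the full 22-file corpus. The rule IDs below (INPUT-01, DOCKER-05) appear in the repo, but the rule texts and the prompt shape here are illustrative stand-ins:

```python
# Illustrative rule texts only; the repo's actual rule files are richer
# (Identity, Rule, Rationale, Implementation, Verification, Examples).
RULES = {
    "INPUT-01": "Validate and canonicalize all external input before use.",
    "DOCKER-05": "Run containers as a non-root user (USER directive).",
}

def build_audit_prompt(code_snippet: str, rule_ids: list[str]) -> str:
    """Inject only the requested rules, keeping the token budget small."""
    selected = [f"[{rid}] {RULES[rid]}" for rid in rule_ids]
    return (
        "Audit the following code against these rules. "
        "Cite findings by Rule ID.\n\n"
        + "\n".join(selected)
        + f"\n\n```\n{code_snippet}\n```"
    )

prompt = build_audit_prompt("FROM ubuntu:latest", ["DOCKER-05"])
```

Selecting rules per task is what keeps context-window usage flat as the rule corpus grows, and citing by Rule ID makes audit output diffable and machine-checkable.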
ML Research: Local LLM Fine-Tuning
vchirrav/ml_research
Hands-on research into local LLM fine-tuning using HuggingFace Transformers, PEFT/LoRA adapters, and Unsloth for efficient training. Demonstrates the full pipeline from fine-tuning to GGUF conversion and local deployment via Ollama, optimized for NVIDIA Blackwell GPUs.
- >Fine-tuning TinyLlama-1.1B with 4-bit QLoRA on cybersecurity domain data
- >LoRA adapter training with rank-16, targeting all attention and MLP projection layers
- >Full pipeline: fine-tune → merge adapters → convert to GGUF (Q8_0) → deploy with Ollama
- >Optimized for NVIDIA RTX 5060 (Blackwell sm_120) with bfloat16 precision
- >Training completed in ~24 seconds with 64.57% mean token accuracy
- >Reproducible setup using uv package manager on Windows
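The adapter-merge step in the pipeline above comes down to one matrix identity: LoRA trains two small matrices B (d_out x r) and A (r x d_in) and merges them into the frozen base weights as W' = W + (alpha/r) * B A. A tiny pure-Python sketch of that merge, with toy dimensions rather than the rank-16 setup used in the repo:

```python
def matmul(X, Y):
    """Plain-Python matrix multiply for lists of lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_merge(W, A, B, alpha: float, r: int):
    """Merge a LoRA adapter into base weights: W + (alpha / r) * B @ A."""
    delta = matmul(B, A)  # low-rank update, shape d_out x d_in
    return [[w + (alpha / r) * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# 2x2 base weights with a rank-1 adapter: B is 2x1, A is 1x2.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]
A = [[0.5, 0.5]]
merged = lora_merge(W, A, B, alpha=2.0, r=1)  # -> [[2.0, 1.0], [2.0, 3.0]]
```

Because only B and A are trained (r much smaller than d_out and d_in), the adapter is a fraction of the base model's size, and merging it back restores a single dense weight matrix ready for GGUF conversion.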