AI Red Teaming
Fundamentals
Master offensive AI security testing. Execute prompt injection attacks, extract model capabilities, jailbreak safety-aligned models, and deliver professional red team reports — all against real vulnerable AI applications.
Start with a free 4-6 hour demo webinar. Go deeper with a 2-day weekend bootcamp session. Commit to the full 8-week programme for on-demand labs, capstone projects, and certificates.
Eight weeks. Zero filler.
Each module builds directly on the last. By the end, you will have executed real attacks against AI systems — not read about them. On-demand labs available throughout.
Understanding the AI Attack Surface
- The LLM threat landscape: why traditional security fails
- OWASP LLM Top 10 — a practitioner's breakdown
- Anatomy of a language model: inputs, outputs, and attack windows
- Reconnaissance techniques for AI-powered systems
- Mapping trust boundaries in LLM deployments
Identify and document attack vectors in a live AI assistant deployment.
Prompt Injection & Jailbreaking
- Direct prompt injection: hijacking LLM instructions
- Indirect injection via documents, emails, and web content
- Multi-turn manipulation and context poisoning
- Jailbreaking aligned models: techniques and mitigations
- Encoding tricks, role-play exploits, and many-shot attacks
Execute a full prompt injection attack chain against a production-style chatbot to extract the system prompt and bypass content filters.
Model Probing & Extraction
- Black-box boundary mapping and capability enumeration
- System prompt extraction techniques
- Training data inference and membership attacks
- Model fingerprinting through query analysis
- Reconstructing model behavior from API responses
Systematically probe a black-box LLM API to reconstruct its system prompt and map its capability boundaries through structured query crafting.
Red Team Reporting & Defense
- Structuring AI vulnerability reports for engineering teams
- CVSS adaptation for AI/LLM threat severity scoring
- Remediation strategies: prompt hardening, input validation, guardrails
- Designing defense-in-depth for LLM deployments
- Capstone: mini red team engagement against a multi-feature AI system
Complete a full-cycle red team exercise: discover, exploit, document, and present findings on a vulnerable AI application in under 30 minutes.
Walk out with real skills.
- Execute the full prompt injection attack lifecycle against real AI systems
- Extract hidden system prompts and map model capabilities through black-box probing
- Identify and exploit jailbreaking vulnerabilities in safety-aligned models
- Document AI security findings in clear, actionable reports
- Apply a red teamer's mindset to any LLM-powered deployment
- Build a foundation for advanced AI red teaming skills and the full 8-week course
What you need coming in.
- Basic familiarity with how APIs and web applications work
- Comfort using a terminal and command-line tools
- No prior AI or ML knowledge required
- Security background helpful but not mandatory
- Security engineers moving into AI/LLM security
- Red teamers and pentesters expanding into AI
- Security architects evaluating AI deployments
- Developers building or integrating LLM-powered features
- SOC analysts facing AI-related incidents
Start attacking AI systems.
100+ professionals already on the waitlist. Seats are filling fast.
Every course starts with a free demo webinar — no commitment required. Join the waitlist to reserve your seat.