Skip to main content
Home/Courses/AI Red Teaming Fundamentals
IntermediateLive + Lab⚡ Available Now

AI Red Teaming
Fundamentals

Master offensive AI security testing. Execute prompt injection attacks, extract model capabilities, jailbreak safety-aligned models, and deliver professional red team reports — all against real vulnerable AI applications.

Start with a free 4-6 hour demo webinar. Go deeper with a 2-day weekend bootcamp session. Commit to the full 8-week programme for on-demand labs, capstone projects, and certificates.

Full Course
8 Weeks
Format
3 hrs/weekend + Labs
Level
Intermediate
Starts With
Free Demo Webinar
Reserve Your Seat
Course Curriculum

Eight weeks. Zero filler.

Each module builds directly on the last. By the end, you will have executed real attacks against AI systems — not read about them. On-demand labs available throughout.

Week 1-2

Understanding the AI Attack Surface

  • The LLM threat landscape: why traditional security fails
  • OWASP LLM Top 10 — a practitioner's breakdown
  • Anatomy of a language model: inputs, outputs, and attack windows
  • Reconnaissance techniques for AI-powered systems
  • Mapping trust boundaries in LLM deployments
Lab Exercise

Identify and document attack vectors in a live AI assistant deployment.

Week 3-4

Prompt Injection & Jailbreaking

  • Direct prompt injection: hijacking LLM instructions
  • Indirect injection via documents, emails, and web content
  • Multi-turn manipulation and context poisoning
  • Jailbreaking aligned models: techniques and mitigations
  • Encoding tricks, role-play exploits, and many-shot attacks
Lab Exercise

Execute a full prompt injection attack chain against a production-style chatbot to extract the system prompt and bypass content filters.

Week 5-6

Model Probing & Extraction

  • Black-box boundary mapping and capability enumeration
  • System prompt extraction techniques
  • Training data inference and membership attacks
  • Model fingerprinting through query analysis
  • Reconstructing model behavior from API responses
Lab Exercise

Systematically probe a black-box LLM API to reconstruct its system prompt and map its capability boundaries through structured query crafting.

Week 7-8

Red Team Reporting & Defense

  • Structuring AI vulnerability reports for engineering teams
  • CVSS adaptation for AI/LLM threat severity scoring
  • Remediation strategies: prompt hardening, input validation, guardrails
  • Designing defense-in-depth for LLM deployments
  • Capstone: mini red team engagement against a multi-feature AI system
Lab Exercise

Complete a full-cycle red team exercise: discover, exploit, document, and present findings on a vulnerable AI application in under 30 minutes.

What You Will Learn

Walk out with real skills.

  • Execute the full prompt injection attack lifecycle against real AI systems
  • Extract hidden system prompts and map model capabilities through black-box probing
  • Identify and exploit jailbreaking vulnerabilities in safety-aligned models
  • Document AI security findings in clear, actionable reports
  • Apply a red teamer's mindset to any LLM-powered deployment
  • Build a foundation for advanced AI red teaming skills and the full 8-week course
Prerequisites

What you need coming in.

  • Basic familiarity with how APIs and web applications work
  • Comfort using a terminal and command-line tools
  • No prior AI or ML knowledge required
  • Security background helpful but not mandatory
Who Should Attend
  • Security engineers moving into AI/LLM security
  • Red teamers and pentesters expanding into AI
  • Security architects evaluating AI deployments
  • Developers building or integrating LLM-powered features
  • SOC analysts facing AI-related incidents
Reserve Your Seat

Start attacking AI systems.

100+ professionals already on the waitlist. Seats are filling fast.

Every course starts with a free demo webinar — no commitment required. Join the waitlist to reserve your seat.