Skip to main content
Home/Courses/Securing Agentic AI Systems
AdvancedOffensiveLive + Lab

Securing Agentic
AI Systems

Deep-dive into autonomous AI agent architectures. Exploit tool poisoning, goal hijacking, memory injection, and multi-turn manipulation. The fastest-growing attack surface in AI — learn to break it and defend it.

Full course: 8 weeks, 3 hrs/weekend · Demo Webinar: free 4–6 hour session · Bootcamp Session: 2-day weekend (nominal fee)

Duration
8 Weeks
Format
3 hrs / weekend
Level
Advanced
Cohort Size
Max 30 students
Agentic AITool PoisoningMCP SecurityGoal Hijacking
Course Curriculum

Eight weekends. Agent-grade depth.

Progress from architecture reconnaissance to weaponized tool chains, then close the loop with defenses your blue team can operationalize.

Week 1-2

Agent architectures and attack surface mapping

  • Planner–executor loops, ReAct-style traces, and delegated sub-agents
  • Trust boundaries: user input, retrieved documents, browser/RPA bridges, internal APIs
  • Tool registries vs dynamic discovery: where schemas become attacker-controlled
  • Multi-agent orchestration: hand-offs, shared memory buses, and race conditions
  • Threat modeling worksheets for SaaS copilots vs headless agent workers
Lab Exercise

Reverse-engineer a reference agent stack, document data flows, and prioritize exploit paths ranked by blast radius and likelihood.

Week 3-4

Tool poisoning, MCP security, function call exploitation

  • Schema smuggling, ambiguous descriptions, and overlapping tool names
  • Confused deputy attacks across OAuth-scoped backends invoked by tools
  • MCP lifecycle: initialize, tools/list, resources/read, and notification abuse
  • Argument injection and polyglot payloads inside JSON tool arguments
  • Side-channel leaks via error strings, timing, and partial execution
Lab Exercise

Poison tool metadata and resource previews so the planner selects attacker-chosen tools, then escalate into an internal action without tripping naive string filters.

Week 5-6

Memory injection, goal hijacking, multi-turn manipulation

  • Vector and structured memory: write primitives through retrieved chunks
  • Session summarization attacks that rewrite long-horizon objectives
  • Goal grafting: subtle objective shifts across benign-looking turns
  • Human-in-the-loop bypass patterns and approval UI deception
  • Coordinated sequences that weaponize browser-use or shell tools safely in lab
Lab Exercise

Execute a multi-turn campaign that first reshapes memory, then steers high-privilege tool calls while preserving superficial task coherence.

Week 7-8

Defense patterns, monitoring agents, capstone engagement

  • Tool allowlists, capability manifests, and schema signing strategies
  • Runtime policy engines, structured outputs, and post-condition validators
  • Telemetry: canonical tool traces, MCP frame logging, and anomaly features
  • Synthetic monitors and shadow agents for differential behavior detection
  • Capstone: end-to-end purple team on a multi-agent lab with report-out
Lab Exercise

Ship defensive controls and a detection pack for the capstone environment, then replay adversary TTPs to measure residual risk and iterate.

What You Will Learn

Operate where agents actually fail.

  • Model autonomous agent stacks end-to-end and prioritize realistic exploit paths across tools, memory, and orchestration layers
  • Design and execute tool-poisoning and MCP-oriented attacks that survive basic prompt hygiene because they abuse structure, not spelling
  • Chain multi-turn manipulations that reshape planner objectives while evading naive single-turn guardrails
  • Implement least-privilege tool exposure, validated arguments, and scoped credentials aligned to production patterns
  • Build telemetry and monitor-agent patterns that surface anomalous tool graphs, MCP traffic, and memory mutations
  • Deliver a capstone-style engagement brief with clear findings, detection opportunities, and prioritized remediations
Prerequisites

What you need coming in.

  • Working knowledge of LLM APIs and common prompt-injection patterns
  • Ability to read and reason about TypeScript or Python backend services
  • Familiarity with REST or WebSocket integrations and OAuth-style delegated access
  • Prior hands-on security experience (appsec, red team, or detection engineering) is strongly recommended
Who Should Attend
  • Red teamers extending coverage to autonomous copilots and agent workers
  • Application security engineers responsible for MCP servers or tool gateways
  • Detection engineers building telemetry for LLM + tool-call graphs
  • Platform architects designing multi-agent orchestration and shared memory
  • Product security leads evaluating third-party agent frameworks and plugins
Reserve Your Seat

Own the agentic surface.

100+ professionals already on the waitlist. Seats are filling fast. Reserve your spot for the next cohort and get notified with schedule options for the demo webinar, bootcamp session, and full eight-week track — including how to receive your participation certificate and exam clearance certificate after you finish.