Intermediate · Defense · Live + Lab
Threat Detection · Monitoring · Incident Response · AI Forensics

AI SOC Operations

Build and operate AI-specific threat monitoring pipelines. Detect model abuse, inference attacks, data poisoning attempts, and prompt injection campaigns in real time. Staff your SOC for the AI era with repeatable detection content, SIEM-ready telemetry, and IR playbooks that hold under pressure.

Duration
8 Weeks
Format
3 hrs / weekend
Level
Intermediate
Cohort Size
Max 30 students

Start with a free 4–6 hour demo webinar; optionally continue with a two-day weekend bootcamp session for a nominal fee; then join the full 8-week program with live weekend sessions and on-demand labs. Earn a participation certificate on completion and an exam clearance certificate when you pass the final assessment.

Reserve Your Seat
Course Curriculum

Eight weekends. Zero filler.

Each module stacks concrete detection content, SIEM integration patterns, and IR mechanics you can adapt to your own AI services — not generic security theory re-labeled as “AI.”

Week 1–2

AI threat landscape and SOC fundamentals for AI systems

  • Mapping MITRE ATLAS techniques to observable events across inference, retrieval, and agent orchestration layers
  • Trust boundaries for multi-tenant LLM platforms: identity, quotas, policy engines, and third-party tool connectors
  • Designing a minimum viable logging schema for model I/O, token economics, and safety endpoint decisions
  • Risk registers for model abuse: credential stuffing against keys, shadow models, and unauthorized fine-tunes
  • Vendor and shared-responsibility checklists so analysts know what the cloud AI provider will never see
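To make the "minimum viable logging schema" concrete, here is an illustrative sketch in Python. All field names (`tenant_id`, `prompt_sha256`, `safety_verdict`, and so on) are hypothetical examples of the kind of fields you would define in the lab, not a prescribed standard:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class ModelCallEvent:
    """One model invocation as a flat, SIEM-friendly record."""
    request_id: str
    tenant_id: str
    api_key_id: str          # hashed key identifier, never the raw key
    model: str
    route: str               # gateway route or endpoint path
    prompt_sha256: str       # hash only; raw text lives behind access controls
    prompt_tokens: int
    completion_tokens: int
    latency_ms: float
    safety_verdict: str      # e.g. "allow", "block", "flag"
    tool_calls: list[str] = field(default_factory=list)

    def to_siem(self) -> str:
        """Serialize to a compact JSON line for SIEM ingestion."""
        return json.dumps(asdict(self), separators=(",", ":"))

event = ModelCallEvent(
    request_id="r-001", tenant_id="t-42", api_key_id="k-ab12",
    model="example-model", route="/v1/chat", prompt_sha256="e3b0c442",
    prompt_tokens=812, completion_tokens=105, latency_ms=431.7,
    safety_verdict="allow", tool_calls=["web_search"],
)
print(event.to_siem())
```

Hashing prompts while logging token counts and safety verdicts is one way to keep PII out of the SIEM without losing the abuse signals the later weeks build detections on.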
Lab Exercise

Produce a monitoring storyboard and data dictionary for a realistic LLM API deployment, including abuse hypotheses and required log fields.

Week 3–4

Detection engineering — building AI-specific detection rules and pipelines

  • Authoring correlation rules for indirect prompt injection, delimiter smuggling, and many-shot coercion patterns
  • SIEM onboarding patterns for OpenAI-style audit logs, gateway captures, and application telemetry with PII minimization
  • Statistical baselines for token velocity, tool-call fan-out, refusal-rate drift, and anomalous retrieval overlap
  • RAG integrity checks: hash-chained chunk updates, conflicting source detectors, and poisoned-document fingerprints
  • Detection-as-code workflows with versioned tests using replay fixtures and purple-team scoring
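The "statistical baselines" bullet can be previewed with a minimal rolling-baseline sketch. This is a teaching illustration, not production detection content; the window size, warm-up length, and z-score threshold are arbitrary example values:

```python
from collections import deque
from math import sqrt

class TokenVelocityBaseline:
    """Rolling mean/stddev baseline over recent samples; flags any sample
    that deviates more than `threshold` standard deviations."""
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.samples: deque[float] = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, tokens_per_min: float) -> bool:
        """Return True if this sample is anomalous vs. the current baseline."""
        anomalous = False
        if len(self.samples) >= 10:  # require a warm-up before alerting
            mean = sum(self.samples) / len(self.samples)
            var = sum((x - mean) ** 2 for x in self.samples) / len(self.samples)
            std = sqrt(var) or 1.0   # guard against flat baselines
            anomalous = abs(tokens_per_min - mean) / std > self.threshold
        self.samples.append(tokens_per_min)
        return anomalous

baseline = TokenVelocityBaseline()
for rate in [100, 102, 98, 101, 99, 103, 97, 100, 102, 98]:
    baseline.observe(rate)       # warm-up on normal traffic
print(baseline.observe(5000))    # sudden burst -> prints True
```

The same pattern generalizes to tool-call fan-out and refusal-rate drift: pick a metric, keep a rolling window per tenant or per key, and alert on deviation rather than on absolute values.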
Lab Exercise

Implement and tune five production-style rules (Sigma-style pseudocode translated to your SIEM dialect) with measured precision/recall on provided attack samples.

Week 5–6

Incident response for AI systems — triage, containment, forensics

  • AI-specific indicators of compromise: prompt artifacts, poisoned contexts, tool-call graphs, and embedding drift windows
  • Containment ladders: per-tenant model routes, rate limits, kill switches for tools, and controlled index rollbacks
  • Forensic preservation for tokenizer settings, adapter weights, retrieval snapshots, and redacted conversation timelines
  • Playbooks for model-enabled data exfiltration, policy bypass, and autonomous agent misuse with severity rubrics
  • Coordinated response with ML engineers: repro bundles, offline evaluation hooks, and safe rollback validation
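Timeline construction, the first step of the Week 5–6 lab, can be sketched as follows. The event rows and field names here are synthetic examples standing in for a real SIEM export:

```python
from datetime import datetime

# Hypothetical synthetic SIEM export rows (field names are illustrative).
events = [
    {"ts": "2024-05-01T10:04:12Z", "source": "gateway",
     "msg": "spike in tool-call fan-out, tenant t-42"},
    {"ts": "2024-05-01T10:01:33Z", "source": "rag",
     "msg": "new document ingested outside change window"},
    {"ts": "2024-05-01T10:07:55Z", "source": "model",
     "msg": "refusal-rate drop on route /v1/chat"},
]

def build_timeline(rows):
    """Normalize timestamps to timezone-aware datetimes and sort for triage."""
    parsed = [
        {**r, "ts": datetime.fromisoformat(r["ts"].replace("Z", "+00:00"))}
        for r in rows
    ]
    return sorted(parsed, key=lambda r: r["ts"])

for e in build_timeline(events):
    print(e["ts"].isoformat(), e["source"], "-", e["msg"])
```

Even this toy ordering surfaces the IR-relevant narrative: the out-of-window RAG ingestion precedes the gateway anomaly and the model-behavior change, which is exactly the kind of sequencing the tabletop asks you to defend without over-claiming attribution.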
Lab Exercise

Execute a tabletop IR using synthetic SIEM exports: build a timeline, select containment actions, and draft customer-facing comms without over-promising attribution.

Week 7–8

Monitoring at scale, automation, capstone SOC exercise

  • SOAR playbooks that orchestrate API key rotation, policy pushes, and targeted model quarantines with human gates
  • Canary prompts, honey documents in corpora, and controlled shadow queries to validate detector health
  • Fleet-wide KPIs: MTTD/MTTR against OWASP LLM categories, coverage heatmaps, and backlog hygiene for AI rules
  • Automation guardrails: circuit breakers on auto-remediation, evidence retention windows, and auditability of AI actions
  • Capstone preparation: briefing leadership on residual risk after a model-layer incident
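The "circuit breakers on auto-remediation" guardrail reduces to a small amount of state. A deliberately minimal sketch, with an invented class name and an example action budget:

```python
class RemediationCircuitBreaker:
    """Halts automated remediation after `max_actions` actions,
    forcing escalation to a human gate."""
    def __init__(self, max_actions: int = 5):
        self.max_actions = max_actions
        self.count = 0

    def allow(self) -> bool:
        """Return True if one more automated action may proceed."""
        if self.count >= self.max_actions:
            return False          # breaker open: require human approval
        self.count += 1
        return True

breaker = RemediationCircuitBreaker(max_actions=2)
print(breaker.allow())  # True  -- first auto key rotation proceeds
print(breaker.allow())  # True  -- second proceeds
print(breaker.allow())  # False -- breaker open, page the on-call
```

A production version would add a time window, per-tenant scoping, and an audit log entry for every decision, but the core idea stays this simple: automation gets a bounded budget, and exhausting it hands control back to a human.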
Lab Exercise

24-hour condensed SOC shift simulation with staged escalations spanning model, application, and data planes; debrief maps lessons to your own environment.

What You Will Learn

Walk out ready to run AI-aware shifts.

  • Design end-to-end telemetry and field mappings so LLM, RAG, and agent events are first-class citizens in your SIEM
  • Author, test, and maintain AI-specific detection content with measurable precision against representative attack chains
  • Triage model-layer incidents using structured playbooks, IoC families, and containment steps that respect uptime commitments
  • Preserve and analyze forensic artifacts unique to AI systems while coordinating with platform and ML engineering partners
  • Instrument operational KPIs and purple-team exercises that prove coverage for prompt injection, poisoning, and inference abuse
  • Deliver a capstone SOC narrative that ties technical findings to business risk and follow-on hardening work
Prerequisites

What you need coming in.

  • Active SOC, CSIRT, or detection engineering experience (alerts, investigations, or rule development)
  • Comfort querying logs in a SIEM or data lake (e.g., KQL, SPL, SQL, or Lucene-style languages)
  • Working knowledge of HTTP APIs, authentication flows, and basic cloud identity concepts
  • Familiarity with at least one LLM product surface (chat assistant, API gateway, or enterprise copilot) from an operations perspective
Who Should Attend
  • SOC analysts and shift leads covering new LLM or copilot workloads
  • Detection engineers modernizing content for generative AI services
  • CSIRT responders who need model-aware containment and forensics steps
  • Security architects designing telemetry budgets for AI gateways
  • AI platform owners partnering with security for shared on-call
Reserve Your Seat

Defend the model layer.

100+ professionals already on the waitlist. Seats are filling fast.

Join the waitlist for cohort dates, demo webinar access, and the optional weekend bootcamp session. Full program graduates receive a participation certificate; passing the final exam awards an exam clearance certificate.