Advanced AI Red Teaming

Adversarial Testing for LLM Applications

Instructor: Marudhamaran Gunasekaran
Date: September 12, 2025
Duration: 8 Hours (9:00 AM - 6:00 PM)
Level: Intermediate
About This Training

This intensive 8-hour deep-dive transforms you from an AI observer into an AI adversary—and ultimately, an AI defender. You'll think, act, and attack like a real threat actor, learning to exploit the same vulnerabilities that keep security teams awake at night.

Hands-on Experience

You will gain hands-on experience in:

  • Mapping AI system architectures and identifying attack surfaces across the entire ML pipeline
  • Crafting sophisticated prompt injections that bypass safety guardrails
  • Planting undetectable backdoors in neural networks
  • Exploiting RAG systems and AI agents to access sensitive data

Training Format

Lectures

20-40% of the training

Hands-on Labs

60-80% of the training

Who Should Take This Course

  • Security engineers
  • Software engineers
  • Data scientists
  • ML engineers
  • Ops engineers
  • Anyone who wants to:
    • Expand their knowledge beyond traditional pentesting into AI-specific threats
    • Understand how their creations can be weaponized

Student Requirements

A laptop with a browser (Chrome or Firefox recommended)

Our online cloud labs require a browser that supports WebSocket connections.

Note: Even an iPad is sufficient for this training.