25.8.2
This website uses cookies to ensure you get the best experience on our website. Learn more

AI Red Teaming in Practice

There is so much more to red teaming AI systems than prompt injection.

In this training, attendees will learn how to red team AI systems leveraging three pillars: traditional software vulnerabilities in AI systems, AI-specific vulnerabilities, and Responsible AI (RAI) vulnerabilities. By the end of the class, attendees should be able to probe comfortably any machine learning system for OWASP Top 10 LLM vulnerabilities. We will exclusively use open- source tools and frameworks such as Semantic Kernel, LangChain, NeMo Guardrails, Counterfit and the MITRE ATLAS to red team AI systems.

The course is taught by Microsoft's AI Red Team, which was the first to combine RAI Red Teaming alongside security red teaming. In the last year, every high-risk AI system—including models and Copilots—was assessed by this team. We will use this real-world experience to upskill Black Hat attendees.

Skills / Knowledge

  • AI, ML, & Data Science
  • AppSec