Enterprise AI Governance
Automated AI Red Teaming with Azure AI Foundry
Hands-on lab notes from LAB516 Series: AI Red Teaming with Azure AI Foundry.
Session: LAB516
Date: Tuesday, Nov 18, 2025
Time: 6:45 PM PST - 8:00 PM PST
Location: Moscone West, Level 3, Room 3014
Additional Sessions:
Session: LAB516-R1
Date: Wednesday, Nov 19, 2025
Time: 2:00 PM PST - 3:15 PM PST
Location: Moscone West, Level 3, Room 3007
Session: LAB516-R2
Date: Friday, Nov 21, 2025
Time: 9:00 AM PST - 10:15 AM PST
Location: Moscone West, Level 3, Room 3014
Coming Soon
This article will be published during/after Microsoft Ignite 2025 (Nov 18-21). Full lab walkthrough of automated AI red teaming coming soon.
Lab Overview
What We're Learning:
- Fundamentals of automated AI red teaming for generative AI systems
- Identifying safety issues and security vulnerabilities using Azure AI Foundry
- Applying automated attack techniques across multiple risk dimensions
- Testing before deployment to prevent production incidents
Technologies:
- Azure AI Foundry
- Automated red teaming techniques
- Safety and security risk assessment
- Multi-dimensional vulnerability testing
Key Learning Goals
- Red Teaming Fundamentals - What is automated AI red teaming?
- Attack Techniques - What attack vectors exist for generative AI?
- Risk Dimensions - How do you assess safety vs security risks?
- Azure AI Foundry Tools - What tooling supports automated red teaming?
- Pre-Deployment Testing - How do you integrate red teaming into CI/CD?
Stay Tuned
Full lab walkthrough, attack patterns, and defensive strategies coming soon.
Sessions: LAB516 + LAB516-R1/R2 | Various times Nov 18-21 | Moscone West