Multi-Agent AI Systems
Building A2A and MCP systems with SWE agents: The enterprise multi-agent blueprint
Advanced hands-on lab demonstrating how to build production-grade multi-agent systems using Microsoft Agent Framework, the Agent-to-Agent (A2A) protocol, and Software Engineering (SWE) agents. The lab starts with Magentic-One architecture patterns and progressively adds MCP tools and GitHub Copilot coding agents for secure enterprise workflows.
This 300-level lab provides advanced developers with practical experience building interoperable agent systems that can collaborate across platforms, execute complex engineering tasks autonomously, and integrate with external tools securely.
Session context
LAB513 - Build A2A and MCP Systems using SWE Agents and agent-framework
Speakers:
- Govind Kamtamneni (Microsoft)
- Mark Wallace (Microsoft)
When: November 18, 2025, 1:00 PM - 2:15 PM PST
Where: Moscone West, Level 3, Room 3007
Level: Advanced (300)
Format: Hands-on lab (in-person only, embargoed content)
Session description:
"Learn to leverage agent-framework, the new unified platform from Semantic Kernel and AutoGen engineering teams, to build A2A compatible agents similar to Magentic-One. Use SWE Agents (GitHub Copilot coding agent and Codex with Azure OpenAI models) to accelerate development. Implement MCP tools for secure enterprise agentic workflows."
Key learning objectives:
- Build A2A (Agent-to-Agent) compatible agents using Magentic-One patterns
- Implement MCP (Model Context Protocol) tools for secure workflows
- Deploy SWE agents for autonomous code development
- Orchestrate multi-agent systems in production
- Create interoperable agents across platforms and clouds
Core technologies explained
Magentic-One: Multi-agent orchestration pattern
What it is:
Magentic-One is Microsoft's generalist multi-agent system designed for solving open-ended web and file-based tasks across domains.
Architecture:
Orchestrator agent (lead):
- Directs four specialized agents
- Plans task execution
- Tracks progress
- Recovers from errors
Specialized agents:
- WebSurfer: Web navigation and interaction
- FileSurfer: File system operations and document processing
- Coder: Code generation and execution
- ComputerTerminal: Command-line operations
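The orchestrator/specialist split above can be sketched framework-agnostically. The classes and method names below are illustrative stand-ins, not the Agent Framework or AutoGen API; a real system would call LLMs and tools where the comments indicate.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the Magentic-One division of labor.
# Class and method names here are hypothetical, not a real framework API.

@dataclass
class SpecialistAgent:
    name: str

    def handle(self, step: str) -> str:
        # A real specialist would invoke an LLM and its tools here.
        return f"{self.name} completed: {step}"

@dataclass
class Orchestrator:
    agents: dict
    progress: list = field(default_factory=list)

    def plan(self, task: str) -> list:
        # A real orchestrator plans with an LLM; this plan is hard-coded.
        return [("WebSurfer", f"research {task}"),
                ("Coder", f"write code for {task}"),
                ("ComputerTerminal", "run the tests")]

    def run(self, task: str) -> list:
        for agent_name, step in self.plan(task):
            result = self.agents[agent_name].handle(step)
            self.progress.append(result)  # progress log enables recovery
        return self.progress

agents = {n: SpecialistAgent(n) for n in
          ("WebSurfer", "FileSurfer", "Coder", "ComputerTerminal")}
report = Orchestrator(agents).run("CSV parser")
```

The key design point the sketch captures: only the orchestrator holds the plan and progress state; specialists stay stateless and single-purpose.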
Built on AutoGen:
Magentic-One leverages AutoGen's multi-agent framework, providing production-tested patterns for agent coordination.
Why this matters:
Instead of building one general agent that attempts everything, Magentic-One demonstrates specialized agents coordinated by an orchestrator. This pattern scales better and handles complex tasks more reliably.
Agent-to-Agent (A2A) protocol
What it solves:
Agents from different platforms, vendors, and organizations need a standardized communication protocol.
A2A capabilities:
Structured communication:
- Exchange goals between agents
- Manage shared state
- Invoke actions across agent boundaries
- Return results securely
Interoperability:
- Agents built with Semantic Kernel can communicate with LangChain agents
- Cross-cloud agent collaboration
- Cross-organizational agent coordination
Observability:
- Track agent-to-agent interactions
- Monitor multi-agent workflows
- Debug distributed agent systems
Platform support:
Coming to Azure AI Foundry and Copilot Studio, enabling enterprise adoption of A2A patterns.
Technical insight:
Without A2A: Each agent pair requires custom integration code, creating N² integration complexity.
With A2A: A standardized protocol enables any A2A-compatible agent to communicate with any other, reducing integration work to N protocol implementations.
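The arithmetic behind that insight, made concrete (strictly, point-to-point glue code grows as N(N-1) ordered pairs, i.e. O(N²)):

```python
# Point-to-point integrations vs. one shared-protocol adapter per agent.

def custom_integrations(n: int) -> int:
    # Every ordered pair of distinct agents needs bespoke glue code.
    return n * (n - 1)

def a2a_integrations(n: int) -> int:
    # Each agent implements the shared protocol exactly once.
    return n

print(custom_integrations(10), a2a_integrations(10))  # 90 vs 10
```

At ten agents the gap is already 9x; at a hundred agents it is 99x, which is why a shared protocol matters long before "hundreds of agents" scale.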
Model Context Protocol (MCP)
What it provides:
Standardized interface for connecting agents to external tools, data sources, and services.
MCP architecture:
MCP Server:
- Exposes tools and data sources
- Handles authentication and authorization
- Manages request/response cycles
MCP Client (agent):
- Discovers available tools
- Invokes tools with parameters
- Receives and processes results
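MCP messages travel as JSON-RPC 2.0, and `tools/call` is the method the spec defines for invoking a tool. The sketch below shows the request/response cycle in that shape, simplified: real clients also perform an initialize handshake and transport framing, and the `search_docs` tool name is hypothetical.

```python
# Shape of an MCP tool invocation over JSON-RPC 2.0 (simplified).
# "search_docs" is a hypothetical tool, not a real MCP server's tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "search_docs",
               "arguments": {"query": "A2A protocol"}},
}

def handle(req: dict) -> dict:
    # Minimal server-side dispatch for the single tool above.
    if req["method"] == "tools/call" and req["params"]["name"] == "search_docs":
        result = {"content": [{"type": "text", "text": "3 docs found"}]}
        return {"jsonrpc": "2.0", "id": req["id"], "result": result}
    return {"jsonrpc": "2.0", "id": req["id"],
            "error": {"code": -32601, "message": "Method not found"}}

response = handle(request)
```

The agent never sees the tool's implementation, only the structured result; that boundary is what makes the enterprise controls below enforceable.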
Enterprise advantages:
Security:
- Centralized authentication
- Tool access controls
- Audit logging of tool usage
Reusability:
- One MCP server serves multiple agents
- Standard tools across agent fleet
- Vendor-agnostic integration
Example MCP tools:
- Microsoft Learn documentation access
- Internal knowledge bases
- External APIs and services
- Data retrieval systems
SWE Agents (Software Engineering Agents)
What they are:
AI-driven systems that assist or act autonomously on behalf of software engineers.
GitHub Copilot Coding Agent:
Capabilities:
- Runs inside GitHub Actions
- Picks up assigned issues
- Explores repository for context
- Writes code autonomously
- Runs tests until they pass
- Opens pull requests for review
Agent Mode:
Self-healing code:
- Iterates on its own code
- Recognizes errors automatically
- Fixes errors without human intervention
- Analyzes runtime errors
Availability:
- Preview for Copilot Enterprise and Copilot Pro+ users
- Integrated in VS Code, Xcode, Eclipse, JetBrains, Visual Studio
Azure OpenAI Codex:
What it enables:
- Code generation from natural language
- Code explanation and documentation
- Code translation between languages
- Code optimization suggestions
Agentic DevOps pattern:
SWE agents represent an evolution of DevOps in which intelligent agents collaborate with developers, and with each other, on code development, testing, and deployment.
Lab architecture
What you build
Multi-agent system with:
- Magentic-One orchestration pattern - Coordinator agent directing specialized agents
- A2A communication - Agents communicating via standardized protocol
- MCP tool integration - Agents accessing external tools securely
- SWE agent deployment - Autonomous code development agent
- Production workflow - End-to-end agent system deployable to enterprise
Progressive lab phases
Phase 1: Magentic-One foundation
Build orchestrator agent:
- Task planning logic
- Agent coordination
- Progress tracking
- Error recovery
Build specialized agents:
- WebSurfer for web interaction
- FileSurfer for document processing
- Coder for code generation
- ComputerTerminal for system operations
Skills learned:
- Multi-agent architecture patterns
- Orchestrator design patterns
- Specialized agent creation
- Inter-agent coordination
Phase 2: A2A protocol implementation
Enable agent-to-agent communication:
- Define agent capabilities
- Implement A2A message protocol
- Handle cross-agent requests
- Manage distributed state
Skills learned:
- A2A protocol specification
- Interoperable agent design
- Cross-platform agent communication
- State management in distributed systems
Technical challenge:
How do agents discover each other's capabilities? How do they negotiate task execution? A2A protocol handles capability discovery and task delegation systematically.
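Discovery can be sketched as card matching: the orchestrator compares a task's required skill against each agent's advertised capabilities. The cards and skill names below are hypothetical and heavily abridged relative to a real A2A Agent Card.

```python
from typing import Optional

# Hypothetical, abridged agent cards: name plus advertised skills.
cards = [
    {"name": "research-agent", "skills": ["web-search", "summarize"]},
    {"name": "code-agent", "skills": ["codegen", "testing"]},
]

def delegate(required_skill: str) -> Optional[str]:
    # Pick the first agent whose card advertises the needed skill.
    for card in cards:
        if required_skill in card["skills"]:
            return card["name"]
    return None  # no capable agent found: the orchestrator must re-plan
```

Returning `None` rather than guessing is deliberate: delegation failure is a planning signal, not an error to swallow.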
Phase 3: MCP tool integration
Connect agents to external tools:
- Configure MCP server
- Register tool definitions
- Implement tool invocation
- Handle tool responses
Example tools:
- Microsoft Learn MCP server for documentation
- Custom enterprise knowledge bases
- External APIs via MCP
- Data retrieval services
Skills learned:
- MCP server configuration
- Tool schema definition
- Secure tool access patterns
- Tool response handling
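A tool registration pairs a name and description with a JSON Schema for its inputs; the field shape below follows MCP's tool-listing convention (`inputSchema`), though the `lookup_doc` tool itself is invented for illustration, and the validator checks only required string fields rather than full JSON Schema.

```python
# MCP-style tool definition: name, description, JSON Schema for inputs.
# "lookup_doc" is a hypothetical tool used only for illustration.
tool = {
    "name": "lookup_doc",
    "description": "Fetch a documentation page by topic",
    "inputSchema": {
        "type": "object",
        "properties": {"topic": {"type": "string"}},
        "required": ["topic"],
    },
}

def validate(args: dict) -> bool:
    # Toy validator: every required field present and a string.
    schema = tool["inputSchema"]
    return all(k in args and isinstance(args[k], str)
               for k in schema["required"])
```

Schema-first tool definitions are what let an agent discover and call a tool it has never seen before without custom glue code.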
Phase 4: SWE agent deployment
Deploy GitHub Copilot coding agent:
- Configure agent for repository
- Define code generation tasks
- Set up testing requirements
- Configure PR creation workflow
Autonomous code development:
- Agent receives issue assignment
- Explores codebase for context
- Generates solution code
- Runs tests to validate
- Creates pull request
Skills learned:
- SWE agent configuration
- Repository context management
- Autonomous testing patterns
- PR automation workflows
Phase 5: Production orchestration
End-to-end workflow:
- User submits complex task
- Orchestrator analyzes requirements
- Delegates to specialized agents via A2A
- Agents use MCP tools for external data
- SWE agent generates code if needed
- Orchestrator assembles results
- Returns comprehensive solution
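The steps above can be sketched as one pipeline; every handler here is a stand-in (the agent names, `mcp_fetch`, and `a2a_call` are invented for the sketch, not real APIs):

```python
# End-to-end flow sketch: plan -> delegate via A2A -> fetch via MCP
# -> codegen -> assemble. All handlers are hypothetical stand-ins.

def mcp_fetch(query: str) -> str:      # stand-in for an MCP tool call
    return f"data({query})"

def a2a_call(agent: str, payload: str) -> str:  # stand-in for A2A task request
    return f"{agent}:{payload}"

def handle_task(task: str) -> dict:
    plan = ["research", "generate"]    # a real orchestrator plans with an LLM
    artifacts = []
    for step in plan:
        if step == "research":
            artifacts.append(a2a_call("research-agent", mcp_fetch(task)))
        elif step == "generate":
            artifacts.append(a2a_call("swe-agent", f"code for {task}"))
    return {"task": task, "artifacts": artifacts}

out = handle_task("report generator")
```

Even this toy version shows where the production concerns land: every `a2a_call` is a failure boundary, and every `mcp_fetch` is a billable, auditable tool invocation.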
Production considerations:
- Error handling across agent boundaries
- Monitoring multi-agent workflows
- Cost tracking for LLM calls
- Security and compliance controls
Advanced patterns demonstrated
Cross-platform agent collaboration
Scenario:
Agent A (built with Semantic Kernel) needs data from Agent B (built with LangChain) to complete a task.
Without A2A:
- Custom integration code required
- Tight coupling between agents
- Fragile when either agent updates
With A2A:
- Agent A sends A2A request to Agent B
- Agent B processes via standard protocol
- Agent A receives response in standard format
- No custom integration code needed
Enterprise implication:
Organizations can adopt best-of-breed agents from multiple vendors without integration nightmares. A2A enables heterogeneous agent ecosystems.
Secure tool access via MCP
Security challenge:
Agents need access to enterprise tools and data, but:
- Different agents require different permissions
- Tool access must be audited
- Credentials cannot be embedded in agent code
MCP solution:
Centralized authentication:
- MCP server handles authentication
- Agents present credentials to MCP server
- MCP server validates and proxies tool access
Fine-grained authorization:
- Define which agents can access which tools
- Role-based access control
- Tool usage policies enforced at MCP layer
Audit trail:
- All tool invocations logged
- Agent identity tracked
- Tool usage patterns monitored
Pattern:
Agents never call tools directly. All tool access is mediated through an MCP server that enforces security policies.
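The pattern reduces to a gateway check: map agent identity to allowed tools, record every attempt, deny everything else. The agent and tool names below are illustrative.

```python
# Policy enforcement at the MCP gateway. Agent IDs and tool names
# are illustrative; a real gateway would back this with RBAC and SSO.
POLICY = {"research-agent": {"search_docs"}, "swe-agent": {"run_tests"}}
AUDIT = []  # (agent_id, tool, allowed) for every attempt, allowed or not

def invoke(agent_id: str, tool: str) -> str:
    allowed = tool in POLICY.get(agent_id, set())
    AUDIT.append((agent_id, tool, allowed))   # audit before deciding
    if not allowed:
        raise PermissionError(f"{agent_id} may not call {tool}")
    return f"ran {tool}"

invoke("research-agent", "search_docs")       # permitted
try:
    invoke("research-agent", "run_tests")     # denied, but still audited
except PermissionError:
    pass
```

Note that denied attempts are logged too: in a compliance review, refused calls are often more interesting than granted ones.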
Autonomous code development workflow
Traditional development:
- Developer reads issue
- Developer explores codebase
- Developer writes code
- Developer tests code
- Developer creates PR
- Code review happens
SWE agent workflow:
- SWE agent assigned issue
- Agent explores codebase autonomously
- Agent writes code
- Agent runs tests automatically
- Agent creates PR
- Human reviews PR (critical step)
What changes:
Agent handles mechanical work (code exploration, generation, testing). Human reviews for correctness, architecture alignment, and business logic validation.
Production reality:
SWE agents accelerate development but don't replace code review. Human-in-the-loop remains essential for quality assurance.
What this lab reveals
Pre-release capabilities
Lab contains embargoed content showing Microsoft Agent Framework features not yet publicly documented.
What attendees see first:
- A2A protocol implementation details
- MCP integration patterns for enterprise
- Magentic-One orchestration in production
- SWE agent deployment patterns
Strategic insight:
Microsoft is betting heavily on multi-agent systems as a fundamental enterprise pattern. Its investment in standardized protocols (A2A, MCP) signals long-term commitment beyond single-vendor lock-in.
Production-ready patterns
Unlike simpler labs, LAB513 demonstrates complete production workflow:
Not just agent creation - Full orchestration including error recovery, monitoring, and security
Not just demos - Deployable systems with RBAC, audit logging, and compliance controls
Not just theory - Actual code running in GitHub Actions, processing real repository issues
Enterprise validation:
Patterns demonstrated here work at scale: the Magentic-One architecture is proven in Microsoft Research, the A2A protocol is designed for cross-organizational agent collaboration, and the MCP security model is enterprise-grade.
Critical assessment
What works exceptionally well
Magentic-One orchestration pattern:
Specialized agents coordinated by an orchestrator scale better than a monolithic general agent. Clear separation of concerns enables:
- Independent agent development
- Focused optimization per agent type
- Graceful degradation when an agent fails
Official Microsoft Learn definition: Magentic orchestration is "designed for open-ended and complex problems that don't have a predetermined plan." The manager agent dynamically:
- Maintains shared context across specialist agents
- Tracks workflow progress
- Adapts workflow in real-time based on task evolution
- Iteratively refines solutions through agent collaboration
Production deployment: Per Microsoft, these patterns (originally AutoGen research prototypes) now operate with "production-grade durability and enterprise controls" in Agent Framework. Deploy to Azure AI Foundry with built-in observability, approvals, security, and long-running durability.
A2A protocol standardization:
Solves real interoperability problem. Enterprises building agents don't want vendor lock-in. A2A enables heterogeneous agent ecosystems where best tools win, not single vendor.
Official status: On May 7, 2025, Microsoft announced adoption of Google's Agent2Agent (A2A) protocol. Support is coming to Azure AI Foundry and Copilot Studio, and Semantic Kernel Python samples demonstrating cross-agent collaboration are already available.
Key A2A capabilities (per spec at a2aprotocol.ai):
- Agent Discovery: Machine-readable "Agent Card" (JSON) advertising capabilities, endpoints, auth requirements
- Task Management: Structured interactions around discrete tasks with well-defined lifecycles
- Message Exchange: Standardized messaging for context, replies, artifacts
- Content Negotiation: Format and UI capability negotiation between agents
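An Agent Card is just a machine-readable JSON document advertising those capabilities. The sketch below is abridged and follows the general shape of the public spec; the agent name, endpoint URL, and skill are invented for illustration.

```python
import json

# Abridged, illustrative A2A Agent Card. Field shape loosely follows
# the public spec; the agent, URL, and skill here are hypothetical.
agent_card = {
    "name": "doc-research-agent",
    "description": "Finds and summarizes documentation",
    "url": "https://agents.example.com/a2a",   # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {"id": "doc-search",
         "name": "Documentation search",
         "description": "Search vendor docs for a topic"},
    ],
}

card_json = json.dumps(agent_card, indent=2)   # what a peer would fetch
```

Because the card is plain JSON served at a well-known location, any A2A-compatible orchestrator can fetch it and decide whether to delegate, with no shared codebase required.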
MCP security model:
Centralized tool access control addresses a legitimate enterprise concern: how do we govern what agents can do? MCP provides an answer: policy enforcement at the tool gateway.
SWE agents for code velocity:
GitHub Copilot coding agent demonstrably accelerates development for mechanical tasks. Issue → PR workflow automation is real productivity gain for well-scoped tasks.
What remains challenging
Orchestration complexity at scale:
The lab demonstrates 4-5 agents; production systems might coordinate dozens or hundreds. How does the orchestrator scale? What happens when agents conflict? How do you debug distributed agent failures?
A2A adoption timeline:
Protocol coming "soon" to Azure AI Foundry and Copilot Studio. Until broadly available, enterprises can't build production systems depending on A2A. Chicken-and-egg adoption problem.
MCP tool proliferation:
Who builds MCP servers for enterprise tools? Microsoft provides some, but enterprises have hundreds of internal tools. Building and maintaining MCP servers becomes operational burden.
SWE agent scope limitations:
Coding agent works for well-defined issues. Complex architectural changes requiring judgment across codebase still need human developers. Risk: Organizations overestimate agent capabilities and assign inappropriate tasks.
Cost modeling uncertainty:
Multi-agent systems make many LLM calls. Orchestrator planning, specialized agent execution, A2A communication, MCP tool usage - costs accumulate quickly. Lab doesn't address cost optimization strategies.
Microsoft documentation addresses this:
Agent Framework includes monitoring dashboards with:
- Token consumption over time visualization
- Cost estimation and daily spending
- Per-agent breakdown to identify resource-heavy agents
- Optimized context management to reduce AI costs
Production best practices:
- Implement token usage monitoring from day one
- Optimize context windows (only include relevant agents)
- Use simplest orchestration pattern for task
- Track cost per successful interaction, not just total cost
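Per-agent tracking can start as simply as a counter keyed by agent identity. The price constant below is a placeholder, not a real Azure OpenAI rate; actual per-token pricing depends on the model and deployment.

```python
from collections import defaultdict

# Per-agent token and cost tracking. PRICE_PER_1K_TOKENS is a
# placeholder rate, not an actual Azure OpenAI price.
PRICE_PER_1K_TOKENS = 0.002

usage = defaultdict(int)

def record(agent: str, tokens: int) -> None:
    usage[agent] += tokens

def cost_breakdown() -> dict:
    # Per-agent spend, to identify resource-heavy agents.
    return {a: t / 1000 * PRICE_PER_1K_TOKENS for a, t in usage.items()}

record("orchestrator", 12_000)
record("coder", 30_000)
record("orchestrator", 8_000)
breakdown = cost_breakdown()
```

Dividing total spend by successful interactions, rather than tracking raw totals, is what turns numbers like these into the "cost per successful interaction" metric recommended above.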
Production considerations not covered
Multi-tenant agent systems
Challenge:
How do you deploy agent orchestration serving multiple customers with:
- Data isolation between tenants
- Resource limits per tenant
- Cost allocation per tenant
- Security boundaries enforced
Lab scope:
Single-tenant development scenario.
Production gap:
Enterprise SaaS providers need multi-tenant agent systems, and the architectural patterns for them remain unclear.
Agent versioning and rollback
Challenge:
Agent behavior changes when:
- Underlying model updates
- Tool definitions change
- Orchestration logic modified
- A2A protocol evolves
How do you version agents? How do you rollback when new version misbehaves?
Lab scope:
Single version, no rollback demonstrated.
Production gap:
Enterprises need agent lifecycle management comparable to software release management.
Performance SLAs
Challenge:
Multi-agent orchestration involves:
- Orchestrator planning time
- Agent execution time
- A2A communication overhead
- MCP tool latency
- LLM inference time
How do you guarantee response-time SLAs with so many variables in play?
Lab scope:
No performance testing or SLA discussion.
Production gap:
User-facing agent systems require predictable latency. Multi-agent orchestration makes this harder to guarantee.
Microsoft Learn guidance:
Per AI Agent Design Patterns, production deployments should:
- Implement observability and performance monitoring
- Design for proper error handling
- Manage context windows carefully to reduce token usage
- Use simplest orchestration pattern that solves the problem (avoid unnecessary complexity)
Compliance and auditability
Challenge:
Regulated industries require:
- Complete audit trail of agent decisions
- Explainability of agent reasoning
- Compliance with data residency rules
- Human oversight of critical operations
Lab scope:
MCP provides audit logging, but comprehensive compliance story incomplete.
Production gap:
Financial services, healthcare, government need compliance validation before agent deployment.
Who should take this lab
Advanced developers building production agent systems:
If you're architecting multi-agent solutions for enterprise, this lab provides concrete patterns you can implement immediately.
Platform engineers evaluating agent orchestration:
Hands-on experience with Magentic-One, A2A, and MCP informs build-vs-buy decisions for agent platforms.
SRE teams planning agent operations:
Understanding agent-to-agent communication, tool access patterns, and orchestration failures helps plan operational support.
Not ideal for:
- Beginners - 300-level assumes familiarity with agent concepts, LLM fundamentals, and distributed systems
- Non-technical roles - Hands-on coding required throughout
- Simple agent use cases - If you need a single agent with basic tools, simpler labs are more appropriate
What to explore after the lab
Scale orchestration complexity
Add more specialized agents:
- Database query agent
- API integration agent
- Data visualization agent
- Notification agent
Test orchestrator at scale:
- How many concurrent agent tasks?
- What happens when agents conflict?
- How does error recovery work with 10+ agents?
Build custom MCP servers
Enterprise tool integration:
- Internal knowledge base MCP server
- CRM system MCP server
- Custom API gateway MCP server
Security hardening:
- Implement fine-grained RBAC
- Add audit logging
- Enforce rate limiting
Deploy SWE agent to real repository
Production code development:
- Assign real issues to coding agent
- Measure PR quality and acceptance rate
- Optimize for specific coding patterns
Measurement:
- Track velocity improvement
- Monitor test pass rates
- Measure code review time
Implement cross-platform A2A
Heterogeneous agent system:
- Semantic Kernel agent
- LangChain agent
- Custom agent framework
- All communicating via A2A
Validate interoperability:
- Can they actually coordinate?
- What breaks at integration points?
- Where does protocol need extension?
The strategic implication
Microsoft's multi-agent bet
This lab reveals Microsoft's strategic direction: Enterprise future runs on orchestrated multi-agent systems, not monolithic AI assistants.
Evidence:
Agent Framework unification - Merging Semantic Kernel and AutoGen signals long-term platform commitment
A2A protocol standardization - Open protocol prevents vendor lock-in, encouraging ecosystem adoption
MCP integration - Solving enterprise tool access problem systematically, not one-off integrations
Magentic-One patterns - Research-to-production pipeline demonstrating Microsoft validating multi-agent architectures internally
SWE agent investment - GitHub Copilot evolution toward autonomous agents shows Microsoft betting on agentic DevOps
What enterprises should watch
A2A adoption velocity - If Azure AI Foundry and Copilot Studio ship A2A support quickly, validates strategic importance
Magentic-One expansion - Watch for additional specialized agent types and orchestration patterns
MCP server ecosystem - Third-party MCP servers for enterprise tools indicate market validation
SWE agent capabilities - GitHub Copilot coding agent evolution shows maturity of autonomous development
Multi-agent pricing models - How Microsoft prices orchestrated agent systems reveals economic viability
Learn more
Lab repository:
- LAB513 GitHub Repository - Lab instructions and code samples
- Spec-to-Agents Reference - Detailed implementation patterns
Official resources:
- Magentic-One Research
- Agent2Agent (A2A) Protocol
- GitHub Copilot Coding Agent
- Microsoft Agent Framework
- Microsoft Foundry Community Discord
Technologies demonstrated:
- Microsoft Agent Framework (.NET and Python)
- Magentic-One orchestration pattern
- Agent-to-Agent (A2A) protocol
- Model Context Protocol (MCP)
- GitHub Copilot SWE agents
- Azure OpenAI Codex
Related Ignite sessions:
- AI Fleet Operations (Foundry)
- Building Multi-Agent Systems with Azure AI Foundry
- Multi-Agent Apps with MCP
- Pizza Ordering Agent Lab