talk-data.com

Event

Microsoft Ignite 2025

2025-11-17 – 2025-11-21 · Microsoft Ignite

Activities tracked

34

Filtering by: LLM

Sessions & talks

Showing 1–25 of 34 · Newest first

Build A2A and MCP Systems using SWE Agents and agent-framework

2025-11-21
talk
Govind Kamtamneni (Microsoft), Mark Wallace (Microsoft)

Learn to leverage agent-framework, the new unified platform from the Semantic Kernel and AutoGen engineering teams, to build A2A-compatible agents similar to Magentic-One. Use SWE Agents (GitHub Copilot coding agent and Codex with Azure OpenAI models) to accelerate development. Implement MCP tools for secure enterprise agentic workflows. Experience hands-on building, deploying, and orchestrating multi-agent systems with pre-release capabilities. Note: Contains embargoed content.

Please RSVP and arrive at least 5 minutes before the start time, at which point remaining spaces are open to standby attendees.

Fast and flexible inference on open-source AI models at scale

2025-11-21
breakout
Mehrdad Abdolghafari (Royal Bank of Canada (RBC)), Cary Chai (Microsoft), Sachi Desai (Microsoft)

Run open-source AI models of your choice with flexibility—from local environments to cloud deployments using Azure Container Apps and serverless GPUs for fast, cost-efficient inferencing. You will also learn how AKS powers scalable, high-performance LLM operations with fine-tuned control, giving you confidence to deploy your models your way. You’ll leave with a clear path to run custom and OSS models with agility and cost clarity.

Pushing limits of supercomputing innovation on Azure AI Infra

2025-11-21
breakout
Nitin Nagarkatte (Microsoft), Hugo Affaticati (Microsoft)

Training efficiency starts with precision. This session explores Azure supercomputing validation—from GPU kernels to LLAMA pretraining and large-scale model training. The process detects bottlenecks early, reduces cost, and boosts performance. Customers gain predictable throughput, faster training, and confidence in Azure’s readiness for multi-billion parameter models. Attendees will gain practical insights and engage directly with the engineers driving these innovations.

Running AI on Azure Storage: Fast, secure, and scalable

2025-11-20
breakout
Saurabh Sensharma (Microsoft), Vamshi Kommineni (Microsoft), Natalie Mao (Rakuten Group)

AI workloads require a fast and secure data infrastructure that works seamlessly. Learn how Azure Blob storage scales for OpenAI, how Azure Container Storage and Blobfuse2 ensure GPUs never sit idle, how to simplify integration with Ray/KAITO for AI apps on AKS, and how Blob storage integrates with AI services and frameworks to securely convert your enterprise data to AI-ready data. You’ll leave with best practices to drive performance, security, and developer velocity with Azure Storage.

Turn data into wisdom to unlock AI’s potential

2025-11-20
theater
Nick Burling (Nasuni)

AI success starts with the right data foundation. In this session, you will see how the Nasuni File Data Platform consolidates silos, boosts resilience, and protects at scale with immutable snapshots and fast ransomware recovery. You will hear how enterprises power Microsoft Copilot, Graph, and Azure OpenAI with governed, high-quality file data. At the end of this session you will walk away with an actionable blueprint to cut tech debt, reduce risk, and advance toward frontier-firm performance.

AI fine-tuning in Microsoft Foundry to make your agents unstoppable

2025-11-20
breakout
Ramachandra Kota (Docusign), Omkar More (Microsoft), Alicia Frame (Microsoft)

Fine-tuning is your key to building agents that actually work. This demo-driven session showcases the latest in Microsoft Foundry, including Azure OpenAI and OSS model customization, and how to turn models into agents that are accurate, consistent, and production-ready. Through real-world scenarios, you’ll learn when fine-tuning makes a difference and how to apply the right technique for tool calling, data extraction, and workflow execution so your agents don’t just respond, they perform.

Build A2A and MCP Systems using SWE Agents and agent-framework

2025-11-20
talk
Govind Kamtamneni (Microsoft), Mark Wallace (Microsoft)

Learn to leverage agent-framework, the new unified platform from the Semantic Kernel and AutoGen engineering teams, to build A2A-compatible agents similar to Magentic-One. Use SWE Agents (GitHub Copilot coding agent and Codex with Azure OpenAI models) to accelerate development. Implement MCP tools for secure enterprise agentic workflows. Experience hands-on building, deploying, and orchestrating multi-agent systems with pre-release capabilities. Note: Contains embargoed content.

Please RSVP and arrive at least 5 minutes before the start time, at which point remaining spaces are open to standby attendees.

Introducing Microsoft Foundry Tools

2025-11-20
breakout
Xiaoying Guo (Microsoft), Vinod Kurpad (Microsoft), Linda Li (Microsoft)

AI agents need tools to take actions and complete their workflows: tools that can parse documents, transcribe call recordings, and do custom translation, all with LLMs wrapped within them. In this session, we are introducing a new suite of production-ready tools in Microsoft Foundry, designed to seamlessly plug into your agentic AI apps, either through APIs or as MCP servers.
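
The tools-as-MCP-servers model the abstract describes can be pictured with the open-source MCP Python SDK. The sketch below is illustrative only; the server name, tool name, and stub logic are hypothetical, and it is not the actual Microsoft Foundry Tools surface:

```python
# Illustrative only: a hypothetical "parse_document" tool exposed over MCP
# using the open-source MCP Python SDK (pip install "mcp[cli]"). This is not
# the Microsoft Foundry Tools API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("doc-tools")  # hypothetical server name


@mcp.tool()
def parse_document(path: str) -> str:
    """Return the plain text of a local document (stub implementation)."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        return f.read()


if __name__ == "__main__":
    # Serve over stdio so any MCP-capable agent host can attach the tool.
    mcp.run()
```

Any MCP-capable agent host can then discover and call parse_document without bespoke glue code, which is the plug-in model the session describes.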

Inference at record speed with Azure ND Virtual Machines

2025-11-20
breakout
Nitin Nagarkatte (Microsoft), Hugo Affaticati (Microsoft)

Azure sets new inference records with 865K and 1.1M tokens/sec on ND GB200/GB300 v6 VMs. These results stem from deep stack optimization—from GPU kernels like GEMM and attention to multi-node scaling. Using LLAMA benchmarks, we’ll show how model architecture and hardware codesign drive throughput and efficiency. Customers benefit from faster time-to-value, lower cost per token, and production-ready infrastructure. Attendees can connect with Azure engineers to discuss best practices.
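
Throughput figures like these translate into cost per token once an hourly infrastructure price is fixed. A back-of-the-envelope sketch, using a made-up hourly rate rather than an actual Azure price:

```python
# Rough cost-per-token arithmetic from a measured throughput figure.
# The hourly rate is a placeholder, not Azure pricing; substitute real numbers.
tokens_per_second = 1_100_000    # the 1.1M tokens/sec figure quoted above
hourly_rate_usd = 100.0          # hypothetical cost of the serving VMs per hour

tokens_per_hour = tokens_per_second * 3600
cost_per_million_tokens = hourly_rate_usd / (tokens_per_hour / 1_000_000)
print(f"${cost_per_million_tokens:.4f} per 1M tokens")  # about $0.0253 with these inputs
```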

Build A2A and MCP Systems using SWE Agents and agent-framework

2025-11-19
talk
Govind Kamtamneni (Microsoft), Mark Wallace (Microsoft)

Learn to leverage agent-framework, the new unified platform from the Semantic Kernel and AutoGen engineering teams, to build A2A-compatible agents similar to Magentic-One. Use SWE Agents (GitHub Copilot coding agent and Codex with Azure OpenAI models) to accelerate development. Implement MCP tools for secure enterprise agentic workflows. Experience hands-on building, deploying, and orchestrating multi-agent systems with pre-release capabilities. Note: Contains embargoed content.

Please RSVP and arrive at least 5 minutes before the start time, at which point remaining spaces are open to standby attendees.

Understanding Model Context Protocol (MCP)

2025-11-19
theater
Roelant Dieben (Sopra Steria)

Over the past few years, we’ve explored using large language models with external data and tools, facing many challenges. The Model Context Protocol (MCP) addresses these by standardizing how data and tools connect. In this session, we’ll demystify MCP, its purpose, and its architecture, and show how it enables precise tuning of models, contextual reuse, and safe delegation. While designed for developers and leads, this session will help anyone assess whether MCP fits their LLM projects.
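
From the host application's side, standardizing how data and tools connect means one handshake works for every compliant server. A minimal sketch with the open-source MCP Python SDK, assuming a local server script (the "server.py" command is a placeholder):

```python
# Minimal MCP host-side sketch: launch a server over stdio, initialize the
# session, and discover its tools. "server.py" is a placeholder command.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            # The same discovery call works against any MCP-compliant server.
            print([tool.name for tool in tools.tools])


asyncio.run(main())
```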

AI builder’s guide to agent development in Foundry Agent Service

2025-11-19
breakout
Dan Taylor (Microsoft), Travis Wilson (Microsoft), Salman Quazi (Microsoft)

Build, operate, and scale AI agents with Foundry Agent Service. Learn how to author agents, connect tools and data, evaluate performance, and deploy to a secure runtime for production. See how to bring OpenAI API–based projects into Foundry with minimal changes while gaining enterprise-grade governance, observability, and interoperability through the Model Context Protocol and agent-to-agent capabilities.
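
The "minimal changes" claim for OpenAI API-based projects usually amounts to re-pointing the existing client at an Azure-hosted endpoint. A hedged sketch with the standard openai Python package; the endpoint, API version, and deployment name are placeholders, and the exact Foundry project wiring may differ from plain Azure OpenAI:

```python
# Sketch: re-point an existing OpenAI-SDK app at an Azure-hosted endpoint.
# Endpoint, API version, and deployment name below are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # your *deployment* name, not necessarily a raw model ID
    messages=[{"role": "user", "content": "What does an agent runtime do?"}],
)
print(response.choices[0].message.content)
```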

Interactive Session: Benchmarking LLM performance on Azure with NVIDIA Recipes

2025-11-19
breakout
Hannah Coutand (NVIDIA DGX Cloud), Jer-Ming Chia (Microsoft)

AI performance extends beyond chip metrics; it relies on integrated hardware, software, and infrastructure. Traditional benchmarks fall short, so NVIDIA DGX Cloud Benchmarking offers a standardized framework to evaluate large-scale AI workloads. NVIDIA and Azure present an end-to-end benchmarking workflow, sharing optimization strategies for deploying and tuning production-ready LLMs on Azure.

Safeguarding against financial crimes in the new era of agentic AI

2025-11-19
theater
Matthew Durdella (BioCatch)

Agentic AI is swiftly transforming opportunities and risks in financial services. As banks use AI for secure experiences, criminals exploit these same technologies to create sophisticated scams and expand mule networks. OpenAI’s research underscores the urgency of these challenges. In this keynote, BioCatch will show how behavioral biometrics and fraud analytics, powered by Microsoft Cloud, help banks disrupt scams, dismantle mule networks, and rebuild digital trust worldwide.

Building Partner Solutions with Microsoft Foundry

2025-11-19
theater
Daniel Epstein (Microsoft)

Learn how partners can build scalable, secure AI solutions with Microsoft Foundry. Integrate models from OpenAI, Cohere, Mistral, Hugging Face, and Meta Llama using Azure Databricks, Cosmos DB, Snowflake, and SQL. Foundry enables orchestration of agents, model customization, and secure data workflows—all within environments like GitHub, Visual Studio, and Copilot Studio.

Partner: Build Grounded AI Agents with Microsoft Foundry & OneLake

2025-11-19
theater
Ilias Jennane (Microsoft)

Build standout AI products fast with Microsoft Foundry—LLMs and Agents. Learn patterns to ship apps grounded on enterprise data via OneLake and connected platforms (Fabric, Snowflake, Cosmos DB, SQL, etc.). We’ll cover retrieval, tool-use, guardrails, and evaluation—plus a lean dev loop that turns experiments into production while meeting responsible AI standards.

AI powered automation & multi-agent orchestration in Microsoft Foundry

2025-11-19
breakout
Christof Gebhart (BMW Group), Tina Manghnani (Microsoft), Mark Wallace (Microsoft), Shawn Henry (Microsoft)

Build multi-agent systems the right way with Microsoft Foundry. Go from single-agent prototypes to fleet-level orchestration using the Foundry Agent Framework (Semantic Kernel + AutoGen), shared state, human-in-the-loop, OpenTelemetry, MCP toolchains, A2A, and the Activity Protocol. Bring frameworks like LangGraph and the OpenAI Agents SDK, then deploy as containerized, governed, observable agents on Foundry.

Delivered in a silent stage breakout.

Interactive Session: Serving LLMs on GPU systems at scale with NVIDIA Dynamo

2025-11-19
breakout
Anish Maddipoti (NVIDIA)

As LLMs grow, efficient inference requires multi-node execution—introducing challenges in orchestration, scheduling, and low-latency GPU-to-GPU data transfers. Hardware like the GB200 NVL72 delivers massive scale-up compute, but truly scalable inference also depends on advanced software. Explore how open-source frameworks like NVIDIA Dynamo, combined with Azure’s AKS managed Kubernetes service, unlock new levels of performance and cost-efficiency.

Achieving smarter claims management with AI-optimized subrogation

2025-11-18
theater
Daniel Wagenknecht (KPMG AG), Robert Badawy (Microsoft), Martin Merck (KPMG AG)

KPMG’s AI-driven platform transforms insurance claims management using Microsoft Azure and OpenAI. Informed by live client use cases, the solution analyzes large datasets, identifies high-value opportunities, and generates actionable insights. The solution improves operational efficiency, accelerates decision-making, and helps insurers unlock hidden value across complex claims portfolios.

Make backups an asset: activating Azure data for AI and analytics

2025-11-18
theater
Liore Shai (Eon)

The explosive growth of cloud data—and its importance for analytics and AI—demands a new approach to protection and access. Traditional backup tools weren’t built to handle hyperscale workloads, such as Azure Blob Storage and Cosmos DB, resulting in costly silos. Discover how a cloud-native platform delivers hyperscale protection, automates operations, reduces TCO, and turns backups into a live, queryable data lake for analytics in Azure Synapse, Microsoft Fabric, and Azure OpenAI.

Azure AI Infra updates to power frontier and enterprise workloads

2025-11-18
breakout
Matt Vegas (Microsoft), Param Shah (Microsoft)

As AI workloads grow, infrastructure must keep pace. This session covers Azure’s silicon-to-systems optimization, hardware-software codesign, and datacenter advances in cooling, power, network, and security. Learn about Azure’s latest AI infrastructure powered by NVIDIA Grace Blackwell Superchips and Quantum-2 InfiniBand, including ND GB200/GB300 VMs with exascale performance and 860K+ tokens/sec on LLAMA 70B. We’ll also cover NC H100 and NC RTX Blackwell VMs for enterprise inferencing.

Make smarter model choices: Anthropic, OpenAI & more on Microsoft Foundry

2025-11-18
breakout
Steve Sweetman (Microsoft), Naomi Moneypenny (Microsoft), Keiji Kanazawa (Microsoft), Tao Zhang (Manus), Bill Flannery (Wolters Kluwer)

With models from OpenAI, Anthropic, Cohere, Black Forest Labs, Mistral AI, Meta, xAI and more, Microsoft Foundry brings the world’s most advanced AI models together in one secure, unified platform. Learn how to select, evaluate, and intelligently route models to achieve the right balance of cost, latency, and accuracy—all while maintaining enterprise-grade security, responsible AI practices, and built-in governance across every workload.

Secure and Protect AI Usage in your Organization with DSPM for AI

2025-11-18
theater
Tushar Kumar (Codec Ireland)

As organizations adopt AI to boost productivity and creativity, concerns grow about data security and possible leaks through generative AI tools like Copilot, ChatGPT, and Gemini. Microsoft’s Data Security Posture Management (DSPM) for AI helps organizations monitor AI activities, enforce data protection policies, and meet regulatory standards, allowing safe, productive AI use without compromising sensitive information.

Build A2A and MCP Systems using SWE Agents and agent-framework

2025-11-18
talk
Govind Kamtamneni (Microsoft), Mark Wallace (Microsoft)

Learn to leverage agent-framework, the new unified platform from the Semantic Kernel and AutoGen engineering teams, to build A2A-compatible agents similar to Magentic-One. Use SWE Agents (GitHub Copilot coding agent and Codex with Azure OpenAI models) to accelerate development. Implement MCP tools for secure enterprise agentic workflows. Experience hands-on building, deploying, and orchestrating multi-agent systems with pre-release capabilities. Note: Contains embargoed content.

Please RSVP and arrive at least 5 minutes before the start time, at which point remaining spaces are open to standby attendees.

Accelerate developer productivity with AI-native documentation

talk
Ethan Palm (Mintlify)

AI has changed how developers find answers - your docs are now context for both humans and LLMs.

Learn how to make your documentation AI-native with Mintlify.