talk-data.com

Topic: Large Language Models (LLM)

Tags: nlp, ai, machine_learning

34 tagged activities

Activity Trend (2020-Q1 to 2026-Q1): peak of 158 activities per quarter

Activities

Showing results filtered by: Microsoft Ignite 2025

Learn to leverage agent-framework, the new unified platform from the Semantic Kernel and AutoGen engineering teams, to build A2A-compatible agents similar to Magentic-One. Use SWE agents (GitHub Copilot coding agent and Codex with Azure OpenAI models) to accelerate development. Implement MCP tools for secure enterprise agentic workflows. Experience hands-on building, deploying, and orchestrating multi-agent systems with pre-release capabilities. Note: Contains embargoed content.

Please RSVP and arrive at least 5 minutes before the start time, at which point remaining spaces are open to standby attendees.
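
The agent-framework platform above is pre-release, so as a grounded reference point, here is a minimal single-agent tool-calling loop written directly against the Azure OpenAI chat-completions API. The endpoint, deployment name, and the get_build_status tool are placeholders, not session material.

```python
# Minimal tool-calling agent loop against Azure OpenAI (placeholder credentials).
import json
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-KEY",                                       # placeholder
    api_version="2024-06-01",
)

def get_build_status(repo: str) -> str:
    """Stand-in for a real tool an SWE agent might call (e.g., CI status)."""
    return json.dumps({"repo": repo, "status": "passing"})

tools = [{
    "type": "function",
    "function": {
        "name": "get_build_status",
        "description": "Return CI status for a repository.",
        "parameters": {
            "type": "object",
            "properties": {"repo": {"type": "string"}},
            "required": ["repo"],
        },
    },
}]

messages = [{"role": "user", "content": "Is CI for acme/widgets green?"}]
resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

# Assumes the model elects to call the tool; a production loop would branch.
call = resp.choices[0].message.tool_calls[0]
messages.append(resp.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": call.id,
    "content": get_build_status(**json.loads(call.function.arguments)),
})
final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
print(final.choices[0].message.content)
```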

Fast and flexible inference on open-source AI models at scale

Run open-source AI models of your choice with flexibility—from local environments to cloud deployments using Azure Container Apps and serverless GPUs for fast, cost-efficient inferencing. You will also learn how AKS powers scalable, high-performance LLM operations with fine-tuned control, giving you confidence to deploy your models your way. You’ll leave with a clear path to run custom and OSS models with agility and cost clarity.
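
Many OSS serving layers (vLLM, for instance) expose an OpenAI-compatible endpoint, so the client code stays identical whether the model runs locally, on Azure Container Apps, or on AKS. A minimal sketch, assuming such a server is already deployed at the placeholder URL below:

```python
# Query a self-hosted OSS model through an OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://my-llm.example.azurecontainerapps.io/v1",  # placeholder URL
    api_key="unused",  # many self-hosted servers ignore the key; placeholder
)

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # whichever model the server loaded
    messages=[{"role": "user", "content": "Summarize serverless GPUs in one line."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```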

Pushing limits of supercomputing innovation on Azure AI Infra

Training efficiency starts with precision. This session explores Azure supercomputing validation—from GPU kernels to LLAMA pretraining and large-scale model training. The process detects bottlenecks early, reduces cost, and boosts performance. Customers gain predictable throughput, faster training, and confidence in Azure’s readiness for multi-billion parameter models. Attendees will gain practical insights and engage directly with the engineers driving these innovations.

Running AI on Azure Storage: Fast, secure, and scalable

AI workloads require a fast and secure data infrastructure that works seamlessly. Learn how Azure Blob storage scales for OpenAI, how Azure Container Storage and Blobfuse2 ensure GPUs never sit idle, how to simplify integration with Ray/KAITO for AI apps on AKS, and how Blob storage integrates with AI services and frameworks to securely convert your enterprise data to AI-ready data. You’ll leave with best practices to drive performance, security, and developer velocity with Azure Storage.
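
As a concrete starting point, here is a minimal sketch of staging a dataset in Blob Storage with the azure-storage-blob SDK. The account URL, container name, and paths are placeholders; the Blobfuse2 mount path in the final comment is illustrative:

```python
# Stage a training file in Azure Blob Storage for downstream GPU jobs.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://mystorageacct.blob.core.windows.net",  # placeholder
    credential=DefaultAzureCredential(),  # keyless auth via Entra ID
)
container = service.get_container_client("training-data")  # placeholder container

with open("corpus.jsonl", "rb") as f:
    container.upload_blob(name="datasets/corpus.jsonl", data=f, overwrite=True)

print([b.name for b in container.list_blobs(name_starts_with="datasets/")])

# Pods on AKS would typically read this through a Blobfuse2 mount instead,
# e.g. /mnt/blob/datasets/corpus.jsonl, so GPUs stream data as local files.
```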

AI success starts with the right data foundation. In this session, you will see how the Nasuni File Data Platform consolidates silos, boosts resilience, and protects at scale with immutable snapshots and fast ransomware recovery. You will hear how enterprises power Microsoft Copilot, Graph, and Azure OpenAI with governed, high-quality file data. You will walk away with an actionable blueprint to cut tech debt, reduce risk, and advance toward frontier-firm performance.

AI fine-tuning in Microsoft Foundry to make your agents unstoppable

Fine-tuning is your key to building agents that actually work. This demo-driven session showcases the latest in Microsoft Foundry, including Azure OpenAI and OSS model customization, and how to turn models into agents that are accurate, consistent, and production-ready. Through real-world scenarios, you’ll learn when fine-tuning makes a difference and how to apply the right technique for tool calling, data extraction, and workflow execution so your agents don’t just respond, they perform.
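
For orientation, a minimal sketch of the supervised fine-tuning flow through the OpenAI SDK pointed at Azure OpenAI. The endpoint, API version, and base-model name are placeholders, and training.jsonl stands for a hypothetical chat-formatted dataset of tool calls or extractions:

```python
# Upload training data and launch a fine-tuning job (placeholder credentials).
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-KEY",                                       # placeholder
    api_version="2024-10-21",
)

# 1) Upload a JSONL file of {"messages": [...]} training examples.
training = client.files.create(file=open("training.jsonl", "rb"), purpose="fine-tune")

# 2) Start the job on a fine-tunable base model (name is a placeholder).
job = client.fine_tuning.jobs.create(training_file=training.id, model="gpt-4o-mini")
print(job.id, job.status)

# 3) Poll until the job completes; the resulting model is then deployed
#    and invoked like any other chat model.
print(client.fine_tuning.jobs.retrieve(job.id).status)
```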

Introducing Microsoft Foundry Tools

AI agents need tools to take actions and complete their workflows: tools that can parse documents, transcribe call recordings, and perform custom translation, all with LLMs wrapped within them. In this session, we are introducing a new suite of production-ready tools in Microsoft Foundry, designed to plug seamlessly into your agentic AI apps, either through APIs or as MCP servers.

Inference at record speed with Azure ND Virtual Machines

Azure sets new inference records with 865K and 1.1M tokens/sec on ND GB200/GB300 v6 VMs. These results stem from deep stack optimization—from GPU kernels like GEMM and attention to multi-node scaling. Using LLAMA benchmarks, we’ll show how model architecture and hardware codesign drive throughput and efficiency. Customers benefit from faster time-to-value, lower cost per token, and production-ready infrastructure. Attendees can connect with Azure engineers to discuss best practices.
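
To make those throughput figures concrete, a back-of-envelope cost-per-token calculation. The tokens/sec number comes from the abstract; the hourly price is an assumed placeholder, not an Azure quote:

```python
# Rough cost-per-token arithmetic from aggregate throughput.
throughput_tps = 1_100_000      # tokens/sec on ND GB300 v6 (from the abstract)
assumed_price_per_hour = 300.0  # ASSUMED placeholder USD/hour, illustrative only

tokens_per_hour = throughput_tps * 3600
cost_per_1m_tokens = assumed_price_per_hour / tokens_per_hour * 1_000_000
print(f"{tokens_per_hour:,.0f} tokens/hour -> ${cost_per_1m_tokens:.4f} per 1M tokens")
```

The point of the exercise: at fixed hardware cost, cost per token falls linearly as stack optimizations raise tokens/sec.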

Over the past few years, we’ve explored using large language models with external data and tools, facing many challenges. The Model Context Protocol (MCP) addresses these by standardizing how data and tools connect. In this session, we’ll demystify MCP, its purpose and architecture, and show how it enables precise tuning of models, contextual reuse, and safe delegation. While designed for developers and leads, it will help anyone assess if MCP fits their LLM projects.
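
To give a feel for how small the server side of MCP can be, here is a minimal sketch using the official Python SDK (pip install mcp). The server name and word_count tool are illustrative only:

```python
# Minimal MCP server exposing a single tool over stdio.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docs-tools")  # illustrative server name

@mcp.tool()
def word_count(text: str) -> int:
    """Count words in a document chunk before it is sent to the model."""
    return len(text.split())

if __name__ == "__main__":
    # An MCP-aware client (IDE, agent runtime) launches this process and
    # discovers word_count through the protocol's standard handshake.
    mcp.run(transport="stdio")
```

Because discovery and invocation are standardized, the same server plugs into any MCP-aware client without bespoke glue code.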

AI builder’s guide to agent development in Foundry Agent Service

Build, operate, and scale AI agents with Foundry Agent Service. Learn how to author agents, connect tools and data, evaluate performance, and deploy to a secure runtime for production. See how to bring OpenAI API–based projects into Foundry with minimal changes while gaining enterprise-grade governance, observability, and interoperability through the Model Context Protocol and agent-to-agent capabilities.

AI performance extends beyond chip metrics; it relies on integrated hardware, software, and infrastructure. Traditional benchmarks fall short, so NVIDIA DGX Cloud Benchmarking offers a standardized framework to evaluate large-scale AI workloads. NVIDIA and Azure present an end-to-end benchmarking workflow, sharing optimization strategies for deploying and tuning production-ready LLMs on Azure.

Agentic AI is swiftly transforming opportunities and risks in financial services. As banks use AI for secure experiences, criminals exploit these same technologies to create sophisticated scams and expand mule networks. OpenAI’s research underscores the urgency of these challenges. In this keynote, BioCatch will show how behavioral biometrics and fraud analytics, powered by Microsoft Cloud, help banks disrupt scams, dismantle mule networks, and rebuild digital trust worldwide.

Learn how partners can build scalable, secure AI solutions with Microsoft Foundry. Integrate models from OpenAI, Cohere, Mistral, Hugging Face, and Meta Llama using Azure Databricks, Cosmos DB, Snowflake, and SQL. Foundry enables orchestration of agents, model customization, and secure data workflows—all within environments like GitHub, Visual Studio, and Copilot Studio.

Build standout AI products fast with Microsoft Foundry—LLMs and Agents. Learn patterns to ship apps grounded on enterprise data via OneLake and connected platforms (Fabric, Snowflake, CosmosDB, SQL, etc.). We’ll cover retrieval, tool-use, guardrails, and evaluation—plus a lean dev loop that turns experiments into production while meeting responsible AI standards.
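
The retrieval pattern reduces to a small loop: fetch relevant text, pin it in the system prompt, and constrain the model to answer from it. A minimal sketch with a stubbed retriever; the model name is a placeholder, and a real deployment would back retrieve() with OneLake, Fabric, or another connected store:

```python
# Lean retrieval-then-generate loop with a stubbed retriever.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY; swap in your Azure/Foundry client

def retrieve(query: str) -> list[str]:
    """Stub: replace with vector or keyword search over enterprise data."""
    return ["Contoso's refund window is 30 days for unopened items."]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer ONLY from the context; otherwise say 'not found'.\n"
                        f"Context:\n{context}"},
            {"role": "user", "content": query},
        ],
    )
    return resp.choices[0].message.content

print(answer("What is the refund window?"))
```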

AI powered automation & multi-agent orchestration in Microsoft Foundry

Build multi-agent systems the right way with Microsoft Foundry. Go from single-agent prototypes to fleet-level orchestration using the Foundry Agent Framework (Semantic Kernel + AutoGen), shared state, human-in-the-loop, OpenTelemetry, MCP toolchains, A2A, and the Activity Protocol. Bring frameworks like LangGraph and the OpenAI Agents SDK, then deploy as containerized, governed, observable agents on Foundry.

Delivered in a silent stage breakout.
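
As a pattern illustration only (the Foundry Agent Framework APIs are pre-release and not shown here), a generic writer-then-reviewer pipeline with shared state, written against plain chat completions; the model name is a placeholder:

```python
# Sequential two-agent orchestration sharing a simple state dict.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY

def run_agent(instructions: str, task: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": instructions},
                  {"role": "user", "content": task}],
    )
    return resp.choices[0].message.content

state = {"task": "Draft a two-sentence release note for the new export feature."}
state["draft"] = run_agent("You are a concise technical writer.", state["task"])
state["final"] = run_agent(
    "You are a strict reviewer. Fix accuracy and tone; return the final text.",
    state["draft"],
)
print(state["final"])  # a human-in-the-loop gate would approve this in production
```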

As LLMs grow, efficient inference requires multi-node execution—introducing challenges in orchestration, scheduling, and low-latency GPU-to-GPU data transfers. Hardware like the GB200 NVL72 delivers massive scale-up compute, but truly scalable inference also depends on advanced software. Explore how open-source frameworks like NVIDIA Dynamo, combined with Azure’s AKS managed Kubernetes service, unlock new levels of performance and cost-efficiency.

KPMG’s AI-driven platform transforms insurance claims management using Microsoft Azure and OpenAI. Informed by live client use cases, the solution analyzes large datasets, identifies high-value opportunities, and generates actionable insights. The solution improves operational efficiency, accelerates decision-making, and helps insurers unlock hidden value across complex claims portfolios.

The explosive growth of cloud data—and its importance for analytics and AI—demands a new approach to protection and access. Traditional backup tools weren’t built to handle hyperscale workloads, such as Azure Blob Storage and Cosmos DB, resulting in costly silos. Discover how a cloud-native platform delivers hyperscale protection, automates operations, reduces TCO, and turns backups into a live, queryable data lake for analytics in Azure Synapse, Microsoft Fabric, and Azure OpenAI.