talk-data.com

Activities & events


Hands-On: MCP (Model Context Protocol) Bootcamp

Date: 27th September 2025, 9:00 AM to 12:30 PM Eastern Time
Level: Beginners/Intermediate
Registration Link: https://www.eventbrite.com/e/hands-on-mcp-model-context-protocol-bootcamp-tickets-1583073859529?aff=oddtdtcreator

Who Should Attend? This hands-on workshop is for developers, senior software engineers, IT pros, architects, IT managers, citizen developers, product managers, IT leaders, enterprise architects, chief analytics officers, CIOs, CTOs, and other decision-makers who want to learn how to seamlessly integrate AI applications and agents into Azure AI Foundry and Microsoft Copilot Studio using the Model Context Protocol (MCP). Experience with C#, Python, or JavaScript is helpful but not required—no prior AI knowledge needed. And while this isn’t a data & analytics-focused session, data scientists, data stewards, and tech-savvy data protection officers will also find it super valuable.

Description: MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C gives you a universal way to connect devices to different peripherals and accessories, MCP gives AI models a standardized way to connect to data sources and tools.

Why MCP? MCP makes it easier to build agents and complex workflows on top of LLMs. Since LLMs often need to integrate with data and tools, MCP provides:

  • A growing list of pre-built integrations your LLM can plug into directly
  • The flexibility to switch between LLM providers and vendors
  • Best practices for keeping your data secure within your infrastructure
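To make the protocol concrete, here is a minimal MCP server sketch in Python. It assumes the official `mcp` Python SDK (pip install "mcp[cli]"); the tool, resource, and data below are purely illustrative stand-ins, not part of the workshop materials.

    # Minimal MCP server sketch, assuming the official `mcp` Python SDK.
    # The tool and resource here are hypothetical examples.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo-data-server")

    @mcp.tool()
    def row_count(table: str) -> int:
        """Return the row count for a (hypothetical) table."""
        counts = {"orders": 1200, "customers": 450}  # stand-in for a real lookup
        return counts.get(table, 0)

    @mcp.resource("schema://{table}")
    def table_schema(table: str) -> str:
        """Expose a table's schema as context the model can read."""
        return f"Schema for {table}: id INTEGER, created_at TIMESTAMP"

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default; MCP hosts connect here

Any MCP-aware host (such as the ones covered in this workshop) can then discover and call `row_count` without custom glue code, which is the "USB-C port" idea in practice.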

In this half-day, hands-on virtual workshop, Microsoft AI and Business Applications MVP and Microsoft Certified Trainer Prashant G Bhoyar will show you how to use MCP with Azure AI Foundry and Copilot Studio to build powerful AI Agents. Here’s what we’ll cover in detail:

  • What is the Model Context Protocol (MCP)?
  • How to build an MCP server
  • How to integrate AI applications and agents into Azure AI Foundry and Microsoft Copilot Studio using MCP (a client-side sketch follows this list)
  • How MCP connects directly to existing knowledge servers and APIs, so actions and knowledge can be automatically added and updated within your agents
  • How MCP integration simplifies agent building, reduces maintenance, and leverages enterprise-grade security and governance (like Virtual Network integration, Data Loss Prevention policies, and multiple authentication methods)
  • Best practices for real-world use cases
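From the agent side, a host such as Azure AI Foundry or Copilot Studio discovers a server's tools and invokes them over the protocol. The sketch below shows that flow with the same `mcp` Python SDK, assuming the illustrative server from the earlier sketch is saved as server.py; in Foundry or Copilot Studio this wiring is handled for you, so the code only illustrates the protocol flow.

    # Minimal MCP client sketch, assuming the official `mcp` Python SDK
    # and the hypothetical server.py from the previous sketch.
    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main():
        # Launch the example server as a subprocess and talk to it over stdio.
        params = StdioServerParameters(command="python", args=["server.py"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()    # discover capabilities
                print([t.name for t in tools.tools])
                result = await session.call_tool(
                    "row_count", arguments={"table": "orders"})
                print(result.content)                 # tool output for the model

    asyncio.run(main())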

By the end of this bootcamp, you'll be ready to use MCP to create enterprise-grade agents. The labs will feature a mix of Python, C#, and low-code/no-code UI tools—so even if you don't want to write code, you're covered.

Workshop Resources: You’ll get access to Microsoft Copilot, Azure, and Azure OpenAI services (a $500 value) for the hands-on labs. If you already have your own Microsoft Copilot or Azure subscription, you can use that instead.

Attendee Workstation Requirements: Bring your own computer (Windows or Mac) with:

  • Camera, speakers, microphone, and a reliable internet connection (tablets won’t work for this workshop)
  • A modern browser: Microsoft Edge, Google Chrome, Firefox, or Safari
  • Access to www.azure.com and https://copilot.microsoft.com
  • Nice to have: the ability to run C# 10 or Python code using Visual Studio 2022, VS Code 1.66+, Visual Studio for Mac, Rider, or a similar IDE.
RSVPs are on Eventbrite – Hands-On: MCP (Model Context Protocol) Bootcamp
Serge Gershkovich – Head of Product @ SqlDBM, Tobias Macey – host

Summary

In this episode of the Data Engineering Podcast, Serge Gershkovich, head of product at SqlDBM, talks about the socio-technical aspects of data modeling. Serge shares his background in data modeling and highlights its importance as a collaborative process between business stakeholders and data teams. He debunks common misconceptions that data modeling is optional or secondary, emphasizing its crucial role in ensuring alignment between business requirements and data structures. The conversation covers challenges in complex environments, the impact of technical decisions on data strategy, and the evolving role of AI in data management. Serge stresses the need for business stakeholders' involvement in data initiatives and a systematic approach to data modeling, warning against relying solely on technical expertise without considering business alignment.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.

Enterprises today face an enormous challenge: they’re investing billions into Snowflake and Databricks, but without strong foundations, those investments risk becoming fragmented, expensive, and hard to govern. And that’s especially evident in large, complex enterprise data environments. That’s why companies like DirecTV and Pfizer rely on SqlDBM. Data modeling may be one of the most traditional practices in IT, but it remains the backbone of enterprise data strategy. In today’s cloud era, that backbone needs a modern approach built natively for the cloud, with direct connections to the very platforms driving your business forward. Without strong modeling, data management becomes chaotic, analytics lose trust, and AI initiatives fail to scale. SqlDBM ensures enterprises don’t just move to the cloud—they maximize their ROI by creating governed, scalable, and business-aligned data environments. If global enterprises are using SqlDBM to tackle the biggest challenges in data management, analytics, and AI, isn’t it worth exploring what it can do for yours? Visit dataengineeringpodcast.com/sqldbm to learn more.

Your host is Tobias Macey and today I'm interviewing Serge Gershkovich about how and why data modeling is a sociotechnical endeavor.

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you start by describing the activities that you think of when someone says the term "data modeling"?
  • What are the main groupings of incomplete or inaccurate definitions that you typically encounter in conversation on the topic?
  • How do those conceptions of the problem lead to challenges and bottlenecks in execution?
  • Data modeling is often associated with data warehouse design, but it also extends to source systems and unstructured/semi-structured assets. How does the inclusion of other data localities help in the overall success of a data/domain modeling effort?
  • Another aspect of data modeling that often consumes a substantial amount of debate is which pattern to adhere to (star/snowflake, data vault, one big table, anchor modeling, etc.). What are some of the ways that you have found effective to remove that as a stumbling block when first developing an organizational domain representation?
  • While the overall purpose of data modeling is to provide a digital representation of the business processes, there are inevitable technical decisions to be made. What are the most significant ways that the underlying technical systems can help or hinder the goals of building a digital twin of the business?
  • What impact (positive and negative) are you seeing from the introduction of LLMs into the workflow of data modeling?
  • How does tool use (e.g. MCP connection to warehouse/lakehouse) help when developing the transformation logic for achieving a given domain representation?
  • What are the most interesting, innovative, or unexpected ways that you have seen organizations address the data modeling lifecycle?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working with organizations implementing a data modeling effort?
  • What are the overall trends in the ecosystem that you are monitoring related to data modeling practices?

Contact Info

  • LinkedIn

Parting Question

  • From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

  • SqlDBM
  • SAP
  • Joe Reis
  • ERD == Entity Relationship Diagram
  • Master Data Management
  • dbt
  • Data Contracts
  • Data Modeling With Snowflake book by Serge (affiliate link)
  • Type 2 Dimension
  • Data Vault
  • Star Schema
  • Anchor Modeling
  • Ralph Kimball
  • Bill Inmon
  • Sixth Normal Form
  • MCP == Model Context Protocol

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
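Several of the linked modeling patterns are easier to picture with a concrete shape. As one illustrative example (not from the episode), here is what a Type 2 (slowly changing) dimension update looks like as a minimal Python sketch: history is preserved by closing out the current row and appending a new version rather than overwriting in place.

    # Illustrative Type 2 slowly-changing-dimension update; the table and
    # attribute names are made up for the example.
    from datetime import date

    dim_customer = [
        {"customer_id": 1, "city": "Boston", "valid_from": date(2023, 1, 1),
         "valid_to": None, "is_current": True},
    ]

    def scd2_update(rows, customer_id, new_city, as_of):
        for row in rows:
            if row["customer_id"] == customer_id and row["is_current"]:
                if row["city"] == new_city:
                    return  # attribute unchanged, nothing to record
                row["valid_to"] = as_of       # close out the old version
                row["is_current"] = False
        rows.append({"customer_id": customer_id, "city": new_city,
                     "valid_from": as_of, "valid_to": None, "is_current": True})

    scd2_update(dim_customer, 1, "Chicago", date(2025, 6, 1))
    # dim_customer now holds both versions: Boston (closed) and Chicago (current)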

AI/ML Analytics Cloud Computing Data Engineering Data Lakehouse Data Management Data Modelling Data Vault Databricks Datafold DWH LLM Snowflake SQL
Data Engineering Podcast

Build Your Own LLM Powered Data Analytics Agent — Using Free & Open-Source Tools

As large language models continue to reshape the data landscape, one of the most exciting applications is creating intelligent data analytics assistants that make querying and exploring data as simple as asking a question. In this hands-on session, you’ll learn how to build your own interactive assistant using free, open-source tools — no paid licenses or proprietary systems required. We’ll guide you through connecting your data to a language model to enable natural language queries that return automated insights, visualizations, and summaries. Whether you’re a data analyst, business user, or enthusiast, this session will help you turn static datasets into dynamic, conversational experiences.

You’ll also see a live demo of an AI-powered agent processing queries, performing analysis, and returning visual insights — with minimal setup and no complex coding. We’ll share practical design tips to make your assistant more reliable, interpretable, and scalable (a minimal sketch of the agent's core loop follows this listing).

What We Will Cover:

  • Understand how LLMs are transforming data analytics workflows, from dashboards to conversational interfaces
  • Learn how to build a data analytics agent with LLMs using free, open-source tools
  • See a live demo of an LLM-powered agent responding to user queries and generating real-time insights and visualizations
  • Explore real-world applications of LLM analytics agents in business intelligence, reporting, and decision support
  • Discover practical strategies for scaling your LLM-based data assistant across different data types and user roles
  • Interactive Element: Through live demos and audience Q&A, participants will gain hands-on experience building LLM analytics agents, and leave with the tools and framework to start building their own immediately.
Building a Data Analytics Agent using LLMs
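To make the session's premise concrete, here is a minimal sketch of such an agent's core loop in Python. The generate_sql() helper is a hypothetical stand-in for a call to an open-source language model; only the question-to-query-to-result control flow is the point.

    # Sketch of an LLM analytics agent's core loop. generate_sql() is a
    # hypothetical placeholder for a real open-source model call.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 200.0)])

    def generate_sql(question: str) -> str:
        """Stand-in for an LLM that translates a question into SQL.
        A real agent would prompt a model with the schema plus the question."""
        return "SELECT region, SUM(amount) AS total FROM sales GROUP BY region"

    def answer(question: str):
        sql = generate_sql(question)         # 1. model drafts a query
        rows = conn.execute(sql).fetchall()  # 2. agent executes it
        return sql, rows                     # 3. model would then summarize rows

    sql, rows = answer("What are total sales by region?")
    print(sql)
    print(rows)  # e.g. [('APAC', 200.0), ('EMEA', 200.0)]

A production assistant adds the pieces the session covers on top of this loop: schema-aware prompting, validation of the generated SQL, and rendering of results as visualizations.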

Tobias Macey – host, Alex Albu – Tech lead for AI initiatives @ Starburst

Summary

In this episode of the Data Engineering Podcast, Alex Albu, tech lead for AI initiatives at Starburst, talks about integrating AI workloads with the lakehouse architecture. From his software engineering roots to leading data engineering efforts, Alex shares insights on enhancing Starburst's platform to support AI applications, including an AI agent for data exploration and using AI for metadata enrichment and workload optimization. He discusses the challenges of integrating AI with data systems, innovations like SQL functions for AI tasks and vector databases, and the limitations of traditional architectures in handling AI workloads. Alex also shares his vision for the future of Starburst, including support for new data formats and AI-driven data exploration tools.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.

This is a pharmaceutical ad for Soda Data Quality. Do you suffer from chronic dashboard distrust? Are broken pipelines and silent schema changes wreaking havoc on your analytics? You may be experiencing symptoms of Undiagnosed Data Quality Syndrome — also known as UDQS. Ask your data team about Soda. With Soda Metrics Observability, you can track the health of your KPIs and metrics across the business — automatically detecting anomalies before your CEO does. It’s 70% more accurate than industry benchmarks, and the fastest in the category, analyzing 1.1 billion rows in just 64 seconds. And with Collaborative Data Contracts, engineers and business can finally agree on what “done” looks like — so you can stop fighting over column names and start trusting your data again. Whether you’re a data engineer, analytics lead, or just someone who cries when a dashboard flatlines, Soda may be right for you. Side effects of implementing Soda may include: increased trust in your metrics, reduced late-night Slack emergencies, spontaneous high-fives across departments, fewer meetings and less back-and-forth with business stakeholders, and in rare cases, a newfound love of data. Sign up today to get a chance to win a $1000+ custom mechanical keyboard. Visit dataengineeringpodcast.com/soda to sign up and follow Soda’s launch week. It starts June 9th.

This episode is brought to you by Coresignal, your go-to source for high-quality public web data to power best-in-class AI products. Instead of spending time collecting, cleaning, and enriching data in-house, use ready-made multi-source B2B data that can be smoothly integrated into your systems via APIs or as datasets. With over 3 billion data records from 15+ online sources, Coresignal delivers high-quality data on companies, employees, and jobs. It is powering decision-making for more than 700 companies across AI, investment, HR tech, sales tech, and market intelligence industries. A founding member of the Ethical Web Data Collection Initiative, Coresignal stands out not only for its data quality but also for its commitment to responsible data collection practices. Recognized as the top data provider by Datarade for two consecutive years, Coresignal is the go-to partner for those who need fresh, accurate, and ethically sourced B2B data at scale. Discover how Coresignal's data can enhance your AI platforms. Visit dataengineeringpodcast.com/coresignal to start your free 14-day trial.

Your host is Tobias Macey and today I'm interviewing Alex Albu about how Starburst is extending the lakehouse to support AI workloads.

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you start by outlining the interaction points of AI with the types of data workflows that you are supporting with Starburst?
  • What are some of the limitations of warehouse and lakehouse systems when it comes to supporting AI systems?
  • What are the points of friction for engineers who are trying to employ LLMs in the work of maintaining a lakehouse environment?
  • Methods such as tool use (exemplified by MCP) are a means of bolting AI models onto systems like Trino. What are some of the ways that is insufficient or cumbersome?
  • Can you describe the technical implementation of the AI-oriented features that you have incorporated into the Starburst platform?
  • What are the foundational architectural modifications that you had to make to enable those capabilities?
  • For the vector storage and indexing, what modifications did you have to make to Iceberg?
  • What was your reasoning for not using a format like Lance?
  • For teams who are using Starburst and your new AI features, what are some examples of the workflows that they can expect?
  • What new capabilities are enabled by virtue of embedding AI features into the interface to the lakehouse?
  • What are the most interesting, innovative, or unexpected ways that you have seen Starburst AI features used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI features for Starburst?
  • When is Starburst/lakehouse the wrong choice for a given AI use case?
  • What do you have planned for the future of AI on Starburst?

Contact Info

  • LinkedIn

Parting Question

  • From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

  • Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links

  • Starburst
  • Podcast Episode
  • AWS Athena
  • MCP == Model Context Protocol
  • LLM Tool Use
  • Vector Embeddings
  • RAG == Retrieval Augmented Generation
  • AI Engineering Podcast Episode
  • Starburst Data Products
  • Lance
  • LanceDB
  • Parquet
  • ORC
  • pgvector
  • Starburst Icehouse

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
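The vector storage and RAG topics above ultimately rest on one operation: nearest-neighbor search over embeddings. Here is a deliberately naive Python sketch of that operation; the vectors are made up, and a real system would use an engine such as pgvector or LanceDB (or Iceberg-backed storage, as discussed in the episode) instead of a linear scan.

    # Naive nearest-neighbor search over embeddings -- the core operation
    # behind vector databases and RAG. Vectors here are illustrative; real
    # embeddings come from a model and live in an indexed store.
    import math

    documents = {
        "doc_a": [0.1, 0.9, 0.0],
        "doc_b": [0.8, 0.1, 0.1],
        "doc_c": [0.2, 0.7, 0.1],
    }

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = (math.sqrt(sum(a * a for a in u))
                * math.sqrt(sum(b * b for b in v)))
        return dot / norm

    def top_k(query_vec, k=2):
        # Score every document against the query and keep the k closest.
        scored = sorted(documents.items(),
                        key=lambda item: cosine(query_vec, item[1]),
                        reverse=True)
        return scored[:k]

    print(top_k([0.15, 0.8, 0.05]))  # doc_a and doc_c rank highest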

AI/ML Analytics API Dashboard Data Collection Data Contracts Data Engineering Data Lakehouse Data Management Data Quality Datafold Iceberg KPI Lance LLM Python SQL Trino Vector DB
Data Engineering Podcast