talk-data.com

Topic

RAG

Retrieval Augmented Generation (RAG)

ai machine_learning llm

51 tagged

Activity Trend: 83 peak/qtr (2020-Q1 to 2026-Q1)

Activities

51 activities · Newest first

The AI Engineer career has become one of the most sought-after in the technology market. But what is actually expected of this professional in practice? In this episode of Data Hackers, we take an in-depth look at the path to becoming an AI Engineer, covering the key technical skills, how the role differs from other data and engineering positions, academic training versus hands-on experience, the day-to-day work inside companies, and the impact of Generative AI, RAG, and AI Agents on the role. To enrich the debate, we use data from the State of Data Brazil survey to understand the current state of the Brazilian market, identify trends in demand for skills, the most sought-after professional profiles, and the main challenges faced by those looking to enter or advance in this career. If you want to move into AI, prepare for real opportunities, or figure out whether this is your next professional step in data, this episode is for you. Don't forget to fill out the State of Data Brazil survey: https://www.stateofdata.com.br/

Our Data Hackers panel: Paulo Vasconcellos — Co-founder of Data Hackers and Principal Data Scientist at Hotmart. Gabriel Lages — Co-founder of Data Hackers and Director of Data & AI at Hotmart.

We go inside Mediahuis to see how a small GenAI team is transforming newsroom workflows without losing editorial judgment. From RAG search to headline suggestions and text-to-video assists, this episode shares what works, what doesn't, and how adoption spreads across brands.

You'll hear about:
- Ten priority use cases shipped across the group
- Headline and summary suggestions that boost clarity and speed
- RAG-powered search turning archives into instant context
- Text-to-video tools that free up local video teams
- The hurdles of adoption, quality, and scaling prototypes into production

Their playbook blends engineering discipline with editorial empathy: use rules where you can, prompt carefully when you must, and always keep journalists in the loop. We also cover policies, guardrails, AI literacy, and how to survive model churn with reusable templates and grounded tests. The result: a practical path to AI in media — protecting judgment, raising quality, and scaling tools without losing each brand's voice. 🎧 If this sparks ideas for your newsroom or product team, follow the show, share with a colleague, and leave a quick review with your favorite takeaway.
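The RAG-powered archive search described above follows a simple retrieve-then-prompt pattern. A minimal sketch, assuming a toy bag-of-words similarity in place of real embeddings, and an invented three-document archive:

```python
from collections import Counter
import math

# Toy archive standing in for a newsroom document store (illustrative data).
ARCHIVE = [
    "The council approved the new harbour bridge budget in March.",
    "Local elections saw record turnout across the province.",
    "The harbour bridge project was first proposed a decade ago.",
]

def vectorize(text):
    """Bag-of-words term counts; a real system would use embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Return the k archive documents most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(ARCHIVE, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    """Ground the model by prepending retrieved context to the question."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the history of the harbour bridge?"))
```

The grounding step (answer only from retrieved context) is what turns an archive into "instant context" rather than relying on the model's parametric memory.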

Summary: In this episode Preeti Somal, EVP of Engineering at Temporal, talks about the durable execution model and how it reshapes the way teams build reliable, stateful systems for data and AI. She explores Temporal's code-first programming model — workflows, activities, task queues, and replay — and how it eliminates hand-rolled retry, checkpoint, and error-handling scaffolding while letting data remain where it lives. Preeti shares real-world patterns for replacing DAG-first orchestration, integrating application and data teams through signals and Nexus for cross-boundary calls, and using Temporal to coordinate long-running, human-in-the-loop, and agentic AI workflows with full observability and auditability. She also discusses heuristics for choosing Temporal alongside (or instead of) traditional orchestrators, managing scale without moving large datasets, and lessons from running durable execution as a cloud service.
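The durable execution model described here can be illustrated with a toy checkpoint-and-replay loop: each step's result is persisted, so re-running the workflow after a crash skips completed steps instead of repeating their side effects. This is a sketch of the general idea only, not Temporal's actual SDK (which replays from an event history, not a result dictionary):

```python
# Stand-ins for durable infrastructure (would be a database or event log).
CHECKPOINTS = {}   # step name -> saved result
CALLS = []         # records which steps actually executed

def durable_step(name, fn, *args):
    """Run fn at most once; on replay, return the checkpointed result."""
    if name in CHECKPOINTS:
        return CHECKPOINTS[name]
    result = fn(*args)        # the side-effecting work happens only here
    CALLS.append(name)
    CHECKPOINTS[name] = result
    return result

def workflow(order_id):
    """A three-step workflow; hypothetical step logic for illustration."""
    total = durable_step("price", lambda o: len(o) * 10, order_id)
    paid = durable_step("charge", lambda amt: f"charged {amt}", total)
    return durable_step("notify", lambda p: f"done: {p}", paid)

workflow("ab12")   # first run executes all three steps
workflow("ab12")   # replay: returns the same result, no step re-executes
```

The point of the pattern is that retry, checkpointing, and "exactly-once per step" semantics live in the framework, not in hand-rolled application scaffolding.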

Announcements: Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data teams everywhere face the same problem: they're forcing ML models, streaming data, and real-time processing through orchestration tools built for simple ETL. The result? Inflexible infrastructure that can't adapt to different workloads. That's why Cash App and Cisco rely on Prefect. Cash App's fraud detection team got what they needed - flexible compute options, isolated environments for custom packages, and seamless data exchange between workflows. Each model runs on the right infrastructure, whether that's high-memory machines or distributed compute. Orchestration is the foundation that determines whether your data team ships or struggles. ETL, ML model training, AI Engineering, Streaming - Prefect runs it all from ingestion to activation in one platform. Whoop and 1Password also trust Prefect for their data operations. If these industry leaders use Prefect for critical workflows, see what it can do for you at dataengineeringpodcast.com/prefect.

Data migrations are brutal. They drag on for months — sometimes years — burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.

Composable data infrastructure is great, until you spend all of your time gluing it together. Bruin is an open source framework, driven from the command line, that makes integration a breeze. Write Python and SQL to handle the business logic, and let Bruin handle the heavy lifting of data movement, lineage tracking, data quality monitoring, and governance enforcement. Bruin allows you to build end-to-end data workflows using AI, has connectors for hundreds of platforms, and helps data teams deliver faster. Teams that use Bruin need less engineering effort to process data and benefit from a fully integrated data platform. Go to dataengineeringpodcast.com/bruin today to get started. And for dbt Cloud customers, they'll give you $1,000 credit to migrate to Bruin Cloud.

Your host is Tobias Macey and today I'm interviewing Preeti Somal about how to incorporate durable execution and state management into AI application architectures.

Interview:
- Introduction
- How did you get involved in the area of data management?
- Can you describe what durable execution is and how it impacts system architecture?
- With the strong focus on state maintenance and high reliability, what are some of the most impactful ways that data teams are incorporating tools like Temporal into their work?
- One of the core primitives in Temporal is a "workflow". How does that compare to similar primitives in common data orchestration systems such as Airflow, Dagster, Prefect, etc.?
- What are the heuristics that you recommend when deciding which tool to use for a given task, particularly in data/pipeline oriented projects?
- Even if a team is using a more data-focused orchestration engine, what are some of the ways that Temporal can be applied to handle the processing logic of the actual data?
- AI applications are also very dependent on reliable data to be effective in production contexts. What are some of the design patterns where durable execution can be integrated into RAG/agent applications?
- What are some of the conceptual hurdles that teams experience when they are starting to adopt Temporal or other durable execution frameworks?
- What are the most interesting, innovative, or unexpected ways that you have seen Temporal/durable execution used for data/AI services?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Temporal?
- When is Temporal/durable execution the wrong choice?
- What do you have planned for the future of Temporal for data and AI systems?

Contact Info:
- LinkedIn

Parting Question:
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements: Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links:
- Temporal
- Durable Execution
- Flink
- Machine Learning Epoch
- Spark Streaming
- Airflow
- Directed Acyclic Graph (DAG)
- Temporal Nexus
- TensorZero
- AI Engineering Podcast Episode

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Summary: In this episode of the Data Engineering Podcast Omri Lifshitz (CTO) and Ido Bronstein (CEO) of Upriver talk about the growing gap between AI's demand for high-quality data and organizations' current data practices. They discuss why AI accelerates both the supply and demand sides of data, highlighting that the bottleneck lies in the "middle layer" of curation, semantics, and serving. Omri and Ido outline a three-part framework for making data usable by LLMs and agents (collect, curate, serve) and share the challenges of scaling from POCs to production, including compounding error rates and reliability concerns. They also explore organizational shifts, patterns for managing context windows, pragmatic views on schema choices, and Upriver's approach to building autonomous data workflows using determinism and LLMs at the right boundaries. The conversation concludes with a look ahead to AI-first data platforms where engineers supervise business semantics while automation stitches technical details end-to-end.

Announcements: Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Your host is Tobias Macey and today I'm interviewing Omri Lifshitz and Ido Bronstein about the challenges of keeping up with the demand for data when supporting AI systems.

Interview:
- Introduction
- How did you get involved in the area of data management?
- We're here to talk about "The Growing Gap Between Data & AI". From your perspective, what is this gap, and why do you think it's widening so rapidly right now?
- How does this gap relate to the founding story of Upriver? What problems were you and your co-founders experiencing that led you to build this?
- The core premise of new AI tools, from RAG pipelines to LLM agents, is that they are only as good as the data they're given. How does this "garbage in, garbage out" problem change when the "in" is not a static file but a complex, high-velocity, and constantly changing data pipeline?
- Upriver is described as an "intelligent agent system" and an "autonomous data engineer." This is a fascinating "AI to solve for AI" approach. Can you describe this agent-based architecture and how it specifically works to bridge that data-AI gap?
- Your website mentions a "Data Context Layer" that turns "tribal knowledge" into a "machine-usable mode." This sounds critical for AI. How do you capture that context, and how does it make data "AI-ready" in a way that a traditional data catalog or quality tool doesn't?
- What are the most innovative or unexpected ways you've seen companies trying to make their data "AI-ready"? And where are the biggest points of failure you observe?
- What has been the most challenging or unexpected lesson you've learned while building an AI system (Upriver) that is designed to fix the data foundation for other AI systems?
- When is an autonomous, agent-based approach not the right solution for a team's data quality problems? What organizational or technical maturity is required to even start closing this data-AI gap?
- What do you have planned for the future of Upriver? And looking more broadly, how do you see this gap between data and AI evolving over the next few years?

Contact Info:
- Ido - LinkedIn
- Omri - LinkedIn

Parting Question:
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements: Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links:
- Upriver
- RAG == Retrieval Augmented Generation
- AI Engineering Podcast Episode
- AI Agent
- Context Window
- Model Finetuning

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

The promise of AI in enterprise settings is enormous, but so are the privacy and security challenges. How do you harness AI's capabilities while keeping sensitive data protected within your organization's boundaries? Private AI—using your own models, data, and infrastructure—offers a solution, but implementation isn't straightforward. What governance frameworks need to be in place? How do you evaluate non-deterministic AI systems? When should you build in-house versus leveraging cloud services? As data and software teams evolve in this new landscape, understanding the technical requirements and workflow changes is essential for organizations looking to maintain control over their AI destiny. Manasi Vartak is Chief AI Architect and VP of Product Management (AI Platform) at Cloudera. She is a product and AI leader with more than a decade of experience at the intersection of AI infrastructure, enterprise software, and go-to-market strategy. At Cloudera, she leads product and engineering teams building low-code and high-code generative AI platforms, driving the company’s enterprise AI strategy and enabling trusted AI adoption across global organizations. Before joining Cloudera through its acquisition of Verta, Manasi was the founder and CEO of Verta, where she transformed her MIT research into enterprise-ready ML infrastructure. She scaled the company to multi-million ARR, serving Fortune 500 clients in finance, insurance, and capital markets, and led the launch of enterprise MLOps and GenAI products used in mission-critical workloads. Manasi earned her PhD in Computer Science from MIT, where she pioneered model management systems such as ModelDB — foundational work that influenced the development of tools like MLflow. Earlier in her career, she held research and engineering roles at Twitter, Facebook, Google, and Microsoft. 
In the episode, Richie and Manasi explore AI's role in financial services, the challenges of AI adoption in enterprises, the importance of data governance, the evolving skills needed for AI development, the future of AI agents, and much more.

Links Mentioned in the Show:
- Cloudera
- Cloudera Evolve Conference
- Cloudera Agent Studio
- Connect with Manasi
- Course: Introduction to AI Agents
- Related Episode: RAG 2.0 and The New Era of RAG Agents with Douwe Kiela, CEO at Contextual AI & Adjunct Professor at Stanford University
- Rewatch RADAR AI

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.

In this talk, Hugo Bowne-Anderson, an independent data and AI consultant, educator, and host of the podcasts Vanishing Gradients and High Signal, shares his journey from academic research and curriculum design at DataCamp to advising teams at Netflix, Meta, and the US Air Force. Together, we explore how to build reliable, production-ready AI systems—from prompt evaluation and dataset design to embedding agents into everyday workflows.

You'll learn about:
- How to structure teams and incentives for successful AI adoption
- Practical prompting techniques for accurate timestamp and data generation
- Building and maintaining evaluation sets to avoid "prompt overfitting"
- Cost-effective methods for LLM evaluation and monitoring
- Tools and frameworks for debugging and observing AI behavior (Logfire, Braintrust, Arize Phoenix)
- The evolution of AI agents, from simple RAG systems to proactive, embedded assistants
- How to escape "proof of concept purgatory" and prioritize AI projects that drive business value
- Step-by-step guidance for building reliable, evaluable AI agents

This session is ideal for AI engineers, data scientists, ML product managers, and startup founders looking to move beyond experimentation into robust, scalable AI systems. Whether you're optimizing RAG pipelines, evaluating prompts, or embedding AI into products, this talk offers actionable frameworks to guide you from concept to production.
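The "evaluation sets" advice above can be made concrete with a tiny gold set and an exact-match scorer. The names here (GOLD_SET, extract_date) are hypothetical stand-ins; in a real pipeline the function under test would wrap an LLM call, and the gold set would grow from observed failure cases rather than being hand-picked to flatter one prompt:

```python
import re

# A small gold set: (input text, expected extraction). Invented examples.
GOLD_SET = [
    ("Meeting moved to 2024-03-15", "2024-03-15"),
    ("Deadline is 2023-11-01, don't miss it", "2023-11-01"),
    ("No date mentioned here", None),
]

def extract_date(text):
    """Stand-in for the system under test (would be a prompted LLM call)."""
    m = re.search(r"\d{4}-\d{2}-\d{2}", text)
    return m.group(0) if m else None

def evaluate(fn, gold):
    """Fraction of gold examples where the output matches exactly."""
    hits = sum(fn(x) == expected for x, expected in gold)
    return hits / len(gold)

print(f"accuracy: {evaluate(extract_date, GOLD_SET):.2f}")
```

Keeping the gold set fixed while the prompt changes is what lets you detect regressions; tuning the prompt against the same examples you score on is exactly the "prompt overfitting" the talk warns about.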

LINKS:
- Escaping POC Purgatory: Evaluation-Driven Development for AI Systems - https://www.oreilly.com/radar/escaping-poc-purgatory-evaluation-driven-development-for-ai-systems/
- Stop Building AI Agents - https://www.decodingai.com/p/stop-building-ai-agents
- How to Evaluate LLM Apps Before You Launch - https://www.youtube.com/watch?si=90fXJJQThSwGCaYv&v=TTr7zPLoTJI&feature=youtu.be
- My Vanishing Gradients Substack - https://hugobowne.substack.com/
- Building LLM Applications for Data Scientists and Software Engineers - https://maven.com/hugo-stefan/building-ai-apps-ds-and-swe-from-first-principles?promoCode=datatalksclub

TIMECODES:
00:00 Introduction and Expertise
04:04 Transition to Freelance Consulting and Advising
08:49 Restructuring Teams and Incentivizing AI Adoption
12:22 Improving Prompting for Timestamp Generation
17:38 Evaluation Sets and Failure Analysis for Reliable Software
23:00 Evaluating Prompts: The Cost and Size of Gold Test Sets
27:38 Software Tools for Evaluation and Monitoring
33:14 Evolution of AI Tools: Proactivity and Embedded Agents
40:12 The Future of AI is Not Just Chat
44:38 Avoiding Proof of Concept Purgatory: Prioritizing RAG for Business Value
50:19 RAG vs. Agents: Complexity and Power Trade-Offs
56:21 Recommended Steps for Building Agents
59:57 Defining Memory in Multi-Turn Conversations

Connect with Hugo:
- Twitter - https://x.com/hugobowne
- LinkedIn - https://www.linkedin.com/in/hugo-bowne-anderson-045939a5/
- GitHub - https://github.com/hugobowne
- Website - https://hugobowne.github.io/

Connect with DataTalks.Club:
- Join the community - https://datatalks.club/slack.html
- Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/r?cid=ZjhxaWRqbnEwamhzY3A4ODA5azFlZ2hzNjBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ
- Check other upcoming events - https://lu.ma/dtc-events
- GitHub - https://github.com/DataTalksClub
- LinkedIn - https://www.linkedin.com/company/datatalks-club/
- Twitter - https://twitter.com/DataTalksClub
- Website - https://datatalks.club/

We're joined by Douwe Kiela, CEO of Contextual.ai and a pioneer in RAG research. From deploying AI agents at Fortune 500 companies to shedding light on data privacy and security, Douwe shares his expertise and insights on how to make data simple, effective, and secure.

00:46 Introducing Douwe Kiela
01:37 RAG - Here to Stay or Go?
06:59 LLMs with Context
08:20 Making AI Successful
10:34 Why Contextual AI?
17:18 LLM versus SLMs
20:28 Speed over Perfection
22:07 Hallucinations
26:02 Making AI Easy to Consume
28:50 Defining an Agent
32:53 Reaching Contextual AI
33:14 The Contrarian View
34:37 The Risks of AI
36:53 For Fun

LinkedIn: linkedin.com/in/douwekiela
Website: https://contextual.ai/

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

In this episode, we talked with Ranjitha Kulkarni, a machine learning engineer with a rich career spanning Microsoft, Dropbox, and now NeuBird AI. Ranjitha shares her journey into ML and NLP, her work building recommendation systems, early AI agents, and cutting-edge LLM-powered products. She offers insights into designing reliable AI systems in the new era of generative AI and agents, and how context engineering and dynamic planning shape the future of AI products.

TIMECODES
00:00 Career journey and early curiosity
04:25 Speech recognition at Microsoft
05:52 Recommendation systems and early agents at Dropbox
07:44 Joining NeuBird AI
12:01 Defining agents and LLM orchestration
16:11 Agent planning strategies
18:23 Agent implementation approaches
22:50 Context engineering essentials
30:27 RAG evolution in agent systems
37:39 RAG vs agent use cases
40:30 Dynamic planning in AI assistants
43:00 AI productivity tools at Dropbox
46:00 Evaluating AI agents
53:20 Reliable tool usage challenges
58:17 Future of agents in engineering

Connect with Ranjitha:
- LinkedIn - https://www.linkedin.com/in/ranjitha-gurunath-kulkarni

Connect with DataTalks.Club:
- Join the community - https://datatalks.club/slack.html
- Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/r?cid=ZjhxaWRqbnEwamhzY3A4ODA5azFlZ2hzNjBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ
- Check other upcoming events - https://lu.ma/dtc-events
- GitHub - https://github.com/DataTalksClub
- LinkedIn - https://www.linkedin.com/company/datatalks-club/
- Twitter - https://twitter.com/DataTalksClub
- Website - https://datatalks.club/

Are dashboards dead? For complex enterprise use cases, the answer might be yes. In this episode, I'm joined by Irina Malkova (VP Data & AI at Salesforce), to discuss her team's transformational journey from building complex dashboards to deploying AI-powered conversational agents. We dive deep into how this shift is not just a change in tooling, but a fundamental change in how users access insights and how data teams measure their impact.

Join us as we cover:
- The Shift from Dashboards to Agents: We discuss why dashboards can create a high cognitive load and fail users in complex scenarios, and how conversational agents in the flow of work (like Slack) provide targeted, actionable insights and boost adoption.
- What is Product Telemetry?: Irina explains how telemetry is evolving from a simple engineering observability use case to a critical data source for AI, machine learning, and recommendation systems.
- Why Standard RAG Fails in the Enterprise: Irina shares why typical RAG approaches break down on dense, entity-rich corporate data (like Salesforce's help docs) where semantic similarity isn't enough, leading to the rise of Graph RAG.
- The New, Measurable ROI of Data: How moving from BI to agents allows data teams to precisely measure impact, track downstream actions, and finally have a concrete answer to the ROI question that was previously impossible to justify.
- Data Teams as Enterprise Leaders: Why data teams are uniquely positioned to lead AI transformation, as they hold the enterprise "ontology" and have experience building products under uncertainty.
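The Graph RAG point can be sketched in miniature: when a corpus is dense with near-identical entities, following explicit edges retrieves linked context that similarity scoring alone would miss or confuse. The entities, edges, and doc names below are invented for illustration, not Salesforce's actual ontology:

```python
# Toy knowledge graph: entity -> {relation edges, associated doc}.
KG = {
    "Flow Builder": {"part_of": "Salesforce Platform", "docs": "flow-builder.html"},
    "Apex": {"part_of": "Salesforce Platform", "docs": "apex.html"},
    "Salesforce Platform": {"docs": "platform.html"},
}

def graph_retrieve(entity, hops=1):
    """Collect docs for an entity plus everything reachable within `hops` edges.

    Similarity-only retrieval would score 'Flow Builder' and 'Apex' docs as
    near-identical (same product jargon); the graph hop instead pulls the
    specific entity's doc and its structurally related parent.
    """
    seen, frontier = set(), {entity}
    for _ in range(hops + 1):
        next_frontier = set()
        for e in frontier:
            if e in KG and e not in seen:
                seen.add(e)
                parent = KG[e].get("part_of")
                if parent:
                    next_frontier.add(parent)
        frontier = next_frontier
    return [KG[e]["docs"] for e in seen]

print(graph_retrieve("Flow Builder"))
```

The retrieval is deterministic and entity-anchored: a query resolved to "Flow Builder" can never return the Apex doc by accident, however similar the prose.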

This week, we're rewinding one of our most talked-about episodes! Richmond Alake, Developer Advocate at MongoDB, joins us to explore how databases power the future of AI. From RAG best practices to the truth behind AGI hype, Richmond breaks down what it takes to build systems that scale — and think.

Show Notes
02:05 Meet Rich Alake
03:57 A Developer Advocate at MongoDB
05:57 Passions and Fate!
08:52 AI Hype
13:14 Oh No… AGI Again
17:30 What Makes an AI Database?
20:42 Use Cases
25:41 RAG Best Practices
27:40 The Role of Databases
30:05 Why MongoDB Does It Better
32:43 What's Next
36:13 Advice on Continuous Learning
38:44 Where to Find Rich

Connect with Richmond:
🔗 LinkedIn
🌐 MongoDB Website
🧠 Register for MongoDB
🤖 AI Agents Article
💾 Best Repo for AI Developers

In this episode, I sit down with Saket Saurabh (CEO of Nexla) to discuss the fundamental shift happening in the AI landscape. The conversation is moving beyond the race to build the biggest foundational models and towards a new battleground: context. We explore what it means to be a "model company" versus a "context company" and how this changes everything for data strategy and enterprise AI.

Join us as we cover:
- Model vs. Context Companies: The emerging divide between companies building models (like OpenAI) and those whose advantage lies in their unique data and integrations.
- The Limits of Current Models: Why we might be hitting an asymptote with the current transformer architecture for solving complex, reliable business processes.
- "Context Engineering": What this term really means, from RAG to stitching together tools, data, and memory to feed AI systems.
- The Resurgence of Knowledge Graphs: Why graph databases are becoming critical for providing deterministic, reliable information to probabilistic AI models, moving beyond simple vector similarity.
- AI's Impact on Tooling: How tools like Lovable and Cursor are changing workflows for prototyping and coding, and the risk of creating the "-10x engineer."
- The Future of Data Engineering: How the field is expanding as AI becomes the primary consumer of data, requiring a new focus on architecture, semantics, and managing complexity at scale.
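The "context engineering" idea of stitching together tools, data, and memory often reduces, mechanically, to assembling ranked snippets under a token budget. A rough sketch, where the priority scores, snippet texts, and the 4-characters-per-token estimate are all assumptions for illustration:

```python
def estimate_tokens(text):
    """Crude heuristic: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def assemble_context(sources, budget_tokens):
    """Greedily take highest-priority snippets until the budget is spent."""
    picked, used = [], 0
    for priority, snippet in sorted(sources, reverse=True):
        cost = estimate_tokens(snippet)
        if used + cost <= budget_tokens:
            picked.append(snippet)
            used += cost
    return "\n---\n".join(picked)

# Hypothetical ranked sources: tool output, memory, and low-value filler.
sources = [
    (3, "User profile: enterprise plan, EMEA region."),
    (2, "Recent ticket: login failures after SSO change."),
    (1, "Marketing copy about the product (low value)."),
]
ctx = assemble_context(sources, budget_tokens=25)
```

With a 25-token budget, the high-priority profile and ticket snippets fit and the filler is dropped; the engineering work in practice is in producing good priority scores, not in the assembly loop itself.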

Summary: In this episode of the AI Engineering Podcast Marc Brooker, VP and Distinguished Engineer at AWS, talks about how agentic workflows are transforming database usage and infrastructure design. He discusses the evolving role of data in AI systems, from traditional models to more modern approaches like vectors, RAG, and relational databases. Marc explains why agents require serverless, elastic, and operationally simple databases, and how AWS solutions like Aurora and DSQL address these needs with features such as rapid provisioning, automated patching, geodistribution, and spiky usage. The conversation covers topics including tool calling, improved model capabilities, state in agents versus stateless LLM calls, and the role of Lambda and AgentCore for long-running, session-isolated agents. Marc also touches on the shift from local MCP tools to secure, remote endpoints, the rise of object storage as a durable backplane, and the need for better identity and authorization models. The episode highlights real-world patterns like agent-driven SQL fuzzing and plan analysis, while identifying gaps in simplifying data access, hardening ops for autonomous systems, and evolving serverless database ergonomics to keep pace with agentic development.

Announcements Hello and welcome to the Data Engineering Podcast, the show about modern data managementData teams everywhere face the same problem: they're forcing ML models, streaming data, and real-time processing through orchestration tools built for simple ETL. The result? Inflexible infrastructure that can't adapt to different workloads. That's why Cash App and Cisco rely on Prefect. Cash App's fraud detection team got what they needed - flexible compute options, isolated environments for custom packages, and seamless data exchange between workflows. Each model runs on the right infrastructure, whether that's high-memory machines or distributed compute. Orchestration is the foundation that determines whether your data team ships or struggles. ETL, ML model training, AI Engineering, Streaming - Prefect runs it all from ingestion to activation in one platform. Whoop and 1Password also trust Prefect for their data operations. If these industry leaders use Prefect for critical workflows, see what it can do for you at dataengineeringpodcast.com/prefect.Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? 
Visit dataengineeringpodcast.com/datafold today for the details.Your host is Tobias Macey and today I'm interviewing Marc Brooker about the impact of agentic workflows on database usage patterns and how they change the architectural requirements for databasesInterview IntroductionHow did you get involved in the area of data management?Can you describe what the role of the database is in agentic workflows?There are numerous types of databases, with relational being the most prevalent. How does the type and purpose of an agent inform the type of database that should be used?Anecdotally I have heard about how agentic workloads have become the predominant "customers" of services like Neon and Fly.io. How would you characterize the different patterns of scale for agentic AI applications? (e.g. proliferation of agents, monolithic agents, multi-agent, etc.)What are some of the most significant impacts on workload and access patterns for data storage and retrieval that agents introduce?What are the categorical differences in that behavior as compared to programmatic/automated systems?You have spent a substantial amount of time on Lambda at AWS. Given that LLMs are effectively stateless, how does the added ephemerality of serverless functions impact design and performance considerations around having to "re-hydrate" context when interacting with agents?What are the most interesting, innovative, or unexpected ways that you have seen serverless and database systems used for agentic workloads?What are the most interesting, unexpected, or challenging lessons that you have learned while working on technologies that are supporting agentic applications?Contact Info BlogLinkedInParting Question From your perspective, what is the biggest gap in the tooling or technology for data management today?Closing Announcements Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. 
The AI Engineering Podcast is your guide to the fast-moving world of building AI systems. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links
  • AWS Aurora DSQL
  • AWS Lambda
  • Three Tier Architecture
  • Vector Database
  • Graph Database
  • Relational Database
  • Vector Embedding
  • RAG == Retrieval Augmented Generation
  • AI Engineering Podcast Episode
  • GraphRAG
  • AI Engineering Podcast Episode
  • LLM Tool Calling
  • MCP == Model Context Protocol
  • A2A == Agent 2 Agent Protocol
  • AWS Bedrock AgentCore
  • Strands
  • LangChain
  • Kiro

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Send us a text What if AI could tap into live operational data — without ETL or RAG? In this episode, Deepti Srivastava, founder of Snow Leopard, reveals how her company is transforming enterprise data access with intelligent data retrieval, semantic intelligence, and a governance-first approach. Tune in for a fresh perspective on the future of AI and the startup journey behind it.

We explore how companies are revolutionizing their data access and AI strategies. Deepti Srivastava, founder of Snow Leopard, shares her insights on bridging the gap between live operational data and generative AI, and how it's changing the game for enterprises worldwide. We dive into Snow Leopard's innovative approach to data retrieval, semantic intelligence, and governance-first architecture.

04:54 Meeting Deepti Srivastava
14:06 AI with No ETL, No RAG
17:11 Snow Leopard's Intelligent Data Fetching
19:00 Live Query Challenges
21:01 Snow Leopard's Secret Sauce
22:14 Latency
23:48 Schema Changes
25:02 Use Cases
26:06 Snow Leopard's Roadmap
29:16 Getting Started
33:30 The Startup Journey
34:12 A Woman in Technology
36:03 The Contrarian View

🔗 LinkedIn: https://www.linkedin.com/in/thedeepti/
🔗 Website: https://www.snowleopard.ai/

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

At Berlin Buzzwords, industry voices highlighted how search is evolving with AI and LLMs.

  • Kacper Łukawski (Qdrant) stressed hybrid search (semantic + keyword) as core for RAG systems and promoted efficient embedding models for smaller-scale use.
  • Manish Gill (ClickHouse) discussed auto-scaling OLAP databases on Kubernetes, combining infrastructure and database knowledge.
  • André Charton (Kleinanzeigen) reflected on scaling search for millions of classifieds, moving from Solr/Elasticsearch toward vector search, while returning to a hands-on technical role.
  • Filip Makraduli (Superlinked) introduced a vector-first framework that fuses multiple encoders into one representation for nuanced e-commerce and recommendation search.
  • Brian Goldin (Voyager Search) emphasized spatial context in retrieval, combining geospatial data with AI enrichment to add the “where” to search.
  • Atita Arora (Voyager Search) highlighted geospatial AI models, the renewed importance of retrieval in RAG, and the cautious but promising rise of AI agents.
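Hybrid search of the kind Łukawski advocates is commonly implemented by running keyword (e.g. BM25) and semantic (vector) retrieval in parallel and fusing the two ranked lists. Reciprocal Rank Fusion is one widely used fusion method; a minimal sketch, where the document IDs and hit lists are purely illustrative:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of doc IDs into one hybrid ranking.

    Each ranking is a list of doc IDs ordered best-first. The constant k
    dampens the dominance of top positions (60 is the value commonly cited
    from the original RRF paper).
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative hit lists: one from BM25 keyword search, one from vector search.
bm25_hits = ["doc_a", "doc_b", "doc_c"]
vector_hits = ["doc_b", "doc_c", "doc_d"]
fused = reciprocal_rank_fusion([bm25_hits, vector_hits])
# doc_b ranks first: it appears near the top of both lists.
```

Because RRF only uses rank positions, it sidesteps the problem of BM25 scores and cosine similarities living on incompatible scales.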

Together, their perspectives show a common thread: search is regaining center stage in AI—scaling, hybridization, multimodality, and domain-specific enrichment are shaping the next generation of retrieval systems.

Kacper Łukawski
Senior Developer Advocate at Qdrant, he educates users on vector and hybrid search. He highlighted Qdrant's support for dense and sparse vectors, the role of search with LLMs, and his interest in cost-effective models like static embeddings for smaller companies and edge apps.
Connect: https://www.linkedin.com/in/kacperlukawski/

Manish Gill
Engineering Manager at ClickHouse, he spoke about running ClickHouse on Kubernetes, tackling auto-scaling and stateful sets. His team focuses on making ClickHouse scale automatically in the cloud. He credited its speed to careful engineering and reflected on the shift from IC to manager.
Connect: https://www.linkedin.com/in/manishgill/

André Charton
Head of Search at Kleinanzeigen, he discussed shaping the company’s search tech—moving from Solr to Elasticsearch and now vector search with Vespa. Kleinanzeigen handles 60M items, 1M new listings daily, and 50k requests/sec. André explained his career shift back to hands-on engineering.
Connect: https://www.linkedin.com/in/andrecharton/

Filip Makraduli
Founding ML DevRel engineer at Superlinked, an open-source framework for AI search and recommendations. Its vector-first approach fuses multiple encoders (text, images, structured fields) into composite vectors for single-shot retrieval. His Berlin Buzzwords demo showed e-commerce search with natural-language queries and filters.
Connect: https://www.linkedin.com/in/filipmakraduli/
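The composite-vector idea Makraduli describes can be illustrated in a simplified, hypothetical form (this is not the actual Superlinked API): normalize each encoder's output, scale it by a weight reflecting its importance, and concatenate the pieces into one searchable vector.

```python
import math

def l2_normalize(vec):
    """Scale a vector to unit length so encoders contribute comparably."""
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def composite_vector(parts, weights):
    """Fuse per-encoder vectors (text, image, structured fields, ...)
    into one vector: normalize each part, weight it, concatenate."""
    fused = []
    for vec, w in zip(parts, weights):
        fused.extend(w * x for x in l2_normalize(vec))
    return fused

# Hypothetical encoder outputs for one e-commerce listing:
# a text embedding and a one-dimensional price "embedding".
text_emb = [0.2, 0.4, 0.4]
price_emb = [1.0]
v = composite_vector([text_emb, price_emb], weights=[0.8, 0.2])
```

A single nearest-neighbor query over such composite vectors can then trade off textual relevance against structured attributes in one shot, instead of retrieving per encoder and merging afterwards.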

Brian Goldin
Founder and CEO of Voyager Search, which began with geospatial search and expanded into documents and metadata enrichment. Voyager indexes spatial data and enriches pipelines with NLP, OCR, and AI models to detect entities like oil spills or windmills. He stressed adding spatial context (“the where”) as critical for search and highlighted Voyager’s 12 years of enterprise experience.
Connect: https://www.linkedin.com/in/brian-goldin-04170a1/

Atita Arora
Director of AI at Voyager Search, with nearly 20 years in retrieval systems, now focused on geospatial AI for Earth observation data. At Berlin Buzzwords she hosted sessions, attended talks on Lucene, GPUs, and Solr, and emphasized retrieval quality in RAG systems. She is cautiously optimistic about AI agents and values the event as both learning hub and professional reunion.
Connect: https://www.linkedin.com/in/atitaarora/

Summary In this episode of the Data Engineering Podcast, Tobias Macey interviews Kacper Łukawski from Qdrant about integrating MCP servers with vector databases to process unstructured data. Kacper shares his experience in data engineering, from building big data pipelines in the automotive industry to leveraging large language models (LLMs) for transforming unstructured datasets into valuable assets. He discusses the challenges of building data pipelines for unstructured data and how vector databases facilitate semantic search and retrieval-augmented generation (RAG) applications. Kacper delves into the intricacies of vector storage and search, including metadata and contextual elements, and explores the evolution of vector engines beyond RAG to applications like semantic search and anomaly detection. The conversation covers the role of Model Context Protocol (MCP) servers in simplifying data integration and retrieval processes, highlights the need for experimentation and evaluation when adopting LLMs, and offers practical advice on optimizing vector search costs and fine-tuning embedding models for improved search quality.

Announcements Hello and welcome to the Data Engineering Podcast, the show about modern data management. Data migrations are brutal. They drag on for months, sometimes years, burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details. Your host is Tobias Macey and today I'm interviewing Kacper Łukawski about how MCP servers can be paired with vector databases to streamline processing of unstructured data.

Interview
  • Introduction
  • How did you get involved in the area of data management?
  • LLMs are enabling the derivation of useful data assets from unstructured sources. What are the challenges that teams face in building the pipelines to support that work?
  • How has the role of vector engines grown or evolved in the past ~2 years as LLMs have gained broader adoption?
  • Beyond its role as a store of context for agents, RAG, etc., what other applications are common for vector databases?
  • In the ecosystem of vector engines, what are the distinctive elements of Qdrant?
  • How has the MCP specification simplified the work of processing unstructured data?
  • Can you describe the toolchain and workflow involved in building a data pipeline that leverages an MCP server for generating embeddings?
  • Helping data engineers gain confidence in non-deterministic workflows
  • Bringing application/ML/data teams into collaboration for determining the impact of e.g. chunking strategies, embedding model selection, etc.
  • What are the most interesting, innovative, or unexpected ways that you have seen MCP and Qdrant used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on vector use cases?
  • When is MCP and/or Qdrant the wrong choice?
  • What do you have planned for the future of MCP with Qdrant?

Contact Info
  • LinkedIn
  • Twitter/X
  • Personal website

Parting Question From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links
  • Qdrant
  • Kafka
  • Apache Oozie
  • Named Entity Recognition
  • GraphRAG
  • pgvector
  • Elasticsearch
  • Apache Lucene
  • OpenSearch
  • BM25
  • Semantic Search
  • MCP == Model Context Protocol
  • Anthropic Contextualized Chunking
  • Cohere

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
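The chunking strategies mentioned in this interview's outline come down to trade-offs like chunk size and overlap. A minimal fixed-size chunker with overlap; the sizes here are illustrative, and production pipelines typically chunk by tokens rather than characters:

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping character chunks for embedding.

    Overlap keeps context that straddles a chunk boundary retrievable
    from at least one chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# A 500-character document yields chunks starting at 0, 150, 300, 450.
chunks = chunk_text("a" * 500, chunk_size=200, overlap=50)
```

Larger chunks preserve more context per embedding but blur retrieval precision; smaller chunks retrieve precisely but may strip the context an LLM needs, which is exactly why chunk size is a cross-team decision rather than a pure engineering one.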

The enterprise adoption of AI agents is accelerating, but significant challenges remain in making them truly reliable and effective. While coding assistants and customer service agents are already delivering value, more complex document-based workflows require sophisticated architectures and data processing capabilities. How do you design agent systems that can handle the complexity of enterprise documents, with their tables, charts, and unstructured information? What's the right balance between general reasoning capabilities and constrained architectures for specific business tasks? Should you centralize your agent infrastructure or purchase vertical solutions for each department? The answers lie in understanding the fundamental trade-offs between flexibility, reliability, and the specific needs of your organization. Jerry Liu is the CEO and Co-founder at LlamaIndex, the AI agents platform for automating document workflows. Previously, he led the ML monitoring team at Robust Intelligence, did self-driving AI research at Uber ATG, and worked on recommendation systems at Quora. In the episode, Richie and Jerry explore the readiness of AI agents for enterprise use, the challenges developers face in building these agents, the importance of document processing and data structuring, the evolving landscape of AI agent frameworks like LlamaIndex, and much more.

Links Mentioned in the Show:
  • LlamaIndex
  • LlamaIndex Production Ready Framework For LLM Agents
  • Tutorial: Model Context Protocol (MCP)
  • Connect with Jerry
  • Course: Retrieval Augmented Generation (RAG) with LangChain
  • Related Episode: RAG 2.0 and The New Era of RAG Agents with Douwe Kiela, CEO at Contextual AI & Adjunct Professor at Stanford University
  • Rewatch RADAR AI

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.

The line between generic AI capabilities and truly transformative business applications often comes down to one thing: your data. While foundation models provide impressive general intelligence, they lack the specialized knowledge needed for domain-specific tasks that drive real business value. But how do you effectively bridge this gap? What's the difference between simply fine-tuning models versus using techniques like retrieval-augmented generation? And with constantly evolving models and technologies, how do you build systems that remain adaptable while still delivering consistent results? Whether you're in retail, healthcare, or transportation, understanding how to properly enrich, annotate, and leverage your proprietary data could be the difference between an AI project that fails and one that fundamentally transforms your business. Wendy Gonzalez is the CEO — and former COO — of Sama, a company leading the way in ethical AI by delivering accurate, human-annotated data while advancing economic opportunity in underserved communities. She joined Sama in 2015 and has been central to scaling both its global operations and its mission-driven business model, which has helped over 65,000 people lift themselves out of poverty through dignified digital work. With over 20 years of experience in the tech and data space, Wendy’s held leadership roles at EY, Capgemini, and Cycle30, where she built and managed high-performing teams across complex, global environments. Her leadership style blends operational excellence with deep purpose — ensuring that innovation doesn’t come at the expense of integrity. Wendy is also a vocal advocate for inclusive AI and sustainable impact, regularly speaking on how companies can balance cutting-edge technology with real-world responsibility. Duncan Curtis is the Senior Vice President of Generative AI at Sama, where he leads the development of AI-powered tools that are shaping the future of data annotation. 
With a background in product leadership and machine learning, Duncan has spent his career building scalable systems that bridge cutting-edge technology with real-world impact. Before joining Sama, he led teams at companies like Google, where he worked on large-scale personalization systems, and contributed to AI product strategy across multiple sectors. At Sama, he's focused on harnessing the power of generative AI to improve quality, speed, and efficiency, all while keeping human oversight and ethical practices at the core. Duncan brings a unique perspective to the AI space: one that's grounded in technical expertise, but always oriented toward practical solutions and responsible innovation. In the episode, Richie, Wendy, and Duncan explore the importance of using specialized data with large language models, the role of data enrichment in improving AI accuracy, the balance between automation and human oversight, the significance of responsible AI practices, and much more.

Links Mentioned in the Show:
  • Sama
  • Connect with Wendy
  • Connect with Duncan
  • Course: Generative AI Concepts
  • Related Episode: Creating High Quality AI Applications with Theresa Parker & Sudhi Balan, Rocket Software
  • Register for RADAR AI

New to DataCamp? Learn on the go...

Retrieval Augmented Generation (RAG) continues to be a foundational approach in AI despite claims of its demise. While some marketing narratives suggest RAG is being replaced by fine-tuning or long context windows, these technologies are actually complementary rather than competitive. But how do you build a truly effective RAG system that delivers accurate results in high-stakes environments? What separates a basic RAG implementation from an enterprise-grade solution that can handle complex queries across disparate data sources? And with the rise of AI agents, how will RAG evolve to support more dynamic reasoning capabilities? Douwe Kiela is the CEO and co-founder of Contextual AI, a company at the forefront of next-generation language model development. He also serves as an Adjunct Professor in Symbolic Systems at Stanford University, where he contributes to advancing the theoretical and practical understanding of AI systems. Before founding Contextual AI, Douwe was the Head of Research at Hugging Face, where he led groundbreaking efforts in natural language processing and machine learning. Prior to that, he was a Research Scientist and Research Lead at Meta’s FAIR (Fundamental AI Research) team, where he played a pivotal role in developing Retrieval-Augmented Generation (RAG)—a paradigm-shifting innovation in AI that combines retrieval systems with generative models for more grounded and contextually aware responses. In the episode, Richie and Douwe explore the misconceptions around the death of Retrieval Augmented Generation (RAG), the evolution to RAG 2.0, its applications in high-stakes industries, the importance of metadata and entitlements in data governance, the potential of agentic systems in enterprise settings, and much more. 
Links Mentioned in the Show:
  • Contextual AI
  • Connect with Douwe
  • Course: Retrieval Augmented Generation (RAG) with LangChain
  • Related Episode: High Performance Generative AI Applications with Ram Sriharsha, CTO at Pinecone
  • Register for RADAR AI - June 26

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.

🎙️ Future of Data and AI Podcast: Episode 06 with Robin Sutara What do Apache, Excel, Microsoft, and Databricks have in common? Robin Sutara! From being a technician for Apache helicopters to leading global data strategy at Microsoft and now Databricks, Robin Sutara’s journey is anything but ordinary. In this episode, she shares how enterprises are adopting AI in practical, secure, and responsible ways—without getting lost in the hype. We dive into how Databricks is evolving beyond the Lakehouse to power the next wave of enterprise AI—supporting custom models, Retrieval-Augmented Generation (RAG), and compound AI systems that balance innovation with governance, transparency, and risk management. Robin also breaks down the real challenges to AI adoption—not technical, but cultural. She explains why companies must invest in change management, empower non-technical teams, and embrace diverse perspectives to make AI truly work at scale. Her take on job evolution, bias in AI, and the human side of automation is both refreshing and deeply relevant. A sharp, insightful conversation for anyone building or scaling AI inside the enterprise—especially in regulated industries where trust and explainability matter as much as innovation.

Send us a text Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society. Dive into conversations that should flow as smoothly as your morning coffee (but don't), where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data, unplugged style! In this episode, host Murilo is joined by returning guest Paolo, Data Management Team Lead at dataroots, for a deep dive into the often-overlooked but rapidly evolving domain of unstructured data quality. Tune in for a field guide to navigating documents, images, and embeddings without losing your sanity.

What we unpack:
  • Data management basics: Metadata, ownership, and why Excel isn't everything.
  • Structured vs unstructured data: How the wild west of PDFs, images, and audio is redefining quality.
  • Data quality challenges for LLMs: From apples and pears to rogue chatbots with "legally binding" hallucinations.
  • Practical checks for document hygiene: Versioning, ownership, embedding similarity, and tagging strategies.
  • Retrieval-Augmented Generation (RAG): When ChatGPT meets your HR policies and things get weird.
  • Monitoring and governance: Building systems that flag rot before your chatbot gives out 2017 vacation rules.
  • Tooling and gaps: Where open source is doing well, and where we're still duct-taping workflows.
  • Real-world inspirations: A look at how QuantumBlack (McKinsey) is tackling similar issues with their AI for DQ framework.
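One of the "practical checks for document hygiene" the episode covers, embedding similarity, can be sketched as a pairwise cosine-similarity scan that flags near-duplicate documents before they pollute a RAG index. The threshold and toy vectors below are illustrative:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def find_near_duplicates(embeddings, threshold=0.95):
    """Return index pairs of documents whose embeddings are suspiciously
    similar, e.g. stale copies of the same policy document."""
    pairs = []
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if cosine_similarity(embeddings[i], embeddings[j]) >= threshold:
                pairs.append((i, j))
    return pairs

# Toy embeddings: docs 0 and 1 are nearly identical, doc 2 is unrelated.
docs = [[1.0, 0.0, 0.1], [0.99, 0.01, 0.1], [0.0, 1.0, 0.0]]
dupes = find_near_duplicates(docs)
```

In practice the O(n²) scan would be replaced by an approximate nearest-neighbor query against the vector store itself, but the flagged pairs feed the same governance step: a human decides which version is canonical.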