Everyone's trying to make LLMs "accurate." But the real challenge isn't accuracy — it's context. We'll explore why traditional approaches like eval suites or synthetic question sets fall short, and how successful AI systems are built instead through compounding context over time. Hex enables a new workflow for conversational analytics that grows smarter with every interaction. With Hex's Notebook Agent and Threads, business users define the questions that matter while data teams refine, audit, and operationalize them into durable, trusted workflows. In this model, "tests" aren't written in isolation by data teams — they're defined by the business and operationalized through data workflows. The result is a living system of context — not a static set of prompts or tests — that evolves alongside your organization. Join us for a candid discussion on what's working in production AI systems, and get hands-on building context-aware analytical workflows in Hex!
See theory turn into action in a live demo, where we take an AI agent from concept to production. Understand how we have tackled real-world deployment challenges like observability, scalability, and everything in between.
Overview of the key stages involved in designing experiments and analysing and visualising biological data. Hands-on examples from plant science, including essential tools, programming languages & libraries, and statistical methods used. Brief discussion of the emerging integration of machine learning into the biological research pipeline.
Discussion on AI adoption in businesses; the transition from prompts to production is hindered by organizational barriers and a lack of skills and vision. Emphasis on upskilling the workforce and on automation, moving non-technical users from basic prompt engineering toward integrated, iterative solutions.
Most generative AI projects look impressive in a demo but fail in the real world. This session moves beyond the hype to offer a practical, engineering-focused playbook on the architectural patterns and hard-won lessons required to take your LLM application from a cool prototype to a scalable product serving thousands of users. We'll uncover the unglamorous but essential truths about observability, routing, and a production-first mindset.
Modern AI systems are deployed globally, across cultures and in hundreds of languages, yet most safety research and evaluation remains English-centric. In this talk, we will outline a pragmatic roadmap for scaling safety beyond a single linguistic or cultural frame. We will first outline AI safety as a full-stack technical discipline spanning robustness, alignment, privacy, misuse resistance, and critically, evaluation. We will then argue that harm is not universal: what counts as harmful varies with local norms and histories. Drawing on evidence from multilingual red-teaming and jailbreak studies, we will show higher failure rates in low-resource languages and the limits of translate-and-test approaches. We will introduce a global-vs-local harm lens, address data scarcity and long-tail challenges, and present actionable mitigations. Finally, we will examine fairness in model evaluation and close with concrete recommendations for building culturally aware benchmarks and auditing multilingual safety so models are not only capable, but reliably aligned with the communities they serve.
What if your job hunt could run like a data system? In this episode, I share the story of how I used three AI agents — Researcher, Writer, and Reviewer — to rebuild my job search from the ground up. These agents read job descriptions, tailor resumes, and even critique tone and clarity — saving hours every week. But this episode isn’t just about automation. It’s about agency. I’ll talk about rejection, burnout, and the mindset shift that changed everything: treating every rejection as a data point, not a defeat. Whether you’re in tech, analytics, or just tired of the job search grind — this one’s for you.
🔹 Learn how I automated resume tailoring with GPT-4
🔹 Understand how to design AI systems that protect your mental energy
🔹 Discover why “efficiency” means doing less of what drains you
🔹 Hear the emotional story behind building these agents from scratch
Join the Discussion (comments hub): https://mukundansankar.substack.com/notes
Tools I use for my Podcast and Affiliate Partners
Recording Partner: Riverside → Sign up here (affiliate)
Host Your Podcast: RSS.com (affiliate)
Research Tools: Sider.ai (affiliate)
Sourcetable AI: Join Here (affiliate)
🔗 Connect with Me:
Free Email Newsletter
Website: Data & AI with Mukundan
GitHub: https://github.com/mukund14
Twitter/X: @sankarmukund475
LinkedIn: Mukundan Sankar
YouTube: Subscribe
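The Researcher → Writer → Reviewer hand-off described above can be pictured as a small pipeline where each stage's output feeds the next. The sketch below is purely illustrative (not the author's actual code): the LLM calls are replaced with stub logic, and every function and field name here is a hypothetical stand-in.

```python
# Hypothetical sketch of a three-agent job-search pipeline.
# In practice each "agent" would call an LLM; here the logic is stubbed.

def researcher(job_description: str) -> dict:
    """Extract the skills a posting asks for (stubbed keyword scan)."""
    known_skills = {"python", "sql", "airflow", "tableau"}
    found = sorted(s for s in known_skills if s in job_description.lower())
    return {"required_skills": found}

def writer(profile: dict, research: dict) -> str:
    """Draft a resume summary tailored to the researched skills."""
    overlap = [s for s in research["required_skills"] if s in profile["skills"]]
    return f"{profile['name']}: experienced in {', '.join(overlap)}."

def reviewer(draft: str) -> str:
    """Critique the draft; here, just flag drafts that are too thin."""
    return "ok" if len(draft) > 20 else "needs detail"

# Each stage feeds the next, like stages in a data pipeline.
profile = {"name": "Jane Doe", "skills": ["python", "sql"]}
jd = "Looking for an analyst with Python and SQL experience."
research = researcher(jd)
draft = writer(profile, research)
print(reviewer(draft))  # prints "ok"
```

The point of the pattern is separation of concerns: each agent can be tuned, swapped, or audited independently, which is what makes the workflow feel like a data system rather than a single monolithic prompt.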
Help us become the #1 Data Podcast by leaving a rating & review! We are 67 reviews away!
I wouldn't try to become a data scientist next year. Here are 4 reasons why and what I'd do instead.
👩‍💻 Want to land a data job in less than 90 days? 👉 https://www.datacareerjumpstart.com/daa
⌚ TIMESTAMPS
00:32 - Reason 1 not to be a data scientist
03:22 - Reason 2 not to be a data scientist
04:55 - Reason 3 not to be a data scientist
07:33 - Reason 4 not to be a data scientist
11:28 - What to do instead
🍿 OTHER EPISODES MENTIONED
Data Analyst Roadmap: https://datacareerpodcast.com/episode/136-how-i-would-become-a-data-analyst-in-2025-if-i-had-to-start-over-again
Get Paid to Learn Data: https://datacareerpodcast.com/episode/137-get-paid-1000s-to-master-data-analytics-skills-in-2025
Get Your Master's Paid For (Thomas): https://datacareerpodcast.com/episode/128-meet-the-math-teacher-who-landed-a-data-job-in-60-days-thomas-gresco
Get Your Master's Paid For (Rachael): https://datacareerpodcast.com/episode/125-how-she-landed-a-business-intelligence-analyst-job-in-less-than-100-days-w-rachael-finch
My review of Georgia Tech's Master's: https://datacareerpodcast.com/episode/38-masters-in-data-analytics-from-georgia-tech-is-it-worth-it
💌 Join 30k+ aspiring data analysts & get my tips in your inbox weekly 👉 https://www.datacareerjumpstart.com/newsletter
🆘 Feeling stuck in your data journey? Come to my next free "How to Land Your First Data Job" training 👉 https://www.datacareerjumpstart.com/training
👩‍💻 Want to land a data job in less than 90 days? 👉 https://www.datacareerjumpstart.com/daa
👔 Ace The Interview with Confidence 👉 https://www.datacareerjumpstart.com//interviewsimulator
🔗 CONNECT WITH AVERY
🎥 YouTube Channel 🤝 LinkedIn 📸 Instagram 🎵 TikTok 💻 Website
Mentioned in this episode: Join the last cohort of 2025! The LAST cohort of The Data Analytics Accelerator for 2025 kicks off on Monday, December 8th and enrollment is officially open!
To celebrate the end of the year, we’re running a special End-of-Year Sale, where you’ll get: ✅ A discount on your enrollment 🎁 6 bonus gifts, including job listings, interview prep, AI tools + more
If your goal is to land a data job in 2026, this is your chance to get ahead of the competition and start strong.
👉 Join the December Cohort & Claim Your Bonuses: https://www.datacareerjumpstart.com/daa
Matt Turck (VC at FirstMark) joins the show to break down the most controversial MAD (Machine Learning, AI, and Data) Landscape yet. This year, the team "declared bankruptcy" and cut over 1,000 logos to better reflect the market reality: a "Cambrian explosion" of AI companies and a fierce "struggle and tension between the very large companies and the startups".
Matt discusses why incumbents are "absolutely not lazy", which categories have "largely just gone away" (like Customer Data Platforms and Reverse ETL), and what new categories (like AI Agents and Local AI) are emerging. We also cover his investment thesis in a world dominated by foundation models, the "very underestimated" European AI scene, and whether an AI could win a Nobel Prize by 2027.
https://www.mattturck.com/mad2025
Master the cutting-edge field of computer vision and artificial intelligence with this accessible guide to the applications of machine learning and deep learning for real-world solutions in robotics, healthcare, and autonomous systems. Applied Computer Vision through Artificial Intelligence provides a thorough and accessible exploration of how machine learning and deep learning are driving breakthroughs in computer vision. This book brings together contributions from leading experts to present state-of-the-art techniques, tools, and frameworks, while demonstrating this technology’s applications in healthcare, autonomous systems, surveillance, robotics, and other real-world domains. By blending theory with hands-on insights, this volume equips readers with the knowledge needed to understand, design, and implement AI-powered vision solutions. Structured to serve both academic and professional audiences, the book not only covers cutting-edge algorithms and methodologies but also addresses pressing challenges, ethical considerations, and future research directions. It serves as a comprehensive reference for researchers, engineers, practitioners, and graduate students, making it an indispensable resource for anyone looking to apply artificial intelligence to solve complex computer vision problems in today’s data-driven world.
Data quality and AI reliability are two sides of the same coin in today's technology landscape. Organizations rushing to implement AI solutions often discover that their underlying data infrastructure isn't prepared for these new demands. But what specific data quality controls are needed to support successful AI implementations? How do you monitor unstructured data that feeds into your AI systems? When hallucinations occur, is it really the model at fault, or is your data the true culprit? Understanding the relationship between data quality and AI performance is becoming essential knowledge for professionals looking to build trustworthy AI systems. Shane Murray is a seasoned data and analytics executive with extensive experience leading digital transformation and data strategy across global media and technology organizations. He currently serves as Senior Vice President of Digital Platform Analytics at Versant Media, where he oversees the development and optimization of analytics capabilities that drive audience engagement and business growth. In addition to his corporate leadership role, he is a founding member of InvestInData, an angel investor collective of data leaders supporting early-stage startups advancing innovation in data and AI. Prior to joining Versant Media, Shane spent over three years at Monte Carlo, where he helped shape AI product strategy and customer success initiatives as Field CTO. Earlier, he spent nearly a decade at The New York Times, culminating as SVP of Data & Insights, where he was instrumental in scaling the company’s data platforms and analytics functions during its digital transformation. His earlier career includes senior analytics roles at Accenture Interactive, Memetrics, and Woolcott Research. Based in New York, Shane continues to be an active voice in the data community, blending strategic vision with deep technical expertise to advance the role of data in modern business. 
In the episode, Richie and Shane explore AI disasters and success stories, the concept of being AI-ready, essential roles and skills for AI projects, data quality's impact on AI, and much more.
Links Mentioned in the Show:
Versant Media
Connect with Shane
Course: Responsible AI Practices
Related Episode: Scaling Data Quality in the Age of Generative AI with Barr Moses, CEO of Monte Carlo Data, Prukalpa Sankar, Cofounder at Atlan, and George Fraser, CEO at Fivetran
Rewatch RADAR AI
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business
Master the next frontier of technology with this book, which provides an in-depth guide to adaptive artificial intelligence and its ability to create flexible, self-governed systems in dynamic industries. Adaptive artificial intelligence represents a significant advancement in the development of AI systems, particularly within various industries that require robust, flexible, and responsive technologies. Unlike traditional AI, which operates based on pre-defined models and static data, adaptive AI is designed to learn and evolve in real time, making it particularly valuable in dynamic and unpredictable environments. This capability is increasingly important in disciplines such as autonomous systems, healthcare, finance, and industrial automation, where the ability to adapt to new information and changing conditions is crucial. In industry development, adaptive AI drives innovation by enabling systems that can continuously improve their performance and decision-making processes without the need for constant human intervention. This leads to more efficient operations, reduced downtime, and enhanced outcomes across sectors. As industries increasingly rely on AI for critical functions, the adaptive capability of these systems becomes a cornerstone for achieving higher levels of automation, reliability, and intelligence in technological solutions.
Readers will find the book:
- Introduces the emerging concept of adaptive artificial intelligence;
- Explores the many applications of adaptive artificial intelligence across various industries;
- Provides comprehensive coverage of reinforcement learning for different domains.
Audience: Research scholars, IT professionals, engineering students, network administrators, artificial intelligence and deep learning experts, and government research agencies looking to innovate with the power of artificial intelligence.
Summary
In this episode of the Data Engineering Podcast Omri Lifshitz (CTO) and Ido Bronstein (CEO) of Upriver talk about the growing gap between AI's demand for high-quality data and organizations' current data practices. They discuss why AI accelerates both the supply and demand sides of data, highlighting that the bottleneck lies in the "middle layer" of curation, semantics, and serving. Omri and Ido outline a three-part framework for making data usable by LLMs and agents (collect, curate, serve) and share the challenges of scaling from POCs to production, including compounding error rates and reliability concerns. They also explore organizational shifts, patterns for managing context windows, pragmatic views on schema choices, and Upriver's approach to building autonomous data workflows using determinism and LLMs at the right boundaries. The conversation concludes with a look ahead to AI-first data platforms where engineers supervise business semantics while automation stitches technical details end-to-end.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Data teams everywhere face the same problem: they're forcing ML models, streaming data, and real-time processing through orchestration tools built for simple ETL. The result? Inflexible infrastructure that can't adapt to different workloads. That's why Cash App and Cisco rely on Prefect. Cash App's fraud detection team got what they needed - flexible compute options, isolated environments for custom packages, and seamless data exchange between workflows. Each model runs on the right infrastructure, whether that's high-memory machines or distributed compute. Orchestration is the foundation that determines whether your data team ships or struggles. ETL, ML model training, AI Engineering, Streaming - Prefect runs it all from ingestion to activation in one platform. Whoop and 1Password also trust Prefect for their data operations. If these industry leaders use Prefect for critical workflows, see what it can do for you at dataengineeringpodcast.com/prefect.
Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
Composable data infrastructure is great, until you spend all of your time gluing it together. Bruin is an open source framework, driven from the command line, that makes integration a breeze. Write Python and SQL to handle the business logic, and let Bruin handle the heavy lifting of data movement, lineage tracking, data quality monitoring, and governance enforcement.
Bruin allows you to build end-to-end data workflows using AI, has connectors for hundreds of platforms, and helps data teams deliver faster. Teams that use Bruin need less engineering effort to process data and benefit from a fully integrated data platform. Go to dataengineeringpodcast.com/bruin today to get started. And for dbt Cloud customers, they'll give you $1,000 credit to migrate to Bruin Cloud.
Your host is Tobias Macey and today I'm interviewing Omri Lifshitz and Ido Bronstein about the challenges of keeping up with the demand for data when supporting AI systems.
Interview
Introduction
How did you get involved in the area of data management?
We're here to talk about "The Growing Gap Between Data & AI". From your perspective, what is this gap, and why do you think it's widening so rapidly right now?
How does this gap relate to the founding story of Upriver? What problems were you and your co-founders experiencing that led you to build this?
The core premise of new AI tools, from RAG pipelines to LLM agents, is that they are only as good as the data they're given. How does this "garbage in, garbage out" problem change when the "in" is not a static file but a complex, high-velocity, and constantly changing data pipeline?
Upriver is described as an "intelligent agent system" and an "autonomous data engineer." This is a fascinating "AI to solve for AI" approach. Can you describe this agent-based architecture and how it specifically works to bridge that data-AI gap?
Your website mentions a "Data Context Layer" that turns "tribal knowledge" into a "machine-usable mode." This sounds critical for AI. How do you capture that context, and how does it make data "AI-ready" in a way that a traditional data catalog or quality tool doesn't?
What are the most innovative or unexpected ways you've seen companies trying to make their data "AI-ready"?
And where are the biggest points of failure you observe?
What has been the most challenging or unexpected lesson you've learned while building an AI system (Upriver) that is designed to fix the data foundation for other AI systems?
When is an autonomous, agent-based approach not the right solution for a team's data quality problems? What organizational or technical maturity is required to even start closing this data-AI gap?
What do you have planned for the future of Upriver? And looking more broadly, how do you see this gap between data and AI evolving over the next few years?
Contact Info
Ido - LinkedIn
Omri - LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
Links
Upriver
RAG == Retrieval Augmented Generation
AI Engineering Podcast Episode
AI Agent
Context Window
Model Finetuning
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
After two weeks off, the Weekender returns with an exploration of the upside and downside risks to the growth outlook and the implications of each for inflation and central bank behavior. We also discuss the outcome of the Trump Asia tour.
Speakers:
Bruce Kasman
Joseph Lupton
This podcast was recorded on 31 October 2025.
This communication is provided for information purposes only. Institutional clients please visit www.jpmm.com/research/disclosures for important disclosures. © 2025 JPMorgan Chase & Co. All rights reserved. This material or any portion hereof may not be reprinted, sold or redistributed without the written consent of J.P. Morgan. It is strictly prohibited to use or share without prior written consent from J.P. Morgan any research material received from J.P. Morgan or an authorized third-party (“J.P. Morgan Data”) in any third-party artificial intelligence (“AI”) systems or models when such J.P. Morgan Data is accessible by a third-party. It is permissible to use J.P. Morgan Data for internal business purposes only in an AI system or model that protects the confidentiality of J.P. Morgan Data so as to prevent any and all access to or use of such J.P. Morgan Data by any third-party.
Fellow Moody's colleague Chris Lafakis joins Mark, Marisa, and Cris as they discuss current economic trends and Chris's recent study on the macroeconomic consequences of hurricanes. Mark starts the conversation by sharing his questions about the latest data on layoffs and how AI is influencing the economy. The team members share their different perspectives before shifting the discussion to the economic toll of Hurricane Melissa and how storms can affect regional economies. Guest: Chris Lafakis – Director of Economic Research, Moody's Analytics For Chris's research on hurricanes and their economic impacts, click here: https://www.economy.com/the-macroeconomic-consequences-of-a-category-5-miami-hurricane Hosts: Mark Zandi – Chief Economist, Moody’s Analytics, Cris deRitis – Deputy Chief Economist, Moody’s Analytics, and Marisa DiNatale – Senior Director - Head of Global Forecasting, Moody’s Analytics Follow Mark Zandi on 'X' and BlueSky @MarkZandi, Cris deRitis on LinkedIn, and Marisa DiNatale on LinkedIn
Questions or Comments, please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The AI landscape is evolving beyond gigantic models like GPT-4 towards a new generation of small, smart, and specialised models that can run privately, securely and efficiently on everyday devices. In this talk, Mehmood explores how these compact models, trained on domain-specific data, deliver powerful performance while reducing energy costs, improving privacy, and removing the need for constant cloud access. From customer service chatbots that understand regional dialects to intelligent on-device assistants in healthcare and retail, discover how small AI is making intelligence more sustainable, secure, and accessible for businesses of all sizes.
Live demonstration of GitMesh's AI-driven issue triage and automatic contributor matching.
Live demonstration of how GitMesh uses AI-driven triage to prioritize issues and match contributors to tasks based on skills.
The origin story behind GitMesh and the motivations for building an AI-powered project management solution for open source.