talk-data.com

Topic: AI/ML (Artificial Intelligence/Machine Learning)

Tags: data_science · algorithms · predictive_analytics

9014 activities tagged

Activity Trend: peak of 1532 activities per quarter (2020-Q1 to 2026-Q1)

Activities

9014 activities · Newest first

In this episode, we climb into the world of nematode architecture — worm towers! Researchers have now captured Caenorhabditis worms forming vertical towers in nature — self-assembled living structures that help worms hitch rides and bridge gaps as a form of collective dispersal.

We explore:

- First real-world evidence of towering in C. elegans and other Caenorhabditis species
- Lab experiments that trigger towering in controlled conditions
- How worms of all life stages can join towers — not just dauers
- Towers that grow, bend, and bridge gaps to reach new environments
- How touch alone can trigger towers to transfer en masse to new habitats

📖 Based on the research article: “Towering behavior and collective dispersal in Caenorhabditis nematodes” Daniela M. Perez, Ryan Greenway, Thomas Stier, Narcís Font-Massot, Assaf Pertzelan, Siyu Serena Ding Published in Current Biology (2025) 🔗 https://doi.org/10.1016/j.cub.2025.05.026

🎧 Subscribe to the WOrM Podcast for more full-organism wonders in behaviour, biomechanics, and evolution!

This podcast is generated with artificial intelligence and curated by Veeren. If you’d like your publication featured on the show, please get in touch.

📩 More info: 🔗 www.veerenchauhan.com 📧 [email protected]

Microsoft Fabric Analytics Engineer Associate Certification Companion: Preparation for DP-600 Microsoft Certification

As organizations increasingly leverage Microsoft Fabric to unify their data engineering, analytics, and governance strategies, the role of the Fabric Analytics Engineer has become more crucial than ever. This book equips readers with the knowledge and hands-on skills required to excel in this domain and pass the DP-600 certification exam confidently. The book covers the entire certification syllabus with clarity and depth, beginning with an overview of Microsoft Fabric. You will gain an understanding of the platform's architecture and how it integrates with data and AI workloads to provide a unified analytics solution. You will then delve into implementing a data warehouse in Microsoft Fabric, exploring techniques to ingest, transform, and store data efficiently. Next, you will learn how to work with semantic models in Microsoft Fabric, enabling you to create intuitive, meaningful data representations for visualization and reporting. Then, you will focus on administration and governance in Microsoft Fabric, emphasizing best practices for security, compliance, and efficient management of analytics solutions. Lastly, you will find detailed practice tests and exam strategies along with supplementary materials to reinforce key concepts. After reading the book, you will have the background and skills necessary both to pass the DP-600 exam and to work as a confident Fabric Analytics Engineer.

What You Will Learn
- A complete understanding of all DP-600 certification exam objectives and requirements
- Key concepts and terminology related to Microsoft Fabric Analytics
- Step-by-step preparation for successfully passing the DP-600 certification exam
- Insights into exam structure, question patterns, and strategies for tackling challenging sections
- Confidence in demonstrating skills validated by the Microsoft Certified: Fabric Analytics Engineer Associate credential

Who This Book Is For
Data engineers, analysts, and professionals with some experience in data engineering or analytics, seeking to expand their knowledge of Microsoft Fabric

In this episode, we dive into the world of job searching and explore how AI tools are revolutionizing the process. Discover practical strategies to overcome common job search challenges, from crafting the perfect resume to acing interviews. Learn how AI can enhance your job search experience, making it more efficient and effective. Whether you're a recent graduate or a seasoned professional, this episode offers valuable insights to help you land your dream job. Tune in and transform your job search journey with the power of AI!

See blog with code: https://medium.com/p/530957cefe65

podcast_episode
by Katherine Marney (Emerging Markets Economic and Policy Research), Jahangir Aziz (Emerging Markets Economic and Policy Research)

Jahangir Aziz and Katie Marney discuss sluggish EM capital flows against a backdrop of trade uncertainty, risks to global growth, elevated Treasury yields, and a weaker dollar. EM capital flows have been languishing and shifting in composition since about 2015. Hopes that a weaker US dollar would break EM capital flows out of their malaise have not been fulfilled. We explore our finding that the dollar's influence as a push factor for EM investment flows has been waning, while US Treasury yields matter more. EM also needs to offer a sufficient growth pick-up to pull in flows. We also discuss China's role as an attractor or substitute for broader EM capital flows. Greater macro stability for many EMs has also lowered their need for capital flows and enabled them to face three big economic and funding shocks over the last five years.

Speakers:

Katherine Marney, Emerging Markets Economic and Policy Research

Jahangir Aziz, Emerging Markets Economic and Policy Research

This podcast was recorded on July 29, 2025.

This communication is provided for information purposes only. Institutional clients can view the related report at https://www.jpmm.com/research/content/GPS-5040188-0 for more information; please visit www.jpmm.com/research/disclosures for important disclosures. © 2025 JPMorgan Chase & Co. All rights reserved. This material or any portion hereof may not be reprinted, sold or redistributed without the written consent of J.P. Morgan. It is strictly prohibited to use or share without prior written consent from J.P. Morgan any research material received from J.P. Morgan or an authorized third-party (“J.P. Morgan Data”) in any third-party artificial intelligence (“AI”) systems or models when such J.P. Morgan Data is accessible by a third-party. It is permissible to use J.P. Morgan Data for internal business purposes only in an AI system or model that protects the confidentiality of J.P. Morgan Data so as to prevent any and all access to or use of such J.P. Morgan Data by any third-party.

Today, we’re joined by Erik Huddleston, Chief Executive Officer of Aprimo, the #1 digital asset management and content operations platform. We talk about:
- Automating content creation, plus scaling upstream & downstream processes with brand safety agents
- Framework for CEOs to think through how to best apply AI more generally
- The importance of role clarity: understanding the core activities that impact the financial plan
- How SaaS vendors can survive tech consolidation by being strategically relevant to the budget owner
- The importance of a good personal knowledge management system

The Model Context Protocol now fully embraces OAuth 2.1 conventions, bringing mature authorization patterns to AI agent ecosystems. Rather than inventing new auth mechanisms, MCP adopted proven OAuth flows, dynamic client registration, as well as the brand-new Protected Resource Metadata conventions. This session explores how the new spec significantly simplifies the developer experience for both MCP client and server implementers, as well as gives developers more flexibility around integration with existing authorization servers.
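As a rough illustration of what this session describes, the sketch below shows the discovery steps an MCP client might perform before starting an OAuth 2.1 authorization-code flow. The well-known paths and metadata fields follow RFC 9728 (Protected Resource Metadata) and RFC 8414 (Authorization Server Metadata), which the MCP authorization spec builds on; the server URL is a hypothetical placeholder, and a real client would normally rely on an MCP SDK rather than raw HTTP calls.

```python
# Sketch of the metadata discovery an MCP client could run before authorizing.
# Endpoints follow RFC 9728 / RFC 8414; mcp.example.com is a placeholder.
import requests

MCP_SERVER = "https://mcp.example.com"  # hypothetical MCP server base URL

# 1. An unauthenticated request is expected to return 401 with a
#    WWW-Authenticate header pointing at the protected resource metadata.
resp = requests.post(f"{MCP_SERVER}/mcp", json={}, timeout=10)
print(resp.status_code, resp.headers.get("WWW-Authenticate", ""))

# 2. Fetch the protected resource metadata to learn which authorization
#    server(s) protect this MCP server.
prm = requests.get(
    f"{MCP_SERVER}/.well-known/oauth-protected-resource", timeout=10
).json()
auth_server = prm["authorization_servers"][0]

# 3. Fetch the authorization server metadata to find the endpoints used by
#    the OAuth 2.1 authorization-code + PKCE flow and, where offered,
#    dynamic client registration.
asm = requests.get(
    f"{auth_server}/.well-known/oauth-authorization-server", timeout=10
).json()
print(asm["authorization_endpoint"], asm["token_endpoint"])
print(asm.get("registration_endpoint", "no dynamic client registration"))
```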

We are entering a new era of user interaction. It's being built right before our very eyes and changing rapidly. As crazy as it sounds, soon each one of us will get our own Jarvis, capable of performing actually useful tasks for us through a completely different interaction mechanism than we're used to. But someone's gotta give Jarvis the tools to perform these tasks, and that's where we come in. In this talk, Kent will show how this AI assistant interaction model is shaping up, and help us catch the vision of what this future could look like and of our role in it.

Join Jay Parikh, Microsoft EVP of Core AI, as he opens MCP DevDays with an exciting look at how the Model Context Protocol is revolutionizing AI application development. Discover why Microsoft is all-in on MCP and how it's accelerating developer productivity across VS Code, GitHub Copilot, Azure AI Foundry, and Windows. This keynote features lightning demos showcasing real-world MCP implementations. Whether you're a developer, tool builder, or AI enthusiast, this session sets the stage for two days of hands-on learning about the protocol that's defining the next generation of intelligent applications.

Statistics Every Programmer Needs

Put statistics into practice with Python! Data-driven decisions rely on statistics. Statistics Every Programmer Needs introduces the statistical and quantitative methods that will help you go beyond “gut feeling” for tasks like predicting stock prices or assessing quality control, with examples using the rich tools of the Python ecosystem.

Statistics Every Programmer Needs will teach you how to:
- Apply foundational and advanced statistical techniques
- Build predictive models and simulations
- Optimize decisions under constraints
- Interpret and validate results with statistical rigor
- Implement quantitative methods using Python

In this hands-on guide, stats expert Gary Sutton blends the theory behind these statistical techniques with practical Python-based applications, offering structured, reproducible, and defensible methods for tackling complex decisions. Well-annotated and reusable Python code listings illustrate each method, with examples you can follow to practice your new skills.

About the Technology
Whether you’re analyzing application performance metrics, creating relevant dashboards and reports, or immersing yourself in a numbers-heavy coding project, every programmer needs to know how to turn raw data into actionable insight. Statistics and quantitative analysis are the essential tools every programmer needs to clarify uncertainty, optimize outcomes, and make informed choices.

About the Book
Statistics Every Programmer Needs teaches you how to apply statistics to the everyday problems you’ll face as a software developer. Each chapter is a new tutorial. You’ll predict ultramarathon times using linear regression, forecast stock prices with time series models, analyze system reliability using Markov chains, and much more. The book emphasizes a balance between theory and hands-on Python implementation, with annotated code and real-world examples to ensure practical understanding and adaptability across industries.

What's Inside
- Probability basics and distributions
- Random variables
- Regression
- Decision trees and random forests
- Time series analysis
- Linear programming
- Monte Carlo and Markov methods
- and much more

About the Reader
Examples are in Python.

About the Author
Gary Sutton is a business intelligence and analytics leader and the author of Statistics Slam Dunk: Statistical analysis with R on real NBA data.

Quotes
"A well-organized tour of the statistical, machine learning and optimization tools every data science programmer needs." - Peter Bruce, Author of Statistics for Data Science and Analytics
"Turns statistics from a stumbling block into a superpower. Clear, relevant, and written with a coder’s mindset!" - Mahima Bansod, LogicMonitor
"Essential! Stats and modeling with an emphasis on real-world system design." - Anupam Samanta, Google
"A great blend of theory and practice." - Ariel Andres, Scotia Global Asset Management
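For a taste of the hands-on style the blurb describes, here is a minimal linear-regression example in Python. It echoes the "predict ultramarathon times" tutorial mentioned above, but it is an illustrative sketch on synthetic data and is not code from the book.

```python
# Illustrative only: a tiny linear-regression workflow in the spirit of the
# book's ultramarathon example, using synthetic data rather than real data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
weekly_km = rng.uniform(40, 160, size=200)    # weekly training volume
elevation = rng.uniform(500, 3000, size=200)  # course elevation gain (m)
# Synthetic finish time in hours: more training helps, more climbing hurts.
finish_hours = 30 - 0.08 * weekly_km + 0.004 * elevation + rng.normal(0, 1.5, 200)

X = np.column_stack([weekly_km, elevation])
X_train, X_test, y_train, y_test = train_test_split(X, finish_hours, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))
```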

Summary
In this episode of the Data Engineering Podcast, Akshay Agrawal from Marimo discusses the innovative new Python notebook environment, which offers a reactive execution model, full Python integration, and built-in UI elements to enhance the interactive computing experience. He discusses the challenges of traditional Jupyter notebooks, such as hidden states and lack of interactivity, and how Marimo addresses these issues with features like reactive execution and Python-native file formats. Akshay also explores the broader landscape of programmatic notebooks, comparing Marimo to other tools like Jupyter, Streamlit, and Hex, highlighting its unique approach to creating data apps directly from notebooks and eliminating the need for separate app development. The conversation delves into the technical architecture of Marimo, its community-driven development, and future plans, including a commercial offering and enhanced AI integration, emphasizing Marimo's role in bridging the gap between data exploration and production-ready applications.
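For readers unfamiliar with Marimo, the sketch below shows roughly what a small notebook file looks like as plain Python: each decorated function is a cell, and Marimo derives a dependency graph from the names each cell reads and returns, which is what enables the reactive execution described above. This is an approximate, hand-written example; the exact boilerplate Marimo generates varies by version.

```python
# A minimal marimo notebook saved as plain Python (sketch only; generated
# files include version-specific boilerplate). Each @app.cell function is a
# cell; marimo reruns only the cells that depend on what changed.
import marimo

app = marimo.App()


@app.cell
def _():
    import marimo as mo
    return (mo,)


@app.cell
def _(mo):
    n = mo.ui.slider(1, 100, value=10, label="n")
    n  # displaying the element makes it interactive in the notebook
    return (n,)


@app.cell
def _(n):
    total = sum(range(n.value + 1))
    total  # recomputed automatically whenever the slider changes
    return (total,)


if __name__ == "__main__":
    app.run()
```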

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Tired of data migrations that drag on for months or even years? What if I told you there's a way to cut that timeline by up to 6x while guaranteeing accuracy? Datafold's Migration Agent is the only AI-powered solution that doesn't just translate your code; it validates every single data point to ensure perfect parity between your old and new systems. Whether you're moving from Oracle to Snowflake, migrating stored procedures to dbt, or handling complex multi-system migrations, they deliver production-ready code with a guaranteed timeline and fixed price. Stop burning budget on endless consulting hours. Visit dataengineeringpodcast.com/datafold to book a demo and see how they're turning months-long migration nightmares into week-long success stories.
- Your host is Tobias Macey and today I'm interviewing Akshay Agrawal about Marimo, a reusable and reproducible Python notebook environment

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you describe what Marimo is and the story behind it?
- What are the core problems and use cases that you are focused on addressing with Marimo?
- What are you explicitly not trying to solve for with Marimo?
- Programmatic notebooks have been around for decades now. Jupyter was largely responsible for making them popular outside of academia. How have the applications of notebooks changed in recent years?
- What are the limitations that have been most challenging to address in production contexts?
- Jupyter has long had support for multi-language notebooks/notebook kernels. What is your opinion on the utility of that feature as a core concern of the notebook system?
- Beyond notebooks, Streamlit and Hex have become quite popular for publishing the results of notebook-style analysis. How would you characterize the feature set of Marimo for those use cases?
- For a typical data team that is working across data pipelines, business analytics, ML/AI engineering, etc., how do you see Marimo applied within and across those contexts?
- One of the common difficulties with notebooks is that they are largely a single-player experience. They may connect into a shared compute cluster for scaling up execution (e.g. Ray, Dask, etc.). How does Marimo address the situation where a data platform team wants to offer notebooks as a service to reduce the friction to getting started with analyzing data in a warehouse/lakehouse context?
- How are you seeing teams integrate Marimo with orchestrators (e.g. Dagster, Airflow, Prefect)?
- What are some of the most interesting or complex engineering challenges that you have had to address while building and evolving Marimo?
- What are the most interesting, innovative, or unexpected ways that you have seen Marimo used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Marimo?
- When is Marimo the wrong choice?
- What do you have planned for the future of Marimo?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links
- Marimo
- Jupyter
- IPython
- Streamlit
- Podcast.init Episode
- Vector Embeddings
- Dimensionality Reduction
- Kaggle
- Pytest
- PEP 723 script dependency metadata
- MatLab
- Visicalc
- Mathematica
- RMarkdown
- RShiny
- Elixir Livebook
- Databricks Notebooks
- Papermill
- Pluto - Julia Notebook
- Hex
- Directed Acyclic Graph (DAG)
- Sumble (Kaggle founder Anthony Goldblum's startup)
- Ray
- Dask
- Jupytext
- nbdev
- DuckDB Podcast Episode
- Iceberg
- Superset
- jupyter-marimo-proxy
- JupyterHub
- Binder
- Nix
- AnyWidget
- Jupyter Widgets
- Matplotlib
- Altair
- Plotly
- DataFusion
- Polars
- MotherDuck

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Healthcare AI is rapidly evolving beyond simple diagnostic tools to comprehensive systems that can analyze and predict patient outcomes. With the rise of multimodal AI models that can process everything from medical images to patient records and genetic information, we're entering an era where AI could fundamentally transform how healthcare decisions are made. But how do we ensure these systems maintain patient privacy while still leveraging vast amounts of medical data? What are the technical challenges in building AI that can reason across different types of medical information? And how do we balance the promise of AI-assisted healthcare with the critical role of human medical professionals?

Professor Aldo Faisal is Chair in AI & Neuroscience at Imperial College London, with joint appointments in Bioengineering and Computing, and also holds the Chair in Digital Health at the University of Bayreuth. He is the Founding Director of the UKRI Centre for Doctoral Training in AI for Healthcare and leads the Brain & Behaviour Lab and Behaviour Analytics Lab at Imperial’s Data Science Institute. His research integrates machine learning, neuroscience, and human behaviour to develop AI technologies for healthcare. He is among the few engineers globally leading their own clinical trials, with work focused on digital biomarkers and AI-based medical interventions. Aldo serves as Associate Editor for Nature Scientific Data and PLOS Computational Biology, and has chaired major conferences like KDD, NIPS, and IEEE BSN. His work has earned multiple awards, including the $50,000 Toyota Mobility Foundation Prize, and is regularly featured in global media outlets.

In the episode, Richie and Aldo explore the advancements in AI for healthcare, including AI's role in diagnostics and operational improvements, the ambitious Nightingale AI project, challenges in handling diverse medical data, privacy concerns, the future of AI-assisted medical decision-making, and much more.

Links Mentioned in the Show:
- Aldo’s Publications
- Connect with Aldo
- Project: What is Your Heart Rate Telling You?
- Related Episode: Using Data to Optimize Costs in Healthcare with Travis Dalton and Jocelyn Jiang, President/CEO & VP of Data & Decision Science at MultiPlan
- Rewatch RADAR AI

New to DataCamp?
- Learn on the go using the DataCamp mobile app
- Empower your business with world-class data and AI skills with DataCamp for business

As trade (hand-shake) deals get made, it is looking increasingly likely that the effective tariff rate is going to settle very close to the 22% rate initially announced on April 2. And yet, the global expansion looks resilient through 1H25. Is the shock just not that big? Is there more fundamental support for growth? Are businesses willing to smooth the shock over time? Are easy financial conditions short-circuiting the shock? Or is it just too soon, with the past flattered by front-loading and a sharp break still to come?

Speakers:

Bruce Kasman

Joseph Lupton

This podcast was recorded on 25 July 2025.

This communication is provided for information purposes only. Institutional clients please visit www.jpmm.com/research/disclosures for important disclosures. © 2025 JPMorgan Chase & Co. All rights reserved. This material or any portion hereof may not be reprinted, sold or redistributed without the written consent of J.P. Morgan. It is strictly prohibited to use or share without prior written consent from J.P. Morgan any research material received from J.P. Morgan or an authorized third-party (“J.P. Morgan Data”) in any third-party artificial intelligence (“AI”) systems or models when such J.P. Morgan Data is accessible by a third-party. It is permissible to use J.P. Morgan Data for internal business purposes only in an AI system or model that protects the confidentiality of J.P. Morgan Data so as to prevent any and all access to or use of such J.P. Morgan Data by any third-party.

Carlo Zapponi—From Data to Narrative: Automating Contextual Annotations with AI (Outlier 2025)

🌟Outlier is a one-of-a-kind data visualization conference hosted by the Data Visualization Society. Outlier brings together all corners of the data visualization community, from artists to business intelligence developers, working in various tech stacks and media. Attendees stretch their creativity and learn from practitioners who they may not otherwise connect with. Learn more on the Outlier website: https://www.outlierconf.com/


📈About the Data Visualization Society: The Data Visualization Society was founded to serve as a professional home for those working across the discipline. Our mission is to connect data visualizers across tech stacks, subject areas, and experience. Advance your skills and grow your network by joining our community: https://www.datavisualizationsociety.org/

Chandni Naidu—Hands on: Data Viz with AI Agents—Create Your Own Multi-agent Studio (Outlier 2025)

🌟Outlier is a one-of-a-kind data visualization conference hosted by the Data Visualization Society. Outlier brings together all corners of the data visualization community, from artists to business intelligence developers, working in various tech stacks and media. Attendees stretch their creativity and learn from practitioners who they may not otherwise connect with. Learn more on the Outlier website: https://www.outlierconf.com/


📈About the Data Visualization Society: The Data Visualization Society was founded to serve as a professional home for those working across the discipline. Our mission is to connect data visualizers across tech stacks, subject areas, and experience. Advance your skills and grow your network by joining our community: https://www.datavisualizationsociety.org/

In this episode, Conor and Bryce chat about AI, how it's changing the way we work, and more.

- Link to Episode 244 on Website
- Discuss this episode, leave a comment, or ask a question (on GitHub)

Socials
- ADSP: The Podcast: Twitter
- Conor Hoekstra: Twitter | BlueSky | Mastodon
- Bryce Adelstein Lelbach: Twitter

Show Notes
Date Generated: 2025-07-01
Date Released: 2025-07-25
- AI Poll Results
- All of Conor's Vibe Coded Projects
- Cursor
- Claude 4
- Vittorio's Camomilla
- GPU Mode
- ADSP Episode 238: Recommended Podcast Discussions on AI & LLMs
- ADSP Episode 239: Claude-Poisoned Dev Sipping Rocket Fuel
- CoRecursive Episode 113: When AI Codes, What’s Left for me?
- My AI Skeptic Friends Are All Nuts - Thomas Ptacek
- ThePrimeTime - How WE Use AI In Software Development

Intro Song Info
Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons — Attribution 3.0 Unported — CC BY 3.0
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library https://youtu.be/iYYxnasvfx8

As AI models become more powerful, the companies building them are facing more powerful adversaries. As AI approaches human level, we expect various risks, but it would be particularly bad if malicious actors got their hands on unprotected versions of extremely intelligent models. To prevent that, AI companies in the future will need to be secured against the strongest adversaries, which global policy think tank RAND refers to as Security Level 5 (SL5) adversaries. The SL5 Task Force team is developing plans and prototypes for how to achieve this level of security, under the assumption that we don’t have time to wait for financial incentives to align. Berlin-based AI researcher and aisafety.berlin organiser Guy will share some of his work in the Task Force and answer questions.

This session introduces the AI Gateway pattern—a central control plane for enterprise AI ecosystems. We'll explore how AI gateways solve real-world challenges through unified API abstraction, intelligent failover mechanisms, semantic caching, centralized guardrails, and granular cost controls. You'll learn practical architectural patterns for building high-availability gateways that handle thousands of concurrent requests while maintaining sub-millisecond decision-making through in-memory operations. The session covers separation of control and data planes, asynchronous logging patterns, and horizontal scaling strategies. It also discusses Model Context Protocol (MCP) integration for managing model access and tool ecosystems to enable natural language automation across enterprise software. Key takeaways include gateway design principles, performance optimization strategies, multi-provider management patterns, and a framework for evaluating AI infrastructure needs.
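To make the failover and caching ideas concrete, here is a heavily simplified Python sketch of a gateway routing function. The provider call functions are hypothetical stand-ins rather than any real SDK, and the cache shown is an exact-match, in-memory cache; a production semantic cache would key on embedding similarity, and a real control plane would also handle guardrails, cost tracking, and asynchronous logging.

```python
# Minimal sketch of two gateway ideas from this session: ordered failover
# across providers and an in-memory response cache. Exact-match caching is
# used here for brevity; semantic caching would compare embeddings instead.
import time
from typing import Callable

def call_provider_a(prompt: str) -> str:
    raise TimeoutError("provider A unavailable")  # simulate an outage

def call_provider_b(prompt: str) -> str:
    return f"[provider B] answer to: {prompt}"

PROVIDERS: list[tuple[str, Callable[[str], str]]] = [
    ("provider_a", call_provider_a),  # preferred provider
    ("provider_b", call_provider_b),  # fallback provider
]

_cache: dict[str, tuple[float, str]] = {}
CACHE_TTL_SECONDS = 300  # serve repeated prompts from memory for 5 minutes

def gateway_complete(prompt: str) -> str:
    """Route a completion request: check the cache, then fail over in order."""
    cached = _cache.get(prompt)
    if cached and time.time() - cached[0] < CACHE_TTL_SECONDS:
        return cached[1]
    last_error: Exception | None = None
    for name, call in PROVIDERS:
        try:
            answer = call(prompt)
            _cache[prompt] = (time.time(), answer)
            return answer
        except Exception as exc:  # record the failure and try the next provider
            last_error = exc
    raise RuntimeError(f"all providers failed: {last_error}")

print(gateway_complete("summarize yesterday's incident report"))
```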

In this episode, we look at the real story behind transformation in data and AI, and why the classic big bang approach often fails to deliver lasting impact. Jason explores when large-scale transformation programmes do make sense, like when you're starting from a fundamentally broken place, or when disruption is the goal. But he also digs into the messy reality of what usually happens: slow delivery, rigid plans, lost trust, and a disconnect between activity and real outcomes. He then makes the case for iterative change: a more human, responsive, and sustainable way to build meaningful transformation over time. With real-world examples and sharp reflections, Jason shares how small, focused steps can create big shifts, and how to blend bold vision with practical delivery. This episode is full of insight for business and data leaders navigating change, delivering transformation, or just trying to make something actually stick.

Cynozure is a leading data, analytics and AI company that helps organisations to reach their data potential. It works with clients on data and AI strategy, data management, data architecture and engineering, analytics and AI, data culture and literacy, and data leadership. The company was named one of The Sunday Times' fastest-growing private companies in both 2022 and 2023 and recognised as The Best Place to Work in Data by DataIQ in 2023 and 2024. Cynozure is a certified B Corporation.