talk-data.com

Topic

AI/ML

Artificial Intelligence/Machine Learning

data_science algorithms predictive_analytics

9014 tagged

Activity Trend

Peak activity: 1532 per quarter · 2020-Q1 to 2026-Q1

Activities

9014 activities · Newest first

Innovating on Wall Street: Kristen McGarry on Data, AI, and Technical Sales

🎧 Tune in for an insider’s look at the technical strategies shaping the future of finance. Kristen McGarry, Principal Account Technical Lead for IBM’s Financial Services Market, returns to Making Data Simple to dive deeper into the intersection of technology and Wall Street. Based in NYC, Kristen works with the world’s largest financial institutions to drive innovation, accelerate time to value, and implement cutting-edge solutions across software, hardware, and services. In this episode, we break down the realities of technical sales, the evolving role of data science in finance, and what Wall Street is getting right (or wrong) about AI. Kristen also shares key insights on the challenges of working with financial giants and predictions for the future of tech in banking.

⏱ Episode Highlights:
📍 02:57 – An Intro to Kristen McGarry
📍 04:36 – Why IBM?
📍 09:25 – The Attraction of Data Science
📍 11:51 – A Day in the Life of an Account Technical Leader
📍 13:30 – Technical Sales versus Sales
📍 15:05 – Continuing to Innovate
📍 19:09 – Dealing with Wall Street
📍 20:17 – The Methodology
📍 22:23 – The How of Technical Sales
📍 23:05 – Continuous Learning
📍 28:03 – Management System
📍 30:34 – Wall Street Learnings
📍 32:20 – Biggest Challenge
📍 33:08 – The Data Challenge
📍 34:22 – Best Data Science Use Cases in Finance
📍 36:14 – What Do Clients Miss on AI?
📍 38:09 – Predictions

LinkedIn: https://www.linkedin.com/in/kristen-mcgarry/
Website: https://www.ibm.com/

#MakingDataSimple #DataScience #AIinFinance #TechSales #WallStreet #IBM #Innovation #FinancialServices #Leadership #ContinuousLearning #AI

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

Did you know that digestion in C. elegans follows a rhythmic pH cycle? In this episode, we explore how magic nanosensors uncover real-time intestinal pH oscillations inside these tiny nematodes. By mapping the gut’s acidic landscape, researchers reveal how proton pumps, digestion, and metabolism work together in a synchronised chemical dance—offering new insights for biomedicine and drug discovery.

🔍 Key Topics Covered: • How pH-sensitive nanosensors track acidity in living organisms • The real-time pH oscillations inside the C. elegans gut • The role of proton pumps and metabolism in digestion • How this discovery could impact gut health and biomedical research

📖 Based on the research article: “Mapping the Pharyngeal and Intestinal pH of Caenorhabditis elegans and Real-Time Luminal pH Oscillations Using Extended Dynamic Range pH-Sensitive Nanosensors” Veeren M. Chauhan, Gianni Orsi, Alan Brown, David I. Pritchard, Jonathan W. Aylott. Published in ACS Nano (2013). 🔗 Read it here: https://doi.org/10.1021/nn401856u

Join us as we uncover how pH-shifting nanosensors are revolutionising our understanding of digestion and metabolism!

🎧 Subscribe to the WoRM Podcast for more deep dives into frontier science!

This podcast is generated with artificial intelligence and curated by Veeren. If you’d like your publication featured on the show, please get in touch.

📩 More info: 🔗 www.veerenchauhan.com 📧 [email protected]

Over the past 1000 days, I've interviewed some of the brightest minds in the data world. And in today’s episode, you’ll hear genius career advice from 6 of my favorite female data analysts. They’ll teach you what it’s like working in data, and help you learn what it takes to actually land a data job.

💌 Join 10k+ aspiring data analysts & get my tips in your inbox weekly 👉 https://www.datacareerjumpstart.com/newsletter
🆘 Feeling stuck in your data journey? Come to my next free "How to Land Your First Data Job" training 👉 https://www.datacareerjumpstart.com/training
👩‍💻 Want to land a data job in less than 90 days? 👉 https://www.datacareerjumpstart.com/daa
👔 Ace The Interview with Confidence 👉 https://www.datacareerjumpstart.com/interviewsimulator

⌚ TIMESTAMPS
00:00 - Introduction
00:45 - Sundas Khalid: Keep going, no matter what.
05:26 - Cole Knaflic: It's less about analysis and more about presentation.
11:38 - Rachael Finch: Always be networking.
23:22 - Jess Ramos: Be proactive in your job search and networking.
28:44 - Hana M.K.: Avoid Shiny Object Syndrome!
32:41 - Erin Shina: The importance of having projects.

Check out the full episodes from this compilation!
1. How This High School Drop Out Became a $500k Data Analyst (Sundas Khalid) - https://datacareerpodcast.com/episode/148-how-this-high-school-drop-out-became-a-500k-data-analyst-sundas-khalid
2. Meet The Woman Who Changed Data Storytelling Forever (Cole Knaflic) - https://datacareerpodcast.com/episode/142-meet-the-woman-who-changed-data-storytelling-forever-cole-knafflic
3. How She Landed a Business Intelligence Analyst Job in Less than 100 Days (w/ Rachael Finch) - https://datacareerpodcast.com/episode/125-how-she-landed-a-business-intelligence-analyst-job-in-less-than-100-days-w-rachael-finch
4. Navigating Your Data Career Journey w/ Jess Ramos - https://datacareerpodcast.com/episode/49-navigating-your-data-career-journey-w-jess-ramos
5. Presenting for Data Analysts w/ Hana M.K. - https://datacareerpodcast.com/episode/84-presenting-for-data-analysts-w-hana-mk
6. From Music to Spreadsheets: Erin Shina’s 90-Day Transformation from Music to Financial Data Analyst - https://datacareerpodcast.com/episode/65-from-music-sheet-to-spreadsheets-erin-shinas-90-day-transformation-from-music-to-financial-data-analyst

🔗 CONNECT WITH AVERY
🎥 YouTube Channel: https://www.youtube.com/@averysmith
🤝 LinkedIn: https://www.linkedin.com/in/averyjsmith/
📸 Instagram: https://instagram.com/datacareerjumpstart
🎵 TikTok: https://www.tiktok.com/@verydata
💻 Website: https://www.datacareerjumpstart.com/

Mentioned in this episode: Join the last cohort of 2025! The LAST cohort of The Data Analytics Accelerator for 2025 kicks off on Monday, December 8th and enrollment is officially open!

To celebrate the end of the year, we’re running a special End-of-Year Sale, where you’ll get: ✅ A discount on your enrollment 🎁 6 bonus gifts, including job listings, interview prep, AI tools + more

If your goal is to land a data job in 2026, this is your chance to get ahead of the competition and start strong.

👉 Join the December Cohort & Claim Your Bonuses: https://www.datacareerjumpstart.com/daa

The rise of AI tools has democratized access to technology, but with it comes the responsibility to use these tools ethically. How do organizations ensure their employees are not only aware of AI's capabilities but also its risks? What does it mean to have a responsible AI strategy that is both comprehensive and adaptable to future advancements? As companies strive to align their AI initiatives with ethical standards, what are the best practices for training and upskilling teams to meet these challenges head-on?

Uthman Ali is the Global Head of Responsible AI at BP and is an expert on AI ethics. As a former human rights lawyer and neuro-ethicist, he recognized how regulations were not keeping up with the pace of innovation and specialized in this emerging field. Some of his current projects include creating ethical policies/procedures for the use of robots, wearables and using AI for creativity.

In the episode, Adel and Uthman explore the importance of responsible AI in organizations, the critical role of upskilling, the impact of the EU AI Act, practical implementation of AI ethics, the spectrum of AI skills needed, the future of AI governance, and much more.

Links Mentioned in the Show:
Report: The State of Data & AI Literacy
Connect with Uthman
Course: Responsible AI Practices
Related Episode: Scaling AI in the Enterprise with Abhas Ricky, Chief Strategy Officer at Cloudera
Sign up to attend RADAR: Skills Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

Andrii Yasinetsky, CTO and co-founder of Diadia Health and AI expert, joined Yuliia to share his perspective on the current AI landscape and its future implications. Currently Andrii and his team are building an AI-first healthcare platform focused on metabolic and hormonal health. Andrii talks about how AI is changing both technology stacks and business economics. We discuss what's wrong with AI claims in enterprises and why the hype doesn't match reality; he points out the decreasing cost of intelligence, and why the middle layer of tech jobs may disappear within five years. Andrii also shares his take on how the recent US administration change has created a "timeline split" that could dramatically accelerate AI innovation, potentially transforming the global economy.

Diadia Health - https://diadiahealth.com
Andrii's LinkedIn - https://www.linkedin.com/in/yasinetsky/

Grokking Relational Database Design

A friendly illustrated guide to designing and implementing your first database. Grokking Relational Database Design makes the principles of designing relational databases approachable and engaging. Everything in this book is reinforced by hands-on exercises and examples.

In Grokking Relational Database Design, you’ll learn how to:
Query and create databases using Structured Query Language (SQL)
Design databases from scratch
Implement and optimize database designs
Take advantage of generative AI when designing databases

A well-constructed database is easy to understand, query, manage, and scale when your app needs to grow. In Grokking Relational Database Design you’ll learn the basics of relational database design, including how to name fields and tables, which data to store where, how to eliminate repetition, good practices for data collection and hygiene, and much more. You won’t need a computer science degree or in-depth knowledge of programming—the book’s practical examples and down-to-earth definitions are beginner-friendly.

About the Technology
Almost every business uses a relational database system. Whether you’re a software developer, an analyst creating reports and dashboards, or a business user just trying to pull the latest numbers, it pays to understand how a relational database operates. This friendly, easy-to-follow book guides you from square one through the basics of relational database design.

About the Book
Grokking Relational Database Design introduces the core skills you need to assemble and query tables using SQL. The clear explanations, intuitive illustrations, and hands-on projects make database theory come to life, even if you can’t tell a primary key from an inner join. As you go, you’ll design, implement, and optimize a database for an e-commerce application and explore how generative AI simplifies the mundane tasks of database design.
What's Inside
Define entities and their relationships
Minimize anomalies and redundancy
Use SQL to implement your designs
Security, scalability, and performance

About the Reader
For self-taught programmers, software engineers, data scientists, and business data users. No previous experience with relational databases assumed.

About the Authors
Dr. Qiang Hao and Dr. Michail Tsikerdekis are both professors of Computer Science at Western Washington University.

Quotes
“If anyone is looking to improve their database design skills, they can’t go wrong with this book.” - Ben Brumm, DatabaseStar
“Goes beyond SQL syntax and explores the core principles. An invaluable resource!” - William Jamir Silva, Adjust
“Relational database design is best done right the first time. This book is a great help to achieve that!” - Maxim Volgin, KLM
“Provides necessary notions to design and build databases that can stand the data challenges we face.” - Orlando Méndez, Experian
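A small sketch of the ideas the blurb describes (eliminating repetition with a primary/foreign key split, then reassembling data with a join), using Python's built-in sqlite3; the table and column names here are invented for illustration, not taken from the book:

```python
import sqlite3

# Instead of repeating customer details on every order row, split the data
# into two tables linked by a key. The email is stored once, not per order.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        email       TEXT NOT NULL UNIQUE
    )""")
conn.execute("""
    CREATE TABLE customer_order (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        total_cents INTEGER NOT NULL
    )""")
conn.execute("INSERT INTO customer VALUES (1, 'Ada', 'ada@example.com')")
conn.executemany("INSERT INTO customer_order VALUES (?, ?, ?)",
                 [(10, 1, 1999), (11, 1, 550)])

# An inner join reassembles the split data on demand.
rows = conn.execute("""
    SELECT c.name, SUM(o.total_cents)
    FROM customer c
    JOIN customer_order o ON o.customer_id = c.customer_id
    GROUP BY c.customer_id
""").fetchall()
print(rows)  # [('Ada', 2549)]
```

This is the "eliminate repetition" principle in miniature: an update to the customer's email now touches one row, not one row per order.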

Summary In this episode of the Data Engineering Podcast Rajan Goyal, CEO and co-founder of Datapelago, talks about improving efficiencies in data processing by reimagining system architecture. Rajan explains the shift from hyperconverged to disaggregated and composable infrastructure, highlighting the importance of accelerated computing in modern data centers. He discusses the evolution from proprietary to open, composable stacks, emphasizing the role of open table formats and the need for a universal data processing engine, and outlines Datapelago's strategy to leverage existing frameworks like Spark and Trino while providing accelerated computing benefits.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.

Your host is Tobias Macey and today I'm interviewing Rajan Goyal about how to drastically improve efficiencies in data processing by re-imagining the system architecture.

Interview
Introduction
How did you get involved in the area of data management?
Can you start by outlining the main factors that contribute to performance challenges in data lake environments?
The different components of open data processing systems have evolved from different starting points with different objectives. In your experience, how has that un-planned and un-synchronized evolution of the ecosystem hindered the capabilities and adoption of open technologies?
The introduction of a new cross-cutting capability (e.g. Iceberg) has typically taken a substantial amount of time to gain support across different engines and ecosystems. What do you see as the point of highest leverage to improve the capabilities of the entire stack with the least amount of co-ordination?
What was the motivating insight that led you to invest in the technology that powers Datapelago?
Can you describe the system design of Datapelago and how it integrates with existing data engines?
The growth in the generation and application of unstructured data is a notable shift in the work being done by data teams. What are the areas of overlap in the fundamental nature of data (whether structured, semi-structured, or unstructured) that you are able to exploit to bridge the processing gap?
What are the most interesting, innovative, or unexpected ways that you have seen Datapelago used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Datapelago?
When is Datapelago the wrong choice?
What do you have planned for the future of Datapelago?

Contact Info
LinkedIn

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links
Datapelago
MIPS Architecture
ARM Architecture
AWS Nitro
Mellanox
Nvidia
Von Neumann Architecture
TPU == Tensor Processing Unit
FPGA == Field-Programmable Gate Array
Spark
Trino
Iceberg (Podcast Episode)
Delta Lake (Podcast Episode)
Hudi (Podcast Episode)
Apache Gluten
Intermediate Representation
Turing Completeness
LLVM
Amdahl's Law
LSTM == Long Short-Term Memory

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
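The links for this episode mention Amdahl's Law, which caps the end-to-end gain from accelerating only part of a data pipeline; that bound is central to any accelerated-computing story. A minimal sketch with illustrative numbers (not from the episode):

```python
def amdahl_speedup(accelerated_fraction: float, factor: float) -> float:
    """Overall speedup when a fraction of the work runs `factor` times
    faster and the rest is unchanged (Amdahl's Law)."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

# Even a 10x accelerator applied to 80% of a query pipeline
# yields only ~3.6x end to end; the untouched 20% dominates.
print(round(amdahl_speedup(0.8, 10), 2))
```

This is why a universal engine that accelerates the whole stack, rather than one operator, is the interesting design point.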

A tumultuous week in the US trade war leaves the world with only modestly higher tariffs on China but more downside risk. While tariffs on USMCA-compliant goods got pushed back (again) to April, noncompliant goods (estimated at 20% of total imports) will be tariffed at 25%. The impact of the chaos alongside the austerity measures of DOGE are likely to weigh on confidence and growth. Odds of global recession this year have jumped to 40%. Absent recession and against the backdrop of a sharp projected rise in German fiscal spending, the US’s own policy actions are likely to end this expansion’s period of US exceptionalism.

Speakers:

Bruce Kasman

Joseph Lupton

This podcast was recorded on 7 March 2025.

This communication is provided for information purposes only. Institutional clients please visit www.jpmm.com/research/disclosures for important disclosures. © 2025 JPMorgan Chase & Co. All rights reserved. This material or any portion hereof may not be reprinted, sold or redistributed without the written consent of J.P. Morgan. It is strictly prohibited to use or share without prior written consent from J.P. Morgan any research material received from J.P. Morgan or an authorized third-party (“J.P. Morgan Data”) in any third-party artificial intelligence (“AI”) systems or models when such J.P. Morgan Data is accessible by a third-party. It is permissible to use J.P. Morgan Data for internal business purposes only in an AI system or model that protects the confidentiality of J.P. Morgan Data so as to prevent any and all access to or use of such J.P. Morgan Data by any third-party.

In this podcast episode, we talked with Adrian Brudaru about ​the past, present and future of data engineering.

About the speaker: Adrian Brudaru studied economics in Romania but soon got bored with how creative the industry was, and chose to go instead for the more factual side. He ended up in Berlin at the age of 25 and started a role as a business analyst. At the age of 30, he had enough of startups and decided to join a corporation, but quickly found out that it did not provide the challenge he wanted. As going back to startups was not a desirable option either, he decided to postpone his decision by taking freelance work and has never looked back since. Five years later, he co-founded a company in the data space to try new things. This company is also looking to release open source tools to help democratize data engineering.

0:00 Introduction to DataTalks.Club
1:05 Discussing trends in data engineering with Adrian
2:03 Adrian's background and journey into data engineering
5:04 Growth and updates on Adrian's company, DLT Hub
9:05 Challenges and specialization in data engineering today
13:00 Opportunities for data engineers entering the field
15:00 The "Modern Data Stack" and its evolution
17:25 Emerging trends: AI integration and Iceberg technology
27:40 DuckDB and the emergence of portable, cost-effective data stacks
32:14 The rise and impact of dbt in data engineering
34:08 Alternatives to dbt: SQLMesh and others
35:25 Workflow orchestration tools: Airflow, Dagster, Prefect, and GitHub Actions
37:20 Audience questions: Career focus in data roles and AI engineering overlaps
39:00 The role of semantics in data and AI workflows
41:11 Focusing on learning concepts over tools when entering the field
45:15 Transitioning from backend to data engineering: challenges and opportunities
47:48 Current state of the data engineering job market in Europe and beyond
49:05 Introduction to Apache Iceberg, Delta, and Hudi file formats
50:40 Suitability of these formats for batch and streaming workloads
52:29 Tools for streaming: Kafka, SQS, and related trends
58:07 Building AI agents and enabling intelligent data applications
59:09 Closing discussion on the place of tools like dbt in the ecosystem

🔗 CONNECT WITH ADRIAN BRUDARU
LinkedIn - /data-team
Website - https://adrian.brudaru.com/

🔗 CONNECT WITH DataTalksClub
Join the community - https://datatalks.club/slack.html
Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/...
Check other upcoming events - https://lu.ma/dtc-events
LinkedIn - /datatalks-club
Twitter - /datatalksclub
Website - https://datatalks.club/

Get ready to dive into the world of DevOps & Cloud tech! This session will help you navigate the complex world of Cloud and DevOps with confidence. This session is ideal for new grads, career changers, and anyone feeling overwhelmed by the buzz around DevOps. We'll break down its core concepts, demystify the jargon, and explore how DevOps is essential for success in the ever-changing technology landscape, particularly in the emerging era of generative AI. A basic understanding of software development concepts is helpful, but enthusiasm to learn is most important.

Vishakha is a Senior Cloud Architect at Google Cloud Platform with over 8 years of DevOps and Cloud experience. Prior to Google, she was a DevOps engineer at AWS and a Subject Matter Expert (SME) for the IaC offering CloudFormation in the NorthAm region. She has experience in diverse domains including Financial Services, Retail, and Online Media. She primarily focuses on Infrastructure Architecture, Design & Automation (IaC), Public Cloud (AWS, GCP), Kubernetes/CNCF tools, Infrastructure Security & Compliance, CI/CD & GitOps, and MLOps.

"What if you have a beautiful SLO Dashboard and it's all red and no one cares?" The mission of Site Reliability Engineering (SRE) is to ensure the reliability, scalability, and performance of critical systems - a goal best achieved through strong collaboration with teams across the organization. We are exploring how SRE is embedded in an organization, how it interfaces with application owners, senior management, business stakeholders and external software/hardware vendors. In all these cases the success of SRE's mission hinges on the effectiveness of the relationships.

We will use plenty of examples of what worked, what failed in our past work and why. Additionally, we will address funding challenges that can unexpectedly impact even well-established SRE teams.

Mike has built his career around driving performance and efficiency, specializing in optimizing the security, availability and speed of cloud applications, data and infrastructure. He developed the first currency program trading system for the Toronto Stock Exchange at UBS and later refined his expertise in optimizing trading systems and migrating core data to the cloud at Morgan Stanley and Transamerica. He is a founding member of the NYZH consultancy, focusing on AI and SRE. Based in Denver, Colorado, Mike is a pilot who enjoys desert racing and cycling, sharing adventures with his wife and three children.

The rise of A/B testing has transformed decision-making in tech, yet its application isn't without challenges. As professionals, how do you navigate the balance between short-term gains and long-term sustainability? What strategies can you employ to ensure your testing methods enhance rather than hinder user experience? And how do you effectively communicate the insights gained from testing to drive meaningful change within your organization?

Vanessa Larco is a former partner at NEA where she led Series A and Series B investment rounds and worked with major consumer companies like DTC jewelry giant Mejuri, menopause symptom relief treatment Evernow, and home-swapping platform Kindred, as well as major enterprise SaaS companies like Assembled, Orby AI, Granica AI, EvidentID, Rocket.Chat, and Forethought AI. She is also a board observer at Forethought, SafeBase, Orby AI, Granica, Modyfi, and HEAVY.AI. She was a board observer at Robinhood until its IPO in 2021. Before she became an investor, she built consumer and enterprise tech herself at Microsoft, Disney, Twilio, and Box as a product leader.

In the episode, Richie and Vanessa explore the evolution of A/B testing in gaming, the balance between data-driven decisions and user experience, the challenges of scaling experimentation, the pitfalls of misaligned metrics, the importance of understanding user behavior, and much more.

Links Mentioned in the Show:
New Enterprise Associates
Connect with Vanessa
Course: Customer Analytics and A/B Testing in Python
Related Episode: Make Your A/B Testing More Effective and Efficient
Sign up to attend RADAR: Skills Edition - Vanessa will be speaking!
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business
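The linked course covers A/B testing in Python; as a rough sketch of the statistics underneath (the function name, conversion counts, and thresholds here are invented for illustration, not from the episode), a two-proportion z-test decides whether a variant's lift is likely real:

```python
import math

def ab_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test for an A/B experiment.

    Returns the z statistic; |z| > 1.96 is roughly p < 0.05 two-sided.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B converts at 5.8% vs 5.0% for A, with 10,000 users each.
z = ab_z_test(500, 10_000, 580, 10_000)
print(round(z, 2))  # ~2.5, so the lift clears the 1.96 bar
```

The harder questions the episode raises, such as whether the metric you are testing is the one that matters, sit outside the arithmetic.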

This episode is a special edition in honour of International Women's Day on 8th March. Host Jason Foster is joined by Lou Hutchins, Director of Data Culture & Literacy at Cynozure, and Rose Attridge, Strategy Advisor at Cynozure. Together, they explore gender diversity in data and AI, the importance of sponsorship and allies, and challenges in male-dominated industries. They also discuss the role of data and AI in driving change, the need for role models, early engagement, and company action.

*****

Cynozure is a leading data, analytics and AI company that helps organisations to reach their data potential. It works with clients on data and AI strategy, data management, data architecture and engineering, analytics and AI, data culture and literacy, and data leadership. The company was named one of The Sunday Times' fastest-growing private companies in both 2022 and 2023 and recognised as The Best Place to Work in Data by DataIQ in 2023 and 2024. Cynozure is a certified B Corporation.

Supported by Our Partners
• WorkOS — The modern identity platform for B2B SaaS.
• The Software Engineer’s Guidebook: Written by me (Gergely) – now out in audio form as well
• Augment Code — AI coding assistant that pro engineering teams love

Not many people know that I have a brother: Balint Orosz. Balint is also in tech, but in many ways, is the opposite of me. While I prefer working on backend and business logic, he always thrived in designing and building UIs. While I opted to work at more established companies, he struck out on his own and started his startup, Distinction. And yet, our professional paths have crossed several times: at one point in time I accepted an offer to join Skyscanner as a Principal iOS Engineer – and as part of the negotiation, I added a clause to my contract that I will not report directly or indirectly to the Head of Mobile: who happened to be my brother, thanks to Skyscanner acquiring his startup the same month that Skyscanner made an offer to hire me.

Today, Balint is the founder and CEO of Craft, a beloved text editor known for its user-friendly interface and sleek design – an app that Apple awarded the prestigious Mac App of the Year in 2021. In our conversation, we explore how Balint approaches building opinionated software with an intense focus on user experience. We discuss the lessons he learned from his time building Distinction and working at Skyscanner that have shaped his approach to Craft and its development.
In this episode, we discuss:
• Balint’s first startup, Distinction, and his time working for Skyscanner after they acquired it
• A case for a balanced engineering culture with both backend and frontend priorities
• Why Balint doesn’t use iOS Auto Layout
• The impact of Craft being personal software on front-end and back-end development
• The balance between customization and engineering fear in frontend work
• The resurgence of local-first software and its role in modern computing
• The value of building a physical prototype
• How Balint uses GenAI to assist with complicated coding projects
• And much more!

Timestamps
(00:00) Intro
(02:13) What it’s like being a UX-focused founder
(09:00) Why it was hard to gain recognition at Skyscanner
(13:12) Takeaways from Skyscanner that Balint brought to Craft
(16:50) How frameworks work and why they aren’t always a good fit
(20:35) An explanation of iOS Auto Layout and its pros and cons
(23:13) Why Balint doesn’t use Auto Layout
(24:23) Why Craft has one code base
(27:46) Craft’s unique toolbar features and a behind the scenes peek at the code
(33:15) Why frontend engineers have fear around customization
(37:11) How Craft’s design system differs from most companies
(42:33) Behaviors and elements Craft uses rather than having a system for everything
(44:12) The back and frontend architecture in building personal software
(48:11) Shifting beliefs in personal computing
(50:15) The challenges faced with operating system updates
(50:48) The resurgence of local-first software
(52:31) The value of opinionated software for consumers
(55:30) Why Craft’s focus is on the user’s emotional experience
(56:50) The size of Craft’s engineering department and platform teams
(59:20) Why Craft moves faster with smaller teams
(1:01:26) Balint’s advice for frontend engineers looking to demonstrate value
(1:04:35) Balint’s breakthroughs using GenAI
(1:07:50) Why Balint still writes code
(1:09:44) Rapid fire round

The Pragmatic Engineer deepdives relevant for this episode:
• The AI hackathon at Craft Docs
• Engineering career paths at Big Tech and scaleups
• Thriving as a Founding Engineer: lessons from the trenches
• The past and future of modern backend practices

See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

In this episode, we uncover a revolutionary approach to imaging the hidden mechanical properties of biological surfaces using Label-Free Brillouin Endo-Microscopy. Unlike traditional imaging techniques that rely on fluorescent labels, this method provides quantitative 3D viscoelastic mapping at the sub-micrometre scale—offering unprecedented insights into biological structures.

🔍 Key Topics Covered: • How Brillouin scattering enables real-time mechanical imaging of living tissue • First-ever 3D stiffness mapping of Caenorhabditis elegans cuticle in situ • Potential applications for non-invasive diagnostics and disease research • The future of elasticity-based biomedical imaging

📖 Based on the research article: “Label-Free Brillouin Endo-Microscopy for the Quantitative 3D Imaging of Sub-Micrometre Biology” Salvatore La Cavera III, Veeren M. Chauhan, et al. Published in Communications Biology (2024). 🔗 Read it here: https://doi.org/10.1038/s42003-024-06126-4

Join us as we explore how this breakthrough could transform biomedical imaging, mechanobiology, and in vivo diagnostics!

🎧 Subscribe to the WoRM Podcast for more deep dives into groundbreaking research!

This podcast is generated with artificial intelligence and curated by Veeren. If you’d like your publication featured on the show, please get in touch.

📩 More info: 🔗 www.veerenchauhan.com 📧 [email protected]

Hands-On APIs for AI and Data Science

Are you ready to grow your skills in AI and data science? A great place to start is learning to build and use APIs in real-world data and AI projects. API skills have become essential for AI and data science success, because they are used in a variety of ways in these fields. With this practical book, data scientists and software developers will gain hands-on experience developing and using APIs with the Python programming language and popular frameworks like FastAPI and Streamlit. As you complete the chapters in the book, you'll be creating portfolio projects that teach you how to:
Design APIs that data scientists and AIs love
Develop APIs using Python and FastAPI
Deploy APIs using multiple cloud providers
Create data science projects such as visualizations and models using APIs as a data source
Access APIs using generative AI and LLMs
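The book builds its APIs with FastAPI; as a framework-free sketch of the same request/response cycle using only the standard library (the /metrics route and its JSON payload are invented for illustration), here is a tiny JSON endpoint plus a client that uses it as a data source:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Invented example payload: model metrics a data science client might fetch.
DATA = {"model": "churn-v2", "accuracy": 0.91}

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            body = json.dumps(DATA).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # silence per-request logging
        pass

# Serve on an ephemeral local port in a background thread.
server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: consume the API as a data source.
url = f"http://127.0.0.1:{server.server_port}/metrics"
with urlopen(url) as resp:
    payload = json.load(resp)
print(payload["model"])

server.shutdown()
```

A framework like FastAPI replaces the handler boilerplate with typed path operations and automatic docs, but the HTTP contract it manages is the same one shown here.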

Are you prepared for the hidden UX taxes that AI and LLM features might be imposing on your B2B customers—without your knowledge? Are you certain that your AI product or features are truly delivering value, or are there unseen taxes working against your users and your product/business? In this episode, I’m delving into some of the UX challenges that I think need to be addressed when implementing LLM and AI features into B2B products.

While AI seems to offer the chance of significantly enhanced productivity, it also introduces a new layer of complexity for UX design. This complexity is not limited to the challenges of designing in a probabilistic medium (i.e., ML/AI); it also extends to defining what "quality" means. When the product team does not share an understanding of what a measurably better UX outcome looks like, improved sales and user adoption are less likely to follow.

I'll also discuss aspects of designing for AI that may be invisible on the surface. How might AI-powered products change the work of B2B users? What are some of the traps I see startup clients, and the founders I advise in MIT's Sandbox venture fund, fall into?

If you're a product leader in B2B or enterprise software and want to make sure your AI capabilities don't end up creating more damage than value for users, this episode will help!

Highlights / Skip to

Improving your AI model accuracy improves outputs, but customers only care about outcomes (4:02)
AI-driven productivity gains also put the customer's "next problem" in front of them sooner. Are you addressing the most urgent problem they have now, or the one they used to have? (7:35)
Products that win will combine AI with tastefully designed deterministic software, because doing everything well for everyone is impossible, and most models alone aren't products (12:55)
Just because your AI app or LLM feature can do "X" doesn't mean people will want it or change their behavior (16:26)
AI agents sound great, but there is a human UX too, and it must enable trust and intervention at the right times (22:14)
Not overheard from customers: "I would buy this/use this if it had AI" (26:52)
Adaptive UIs sound like they'll solve everything, but to reduce friction they need to adapt to the person, not just the format of model outputs (30:20)
Introducing AI adds states and scenarios your product may need to support that may not be obvious right away (37:56)

Quotes from Today’s Episode

Product leaders have to decide how much effort and resources to put into model improvements versus improving the user's experience. Obviously, model quality is important in certain contexts and regulated industries, but when GenAI errors and confabulations are lower risk to the user (i.e., they create minor friction or inconvenience), the broader user experience you facilitate might be what actually determines the true value of your AI features or product. Model accuracy alone will not necessarily lead to happier users or increased adoption. ML models can be quantifiably tested for accuracy with structured tests, but the fact that they're easier to test for quality than something like UX doesn't mean users value those improvements more. The product stands a better chance of creating business value when it clearly demonstrates that it is improving your users' lives. (5:25)

When designing AI agents, there is still a human UX, a beneficiary, in the loop. They have an experience, whether you designed it with intention or not. How much transparency needs to be given to users when an agent does work for them? Should users be able to intervene while the AI is doing that work? Handling errors is something we do in all software, but what about retraining and learning so that future user experiences are better? Is the system learning anything as it goes, and can I tell whether it's learning what I want or need it to learn? What about humans in the loop who might interact with or be affected by the agent's work even if they aren't the agent's owner or "user"? Whose outcomes matter here? At what cost? (22:51)

Customers primarily care about things like raising or changing their status, making more money, making their job easier, and saving time. In fact, I believe a product marketed with GenAI may eventually signal a negative, a burden, to customers, thanks to the inflated and unmet expectations around AI that is poorly implemented in the product UX. Don't assume it will be bought just because it uses AI in a novel way. Customers aren't sitting around wishing for "disruption" from your product; quite the opposite. AI or not, you need to make the customer the hero. Your AI will shine when it delivers an outsized UX outcome for your users. (27:49)

What kind of UX are you delivering right out of the box when a customer tries your AI product or feature? Did you design it for tire kicking, playing around, and user stress testing, or just an idealistic happy path? GenAI features inside B2B products should surface capabilities and constraints, particularly around where users can create value for themselves quickly. Natural hints and well-designed prompt nudges in LLMs, for example, matter to users and to your product team: you set a more realistic expectation of what's possible and help customers get to an outcome sooner. You're also teaching them how to use your solution to get the most value, without asking them to go read a manual. (38:21)