talk-data.com

Topic

AI/ML

Artificial Intelligence/Machine Learning

data_science algorithms predictive_analytics

9014 tagged activities

Activity Trend

Peak of 1,532 activities per quarter; covers 2020-Q1 through 2026-Q1.

Activities

9014 activities · Newest first

Building Agentic AI Systems

In "Building Agentic AI Systems", you will explore how to design and create intelligent and autonomous AI agents that can reason, plan, and adapt. This book dives deep into the principles and practices necessary to unlock the potential of generative AI and agentic systems. From foundation to implementation, you'll gain valuable insights into cutting-edge AI architectures and functionalities. What this Book will help me do Understand the foundational concepts of generative AI and the principles of agentic systems. Develop skills to design AI agents capable of self-reflection, tool utilization, and adaptable planning. Explore strategies for ensuring ethical transparency and safety in autonomous AI systems. Learn practical techniques to build effective multi-agent AI collaborations with real-world applications. Gain insights into designing AI systems with scalability, adaptability, and minimal human intervention. Author(s) Anjanava Biswas and Wrick Talukdar are experts in AI development with years of experience working on generative AI frameworks and autonomous systems. They specialize in creating innovative AI solutions and contributing to AI best practices in the industry. Their dedication to teaching and clarity in writing make technical concepts accessible to developers at all levels. Who is it for? This book is ideal for AI developers, machine learning engineers, and software architects seeking to advance their understanding of designing and implementing intelligent autonomous AI systems. Readers should have a foundational understanding of machine learning principles and basic programming experience, particularly in Python, to follow the book effectively. Understanding of generative AI or large language models is helpful but not required. If you're aiming to build or refine your skills in agent-based AI systems and how they adapt, this book is for you.

Artificial Intelligence is transforming the world of analytics, but how much of the job is changing? In this episode, we explore the AI-powered future of analytics: real-time insights, smarter Business Intelligence (BI) tools, and how Excel is turning heads with ChatGPT integration. Will AI replace analysts or supercharge what they already do best? Ravit Jain, Founder of the Ravit Show, will dive into what's staying the same, what's evolving, and what you need to know to keep up. Discover how AI is revolutionizing analytics, from real-time insights to the future of BI tools and Excel's new ChatGPT capabilities. Stay ahead of the curve—uncover how you can thrive in the AI-powered future of analytics!

What You'll Learn:
How AI-powered Excel is simplifying data wrangling and reporting.
What parts of an analyst's role are evolving, and what core skills will always matter.
How you can future-proof your analytics career in an AI-driven world.

Register for free to be part of the next live session: https://bit.ly/3XB3A8b

Interested in learning more from Ravit? Check out The Ravit Show!

Follow us on Socials: LinkedIn YouTube Instagram (Mavens of Data) Instagram (Maven Analytics) TikTok Facebook Medium X/Twitter

Bruce Kasman and Joe Lupton discuss how goods-sector activity boomed through 1Q, but the signal to take from it is far less clear, particularly as sentiment looks to be souring. We maintain our recession call, but given the uniqueness of the event, we note that the timing is uncertain in an otherwise resilient expansion. Central banks turn a bit more cautious.

Speakers:

Bruce Kasman

Joseph Lupton

This podcast was recorded on 04/17/2025.

This communication is provided for information purposes only. Institutional clients please visit www.jpmm.com/research/disclosures for important disclosures. © 2025 JPMorgan Chase & Co. All rights reserved. This material or any portion hereof may not be reprinted, sold or redistributed without the written consent of J.P. Morgan. It is strictly prohibited to use or share without prior written consent from J.P. Morgan any research material received from J.P. Morgan or an authorized third-party (“J.P. Morgan Data”) in any third-party artificial intelligence (“AI”) systems or models when such J.P. Morgan Data is accessible by a third-party. It is permissible to use J.P. Morgan Data for internal business purposes only in an AI system or model that protects the confidentiality of J.P. Morgan Data so as to prevent any and all access to or use of such J.P. Morgan Data by any third-party.

In this episode of Hub & Spoken, Jason Foster, CEO of Cynozure, chats with Luis Mejia, VP Data, Platforms & AI at PensionBee, about how the company is transforming the pension industry through smart use of data and AI. Luis shares how a digital-first mindset is helping PensionBee enhance customer experience, manage data effectively, and fuel business growth. He dives into how AI is being used in customer service, blending tech with human touch to build trust, and why ethics and transparency matter more than ever. From marketing to customer support, this episode explores the real-world challenges and opportunities of using data and AI. Luis also looks ahead to a future where AI helps democratise data and puts power in the hands of individuals. A must-listen for data and business leaders driving change in a digital world.

Research Luis mentioned in the episode:
https://www.pensionbee.com/uk/press/ai-and-pensions
https://www.pensionbee.com/uk/press/age-vs-ai

Follow Luis on LinkedIn
Follow Jason on LinkedIn

Cynozure is a leading data, analytics and AI company that helps organisations to reach their data potential. It works with clients on data and AI strategy, data management, data architecture and engineering, analytics and AI, data culture and literacy, and data leadership. The company was named one of The Sunday Times' fastest-growing private companies in both 2022 and 2023 and recognised as The Best Place to Work in Data by DataIQ in 2023 and 2024. Cynozure is a certified B Corporation.

Supported by Our Partners:
• WorkOS — The modern identity platform for B2B SaaS.
• Modal — The cloud platform for building AI applications.
• Vanta — Automate compliance and simplify security with Vanta.

What is it like to work at Amazon as a software engineer? Dave Anderson spent over 12 years at Amazon working closely with engineers on his teams: starting as an Engineering Manager (or SDM, in Amazon lingo) and eventually becoming a Director of Engineering. In this episode, he shares a candid look into Amazon's engineering culture—from how promotions work to why teams often run like startups. We get into the hiring process, the role of bar raisers, the pros and cons of extreme frugality, and what it takes to succeed inside one of the world's most operationally intense companies. We also look at how engineering actually works day to day at Amazon—from the tools teams choose to the way they organize and deliver work.

We also discuss:
• The levels at Amazon, from SDE L4 to Distinguished Engineer and VP
• Why engineering managers at Amazon need to write well
• The "Bar Raiser" role in Amazon interview loops
• Why Amazon doesn't care about what programming language you use in interviews
• Amazon's oncall process
• The pros and cons of Amazon's extreme frugality
• What to do if you're getting negative performance feedback
• The importance of having a strong relationship with your manager
• The surprising freedom Amazon teams have to choose their own stack, tools, and ways of working – and how a team chose to use Lisp (!)
• Why startups love hiring former Amazon engineers
• Dave's approach to financial independence and early retirement
• And more!

Timestamps:
(00:00) Intro
(02:08) An overview of Amazon's levels for devs and engineering managers
(07:04) How promotions work for developers at Amazon, and the scope of work at each level
(12:29) Why managers feel pressure to grow their teams
(13:36) A step-by-step, behind-the-scenes glimpse of the hiring process
(23:40) The wide variety of tools used at Amazon
(26:27) How oncall works at Amazon
(32:06) The general approach to handling outages (severity 1-5)
(34:40) A story from Uber illustrating the Amazon outage mindset
(37:30) How VPs assist with outages
(41:38) The culture of frugality at Amazon
(47:27) Amazon's URA target—and why it's mostly not a big deal
(53:37) How managers handle the 'least effective' employees
(58:58) Why other companies are also cutting lower performers
(59:55) Dave's advice for engineers struggling with performance feedback
(1:04:20) Why good managers are expected to bring talent with them to a new org
(1:06:21) Why startups love former Amazon engineers
(1:16:09) How Dave planned for an early retirement
(1:18:10) How a LinkedIn post turned into Scarlet Ink

The Pragmatic Engineer deepdives relevant for this episode:
• Inside Amazon's engineering culture
• A day in the life of a senior manager at Amazon
• Amazon's Operational Plan process with OP1 and OP2

See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Today, I’m talking with Natalia Andreyeva from Infor about AI / ML product management and its application to supply chain software. Natalia is a Senior Director of Product Management for the Nexus AI / ML Solution Portfolio, and she walks us through what is new, and what is not, about designing AI capabilities in B2B software. We also got into why user experience is so critical in data-driven products, and the role of design in ensuring AI produces value. During our chat, Natalia hit on the importance of really nailing down customer needs through solid discovery and the role of product leaders in this non-technical work.

We also tackled some of the trickier aspects of designing for GenAI, digital assistants, the need to keep efforts strongly grounded in value creation for customers, and how even the best ML-based predictive analytics need to consider UX and the amount of evidence that customers need to believe the recommendations. During this episode, Natalia emphasizes a huge key to her work’s success: keeping customers and users in the loop throughout the product development lifecycle.

Highlights / Skip to

What Natalia does as a Senior Director of Product Management for Infor Nexus (1:13)
Who are the people using Infor Nexus Products and what do they accomplish when using them (2:51)
Breaking down who makes up Natalia's team (4:05)
What role does AI play in Natalia's work? (5:32)
How do designers work with Natalia's team? (7:17)
The problem that had Natalia rethink the discovery process when working with AI and machine learning applications (10:28)
Why Natalia isn't worried about competitors catching up to her team's design work (14:24)
How Natalia works with Infor Nexus customers to help them understand the solutions her team is building (23:07)
The biggest challenges Natalia faces with building GenAI and machine learning products (27:25)
Natalia's four steps to success in building AI products and capabilities (34:53)
Where you can find more from Natalia (36:49)

Quotes from Today’s Episode

“I always launch discovery with customers, in the presence of the UX specialist [our designer]. We do the interviews together, and [regardless of who is facilitating] the goal is to understand the pain points of our customers by listening to how they do their jobs today. We do a series of these interviews and we distill them into the customer needs; the problems we need to really address for the customers. And then we start thinking about how to [address these needs]. Data products are a particular challenge because it’s not always that you can easily create a UX that would allow users to realize the value they’re searching for from the solution. And even if we can deliver it, consuming that is typically a challenge, too. So, this is where [design becomes really important]. [...] What I found through the years of experience is that it’s very difficult to explain to people around you what it is that you’re building when you’re dealing with a data-driven product. Is it a dashboard? Is it a workboard? They understand the word data, but that’s not what we are creating. We are creating the actual experience for the outcome that data will deliver to them indirectly, right? So, that’s typically how we work.” - Natalia Andreyeva (7:47)

“[When doing discovery for products without AI], we already have ideas for what we want to get out. We know that there is a space in the market for those solutions to come to life. We just have to understand where. For AI-driven products, it’s not only about [the user’s] understanding of the problem or the design, it is also about understanding if the data exists and if it’s feasible to build the solution to address [the user’s] problem. [Data] feasibility is an extremely important piece because it will drive the UX as well.” - Natalia Andreyeva (10:50)

“When [the team] discussed the problem, it sounded like a simple calculation that needed to be created [for users]. In reality, it was an entire process of thinking of multiple people in the chain [of command] to understand whether or not a medical product was safe to be consumed. That’s the outcome we needed to produce, and when we finally did, we actually celebrated with our customers and with our designers. It was one of the most difficult things that we had to design. So why did this problem actually get solved, and why we were the ones who solved it? It’s because we took the time to understand the current user experience through [our customer] interviews. We connected the dots and translated it all into a visual solution. We would never be able to do that without the proper UX and design in that place for the data.” - Natalia Andreyeva (13:16)

“Everybody is pressured to come up with a strategy [for AI] or explain how AI is being incorporated into their solutions and platform, but it is still essential for all of my peers in product management to focus on the value [we’re] creating for customers. You cannot bypass discovery. Discovery is the essential portion where you have to spend time with your customers, champions, advisors, and their leads, but especially users who are doing this [supply chain] job every single day—so we understand where the pain point really is for them, we solve that pain, and we solve it with our design team as a partner, so that solution can surface value.” - Natalia Andreyeva (22:08)

“GenAI is a new field and new technology. It’s evolving quickly, and nobody really knows how to properly adapt or drive the adoption of AI solutions. The speed of innovation [in the AI field] is a challenge for everybody. People who work on the frontlines (i.e. product, engineering teams) have to stay way ahead of the market. Meanwhile, customers who are going to be using these [AI] solutions are not going to trust the [initial] outcomes. It’s going to take some time for people to become comfortable with them. But it doesn’t mean that your solution is bad or didn’t find the market fit. It’s just not time for your [solution] yet. Educating our users on the value of the solution is also part of that challenge, and [designers] have to be very careful that solutions are accessible. Users do not adopt intimidating solutions.” - Natalia Andreyeva (27:41)

“First, discovery—where we search for the problems. From my experience, [discovery] works better if you’re very structured. I always provide [a customer] with an outline of what needs to happen so it’s not a secret. Then, do the prototyping phase and keep the customer engaged so they can see the quick outcomes of those prototypes. This is where you also have to really include the feasibility of the data if you’re building an AI solution, right? [Prototyping] can be short or long, but you need to keep the customer engaged throughout that phase so they see quick outcomes. Keep on validating this conceptually, you know, on the napkin, in Figma, it doesn’t really matter; you have to keep on keeping them engaged. Then, once you validate it works and the customer likes it, then build. Don’t really go into the deep development work until you know [all of this!] When you do build, create a beta solution. It only has to work so much to prove the value. Then, run the pilot, and if it’s successful, build the MVP, then launch. It’s simple, but it is a lot of work, and you have to keep your customers really engaged through all of those phases. If something doesn’t work [along the way], try to pivot early enough so you still have a viable product at the end.” - Natalia Andreyeva (34:53)

Links

Natalia's LinkedIn

Hammerspace just made headlines with its game-changing funding round! Altimeter's Jamin Ball and Hammerspace CEO David Flynn join us on this episode of Data Unchained to talk about Hammerspace's go-to-market (GTM) strategy. We also discuss the future of distributed data, how Hammerspace is powering performance across multi-cloud, on-prem, and hyperscale systems with zero migration pain, how David's pioneering legacy in NVMe laid the groundwork for the next wave of AI-driven data architecture, and why now is Hammerspace's moment. Cyberpunk by jiglr | https://soundcloud.com/jiglrmusic Music promoted by https://www.free-stock-music.com Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/deed.en_US

#DataUnchained #Hammerspace #AIInfrastructure #EnterpriseAI #DataArchitecture #CloudComputing #HybridCloud #MulticloudStrategy #DataStorage #AltimeterCapital #TechInnovation #FutureOfData #DataOrchestration #GlobalNamespace #GPUEngineering #AIPodcast #TechPodcast #AIInvestments #DigitalTransformation #EdgeComputing

Hosted on Acast. See acast.com/privacy for more information.

Data Hackers News is on the air!! The hottest topics of the week, with the main news from the world of Data, AI, and Technology, which you can also find in our weekly newsletter, now on the Data Hackers podcast!! Press play and listen to this week's Data Hackers News now!

To know everything that is happening in the data world, subscribe to the weekly newsletter: https://www.datahackers.news/

Meet our Data Hackers News commentators:
Monique Femme
Paulo Vasconcellos

Other Data Hackers channels:
Site
LinkedIn
Instagram
TikTok
YouTube

Part 2 of 2. We're back with Part 2 of our fascinating conversation with Phanish Puranam, Professor of Strategy and Organizational Design at INSEAD. In this episode, we explore the next frontier: how intelligent algorithms don't just support organizations—they shape them. From AI ethics and algorithmic bureaucracy to the future of human-AI teams, universal income, and debunking myths, Phanish offers a provocative look at how organizations can—and should—adopt AI in a human-centric way.

⏱️ Chapters
00:12 – Algorithmic Bureaucracy
01:23 – AI Ethics
04:24 – Blockchain
07:51 – AI as Team vs. Individual Agents
11:59 – New Skills
15:12 – Predictions & Universal Income
17:20 – Adopting AI in a Human-Centric Way
19:02 – AI Myths

🔗 Connect with Phanish Puranam
LinkedIn: linkedin.com/in/phanishpuranam
Website: insead.edu/faculty/phanish-puranam

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

How does a nematode’s feeding strategy shape its gut biology and disease resistance? In this episode, we explore a comparative gut transcriptomics study of Caenorhabditis elegans and Pristionchus pacificus that reveals how changes in anatomy and lifestyle have led to major shifts in gene expression and pathogen susceptibility.

We discuss:

Why P. pacificus lacks the grinder structure and how this changes its digestion
What makes their intestinal gene expression profiles so divergent, despite being close relatives
The role of Hedgehog signalling and lineage-specific genes in intestinal development
Surprising findings on gut pH stability despite transcriptomic divergence
How these factors shape resistance to pathogens and environmental adaptation

📖 Based on the research article: “Comparative transcriptomics of the nematode gut identifies global shifts in feeding mode and pathogen susceptibility” James W. Lightfoot, Veeren M. Chauhan, Jonathan W. Aylott & Christian Rödelsperger. Published in BMC Research Notes (2016). 🔗 https://doi.org/10.1186/s13104-016-1886-9

🎧 Subscribe to the WoRM Podcast for more on nematode evolution, gut biology, and systems-level research.

This podcast is generated with artificial intelligence and curated by Veeren. If you’d like your publication featured on the show, please get in touch.

📩 More info: 🔗 ⁠www.veerenchauhan.com⁠ 📧 [email protected]

Summary

In this episode, Mukund Sankar shares a personal story about how a midnight craving led him to create an app that helps him reflect on his emotions rather than just satisfy his hunger. He discusses the impact of emotional eating and how technology, specifically GPT-4, can be used to foster self-awareness and emotional health. The conversation transitions into the development of a digital therapist that encourages users to engage with their feelings and patterns of behavior, ultimately promoting personal growth and emotional well-being.

Takeaways

Mukund reflects on how midnight cravings can lead to deeper emotional insights.
He emphasizes that cravings often stem from emotional needs rather than physical hunger.
The app he created uses GPT-4 to help users articulate their feelings.
Journaling and self-reflection can reduce cravings and improve emotional health.
Avoiding hard conversations can lead to increased cravings and emotional distress.
The app serves as a non-judgmental space for users to express their feelings.
Mukund's experience highlights the importance of communication with oneself.
He encourages listeners to build their own tools for emotional reflection.
The conversation illustrates the intersection of technology and mental health.
Mukund plans to share resources for building similar apps.

Blog: https://medium.com/towards-artificial-intelligence/i-was-about-to-order-taco-bell-again-instead-i-built-an-ai-that-talks-me-down-00021d1310e3

Website: Monitor this for a mini course on how to do it yourself. https://mukundansankar.substack.com/
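As a rough illustration of the kind of call such an app might make, here is a minimal sketch using the OpenAI Python SDK. This is not Mukund's actual code; the prompt wording, the gpt-4 model choice, and the reflect_on_craving helper are assumptions, and it presumes an OPENAI_API_KEY is configured in the environment.

# Illustrative sketch only -- not Mukund's app. Shows the general shape of sending a
# craving journal entry to GPT-4 and asking for a reflective question back.
from openai import OpenAI

client = OpenAI()  # Reads OPENAI_API_KEY from the environment.

def reflect_on_craving(journal_entry: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a non-judgmental listener. When the user describes a "
                        "craving, ask one gentle question that helps them notice the "
                        "feeling underneath it instead of acting on it."},
            {"role": "user", "content": journal_entry},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(reflect_on_craving("It's midnight and I really want to order Taco Bell again."))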

YOU want to break into data analytics but not sure where to start? This interactive choose-your-own-adventure episode will help you! Get ready to make real-life decisions that will shape your data career. Play now and see where your choices take you.

💌 Join 10k+ aspiring data analysts & get my tips in your inbox weekly 👉 https://www.datacareerjumpstart.com/newsletter
🆘 Feeling stuck in your data journey? Come to my next free "How to Land Your First Data Job" training 👉 https://www.datacareerjumpstart.com/training
👩‍💻 Want to land a data job in less than 90 days? 👉 https://www.datacareerjumpstart.com/daa
👔 Ace The Interview with Confidence 👉 https://www.datacareerjumpstart.com/interviewsimulator

⌚ Control this audio using these timestamps:
1:54 - 1 - Data Scientist
3:48 - 2 - Data Analyst
5:42 - 3 - Python
7:36 - 4 - SQL
9:30 - 5 - Keep Learning
11:24 - 6 - Browse Some Jobs
13:18 - 7 - Move On
15:12 - 8 - Apply
17:06 - 9 - Try to Network

🔗 CONNECT WITH AVERY
🎥 YouTube Channel: https://www.youtube.com/@averysmith
🤝 LinkedIn: https://www.linkedin.com/in/averyjsmith/
📸 Instagram: https://instagram.com/datacareerjumpstart
🎵 TikTok: https://www.tiktok.com/@verydata
💻 Website: https://www.datacareerjumpstart.com/

Mentioned in this episode: Join the last cohort of 2025! The LAST cohort of The Data Analytics Accelerator for 2025 kicks off on Monday, December 8th and enrollment is officially open!

To celebrate the end of the year, we're running a special End-of-Year Sale, where you'll get:
✅ A discount on your enrollment
🎁 6 bonus gifts, including job listings, interview prep, AI tools + more

If your goal is to land a data job in 2026, this is your chance to get ahead of the competition and start strong.

👉 Join the December Cohort & Claim Your Bonuses: https://www.datacareerjumpstart.com/daa

Misconceptions about AI's capabilities and the role of data are everywhere. Many believe AI is a singular, all-knowing entity, when in reality it's a collection of algorithms producing intelligence-like outputs. Navigating and understanding the history and evolution of AI, from its origins to today's advanced language models, is crucial. How do these developments, and misconceptions, impact your daily work? Are you leveraging the right tools for your needs, or are you caught up in the allure of cutting-edge technology without considering its practical application?

Andriy Burkov is the author of three widely recognized books: The Hundred-Page Machine Learning Book, The Machine Learning Engineering Book, and, most recently, The Hundred-Page Language Models Book. His books have been translated into a dozen languages and are used as textbooks in many universities worldwide. His work has impacted millions of machine learning practitioners and researchers. He holds a Ph.D. in Artificial Intelligence and is a recognized expert in machine learning and natural language processing. As a machine learning expert and leader, Andriy has successfully led dozens of production-grade AI projects in different business domains at Fujitsu and Gartner. Andriy is currently Machine Learning Lead at TalentNeuron.

In the episode, Richie and Andriy explore misconceptions about AI, the evolution of AI from the 1950s, the relevance of 20th-century AI research, the role of linear algebra in AI, the resurgence of recurrent neural networks, advancements in large language model architectures, the significance of reinforcement learning, the reality of AI agents, and much more.

Links Mentioned in the Show:
Andriy's books: The Hundred-Page Machine Learning Book, The Hundred-Page Language Models Book
TalentNeuron
Connect with Andriy
Skill Track: AI Fundamentals
Related Episode: Unlocking Humanity in the Age of AI with Faisal Hoque, Founder and CEO of SHADOKA
Rewatch sessions from RADAR: Skills Edition

New to DataCamp?
Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

In this episode, we chat about the impacts of artificial intelligence on tech careers: the 11.8% rise in salaries in the field, how to stand out on social media, and the future of jobs with automation and AI. We welcome Lucas Carvalho (Tech & Innovation Editor at LinkedIn) to exchange ideas about the main market movements, the role of communities like Data Hackers in accelerating careers, and practical tips for growing consistently in this new landscape. Remember that you can find all of the Data Hackers community's podcasts on Spotify, iTunes, Google Podcasts, Castbox, and many other platforms.

Featured in this episode:
Lucas Carvalho - Tech & Innovation Editor at LinkedIn

Our Data Hackers panel:
Monique Femme — Head of Community Management at Data Hackers
Paulo Vasconcellos - Co-founder of Data Hackers and Principal Data Scientist at Hotmart
Gabriel Lages — Co-founder of Data Hackers and Data & Analytics Sr. Director at Hotmart

Summary

In this episode of the Data Engineering Podcast, host Tobias Macey interviews Jeremy Edberg, CEO of DBOS, about durable execution and its impact on designing and implementing business logic for data systems. Jeremy explains how DBOS's serverless platform and orchestrator provide local resilience and reduce operational overhead, ensuring exactly-once execution in distributed systems through the use of the Transact library. He discusses the importance of version management in long-running workflows and how DBOS simplifies system design by reducing infrastructure needs like queues and CI pipelines, making it beneficial for data pipelines, AI workloads, and agentic AI. (A brief illustrative sketch of the durable-execution idea appears after the interview outline below.)

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.

Your host is Tobias Macey and today I'm interviewing Jeremy Edberg about durable execution and how it influences the design and implementation of business logic.

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what DBOS is and the story behind it?
What is durable execution?
What are some of the notable ways that inclusion of durable execution in an application architecture changes the ways that the rest of the application is implemented? (e.g. error handling, logic flow, etc.)
Many data pipelines involve complex, multi-step workflows. How does DBOS simplify the creation and management of resilient data pipelines?
How does durable execution impact the operational complexity of data management systems?
One of the complexities in durable execution is managing code/data changes to workflows while existing executions are still processing. What are some of the useful patterns for addressing that challenge and how does DBOS help?
Can you describe how DBOS is architected?
How have the design and goals of the system changed since you first started working on it?
What are the characteristics of Postgres that make it suitable for the persistence mechanism of DBOS?
What are the guiding principles that you rely on to determine the boundaries between the open source and commercial elements of DBOS?
What are the most interesting, innovative, or unexpected ways that you have seen DBOS used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on DBOS?
When is DBOS the wrong choice?
What do you have planned for the future of DBOS?

Contact Info

LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links

DBOS
Exactly Once Semantics
Temporal
Semaphore
Postgres
DBOS Transact
Python
Typescript
Idempotency Keys
Agentic AI
State Machine
YugabyteDB
Podcast Episode
CockroachDB
Supabase
Neon
Podcast Episode
Airflow

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
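Because durable execution is the central idea of this episode, here is a minimal, generic sketch of the concept. It is deliberately not the DBOS Transact API: durable_step, pipeline, and the SQLite checkpoint table are invented for illustration, with SQLite standing in for the Postgres store discussed above. The point it demonstrates is that each step's result is persisted, so a re-run of the same workflow replays completed steps instead of executing them again.

# Generic sketch of durable execution, NOT the DBOS Transact API. Each step's
# output is checkpointed; on restart, completed steps return their saved result
# instead of re-executing, which is how exactly-once-style workflows resume.
import json, sqlite3

db = sqlite3.connect("workflow_state.db")
db.execute("CREATE TABLE IF NOT EXISTS steps (workflow_id TEXT, step TEXT, result TEXT, "
           "PRIMARY KEY (workflow_id, step))")

def durable_step(workflow_id: str, step_name: str, fn, *args):
    row = db.execute("SELECT result FROM steps WHERE workflow_id=? AND step=?",
                     (workflow_id, step_name)).fetchone()
    if row is not None:          # Already ran: replay the saved result.
        return json.loads(row[0])
    result = fn(*args)           # First run: execute and checkpoint.
    db.execute("INSERT INTO steps VALUES (?, ?, ?)",
               (workflow_id, step_name, json.dumps(result)))
    db.commit()
    return result

def pipeline(workflow_id: str):
    raw = durable_step(workflow_id, "extract", lambda: [1, 2, 3])
    clean = durable_step(workflow_id, "transform", lambda xs: [x * 10 for x in xs], raw)
    return durable_step(workflow_id, "load", lambda xs: f"loaded {len(xs)} rows", clean)

if __name__ == "__main__":
    # Running this twice with the same id re-executes nothing: every step replays
    # from the checkpoint table, which is the resilience property discussed above.
    print(pipeline("order-42"))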

Just wrapped up a whirlwind tour, giving a workshop in Atlanta and then attending Google Cloud Next. Back-to-back nonstop action, and I'm glad to be home for a bit.

While at Next, I had a conversation with another tech old-timer friend. We talked about how much fun we're having using AI as a coding assistant. I'm having fun coming up with wild stuff and seeing if it's possible to build it with code. AI's made coding fun again!

📈 This episode is brought to you by GoodData. Design and deploy custom data applications and integrate AI-assisted analytics capabilities wherever your users need them.

For more information, visit https://www.gooddata.com

podcast_episode
by Bruce Kasman (J.P. Morgan), Jahangir Aziz (Emerging Markets Economic and Policy Research), Joe Lupton

Bruce Kasman is joined by Joe Lupton and Jahangir Aziz to discuss the post-Liberation Day back-pedaling that has led some to breathe a sigh of relief. Not us. A 10% universal tax is still a very large shock (7.5x the 2018-19 trade war) and the huge 145% tax (and rising) on China is prohibitive. You cannot stop trade between the world’s two largest economies and not expect pain everywhere. We maintain our call for a 60% likelihood of a US/global recession.

This podcast was recorded on 04/11/2025.


session
by Moontae Lee (LG AI Research), Cesar Naranjo (Moloco), Chelsie Czop (Google Cloud), Kshetrajna Radhaven (Shopify), Newfel Harrat (Google Cloud), Kasper Piskorski, PhD (Technology Innovation Institute)

AI Hypercomputer is a revolutionary system designed to make implementing AI at scale easier and more efficient. In this session, we’ll explore the key benefits of AI Hypercomputer and how it simplifies complex AI infrastructure environments. Then, learn firsthand from industry leaders Shopify, Technology Innovation Institute, Moloco, and LG AI Research on how they leverage Google Cloud’s AI solutions to drive innovation and transform their businesses.

APIs dominate the web, accounting for the majority of all internet traffic. And more AI means more APIs, because they act as an important mechanism to move data into and out of AI applications, AI agents, and large language models (LLMs). So how can you make sure all of these APIs are secure? In this session, we’ll take you through OWASP’s top 10 API and LLM security risks, and show you how to mitigate these risks using Google Cloud’s security portfolio, including Apigee, Model Armor, Cloud Armor, Google Security Operations, and Security Command Center.
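To give a flavor of what mitigating such risks can look like at the application layer, here is a tiny hand-rolled sketch. It is not Apigee, Model Armor, Cloud Armor, or any other Google Cloud product; the key store and the injection patterns are made up for illustration. It shows a constant-time API-key check plus a naive screen for prompt-injection phrases in text headed to an LLM.

# Tiny illustration (not a Google Cloud product): one naive mitigation each for two
# OWASP risk classes the session covers -- broken authentication on an API route and
# prompt injection in text forwarded to an LLM. Real deployments need far more.
import hmac
import re

API_KEYS = {"demo-client": "s3cr3t"}  # Hypothetical key store for the example.

INJECTION_PATTERNS = re.compile(
    r"(ignore (all|previous) instructions|system prompt|reveal your prompt)", re.I)

def authenticate(client_id: str, presented_key: str) -> bool:
    expected = API_KEYS.get(client_id, "")
    return hmac.compare_digest(expected, presented_key)  # Constant-time comparison.

def screen_llm_input(user_text: str) -> str:
    if INJECTION_PATTERNS.search(user_text):
        raise ValueError("possible prompt injection; request rejected")
    return user_text

if __name__ == "__main__":
    assert authenticate("demo-client", "s3cr3t")
    print(screen_llm_input("Summarize this invoice for me."))

Real deployments layer many more controls on top, such as schema validation, rate limiting, output filtering, and centralized logging.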

session
by Kate Brea (Google Cloud), YQ Lu (Google Cloud)
LLM

Bring your laptop and join us for an interactive demo on how to apply large language models (LLMs) from the Vertex AI Model Garden to a business use case, and learn about best practices for monitoring these models in production. We’ll go through an exercise using Colab Enterprise notebooks and learn how to use out-of-the-box tools to monitor RED (rate, error, duration) metrics, configure alerts, and monitor the rate of successful predictions in order to ensure successful use of a Vertex AI model in production.
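For readers unfamiliar with the RED acronym, the sketch below computes rate, error, and duration metrics from an in-memory log of prediction calls. It is illustrative only and does not use the Vertex AI or Cloud Monitoring tooling shown in the demo; PredictionCall and red_metrics are hypothetical names invented for the example.

# Minimal sketch of the RED metrics named in the session (rate, error, duration),
# computed from an in-memory log of prediction calls. Illustrative only; it does not
# use the Vertex AI or Cloud Monitoring tooling covered in the demo.
from dataclasses import dataclass
from statistics import mean

@dataclass
class PredictionCall:
    timestamp_s: float   # When the request was made (seconds).
    duration_ms: float   # How long the model took to respond.
    ok: bool             # Whether the prediction succeeded.

def red_metrics(calls: list[PredictionCall], window_s: float) -> dict:
    if not calls:
        return {"rate_per_s": 0.0, "error_ratio": 0.0, "avg_duration_ms": 0.0}
    return {
        "rate_per_s": len(calls) / window_s,                        # Rate
        "error_ratio": sum(not c.ok for c in calls) / len(calls),   # Errors
        "avg_duration_ms": mean(c.duration_ms for c in calls),      # Duration
    }

if __name__ == "__main__":
    window = [PredictionCall(0.0, 120.0, True),
              PredictionCall(1.5, 340.0, False),
              PredictionCall(2.8, 95.0, True)]
    print(red_metrics(window, window_s=60.0))
    # An alerting rule could fire when error_ratio or avg_duration_ms crosses a threshold.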