In this episode of Hub & Spoken, host Jason Foster welcomes Sam White, the multi-award-winning Founder of Freedom Services Group and Global Founder of Stella Insurance Australia. Sam shares her journey of building Stella Insurance, the challenges and opportunities of creating a digital-first insurance company, the importance of customer experience, and how Stella Insurance is reimagining financial services from a female perspective. Sam also discusses the impact of regulatory changes, the role of AI in the insurance industry, and the significance of diversity in business. This is a real gem of an episode, especially for entrepreneurs and business leaders interested in digital transformation, insurance innovations, and diversity in leadership. Follow Sam: linkedin.com/in/samwhiteentrepreneur/ Follow Jason: linkedin.com/in/jasonbfoster/ ***** Cynozure is a leading data, analytics and AI company that helps organisations to reach their data potential. It works with clients on data and AI strategy, data management, data architecture and engineering, analytics and AI, data culture and literacy, and data leadership. The company was named one of The Sunday Times' fastest-growing private companies in both 2022 and 2023 and recognised as The Best Place to Work in Data by DataIQ in 2023 and 2024. Cynozure is a certified B Corporation.
This blog, the second in a series, explores the mix of infrastructure types that support modern AI. Published at: https://www.eckerson.com/articles/cloud-on-prem-hybrid-oh-my-where-ai-adopters-host-their-projects-and-why
This is a free preview of a paid episode. To hear more, visit dataengineeringcentral.substack.com
It’s time for another episode of the Data Engineering Central Podcast. In this episode, we cover … * uv, a Rust-based tool that replaces pip, Poetry, and the like * Apache X-Table and the future of the lakehouse * How is AI going to affect you? Thanks for being a consumer of Data Engineering Central; your support means a lot. Please share this podcast with your friend…
Supported by Our Partners • Swarmia — The engineering intelligence platform for modern software organizations. • Sentry — Error and performance monitoring for developers. — Why did Meta build its own internal developer tooling instead of using industry-standard solutions like GitHub? Tomas Reimers, former Meta engineer and co-founder of Graphite, joins the show to talk about Meta's custom developer tools – many of which were years ahead of the industry. From Phabricator to Sandcastle and Butterflybot, Tomas shares examples of Meta’s internal tools that transformed developer productivity at the tech giant. Why did working with stacked diffs and using monorepos become best practices at Meta? How are these practices influencing the broader industry? Why are code reviews and testing set to become even more critical as AI transforms how we write software? We answer these, and also discuss: • Meta's custom internal developer tools • Why more tech companies are transitioning from polyrepos to monorepos • A case for different engineering constraints within the same organization • How stacked diffs solve the code review bottleneck • Graphite’s origin story and pivot to their current product • Why code reviews will become a lot more important, the more we use AI coding tools • Tomas’s favorite engineering metric • And much more! — Timestamps (00:00) Intro (02:00) An introduction to Meta’s in-house tooling (05:07) How Meta’s integrated tools work and who built the tools (10:20) An overview of the rules engine, Herald (12:20) The stages of code ownership at Facebook and code ownership at Google and GitHub (14:39) Tomas’s approach to code ownership (16:15) A case for different constraints within different parts of an organization (18:42) The problem that stacked diffs solve for (25:01) How larger companies drive innovation, and who stacked diffs are not for (30:25) Monorepos vs. polyrepos and why Facebook is transitioning to a monorepo (35:31) The advantages of monorepos and why GitHub does not support them (39:55) AI’s impact on software development (42:15) The problems that AI creates, and possible solutions (45:25) How testing might change and the testing AI coding tools are already capable of (48:15) How developer accountability might be a way to solve bugs and bad AI code (53:20) Why stacking hasn’t caught on and Graphite’s work (57:10) Graphite’s origin story (1:01:20) Engineering metrics that matter (1:06:07) Learnings from building a company for developers (1:08:41) Rapid fire round (1:12:41) Closing — The Pragmatic Engineer deep dives relevant for this episode: • Stacked Diffs (and why you should know about them) • Inside Meta’s engineering culture • Shipping to production • How Uber is measuring engineering productivity — See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
The Data Hackers News is on the air!! The week's hottest topics, with the main news from the world of Data, AI and Technology, which you can also find in our weekly newsletter, now on the Data Hackers podcast!! Press play and listen to this week's Data Hackers News now! To stay on top of everything happening in the data world, subscribe to the weekly newsletter: https://www.datahackers.news/ Meet our Data Hackers News commentators: Monique Femme, Paulo Vasconcellos. Other Data Hackers channels: Site, LinkedIn, Instagram, TikTok, YouTube
Panel: How AI Is Shifting Data Infrastructure Left | Joe Reis, Vin Vashishta, Carly Taylor, Chad Sanderson | Shift Left Data Conference 2025
The rapid rise of AI has dramatically elevated the value and strategic importance of data, transforming how upstream software engineers perceive and interact with data workflows. In this expert-led panel, industry leaders will share their experiences and insights into effectively bridging the gap between data teams and software engineers. They will discuss practical strategies for proactively managing data infrastructure, enhancing collaboration, and ensuring high-quality data to support advanced AI-driven development initiatives.
Shifting Left in Banking: Enhancing Machine Learning Models through Proactive Data Quality | Abhi Ghosh | Shift Left Data Conference 2025
Good data, not big data, is becoming more important in today's ecosystem. Machine learning models rely on good-quality data to make their training more efficient and effective. We have traditionally applied data quality checks and balances in a manual, centralized way, putting a lot of onus on our customers. Shifting data quality left brings the checks closer to where data is created, preventing bad data from flowing downstream. Auto-detecting, recommending, and auto-enforcing data quality rules will also make our customers' jobs easier, while creating a more mature and robust data ecosystem.
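The shift-left idea in this abstract can be sketched in a few lines: validate records where they are produced, so bad data never flows downstream. The field names and rules below are purely illustrative, not taken from the talk.

```python
# Hypothetical sketch of "shifting left" data quality: validate records at the
# point of creation instead of in a downstream, centralized check.
# The fields and rules are illustrative examples only.

def make_rules():
    # Rules a producer might auto-enforce before publishing a record.
    return {
        "account_id": lambda v: isinstance(v, str) and len(v) > 0,
        "balance": lambda v: isinstance(v, (int, float)) and v >= 0,
    }

def validate(record, rules):
    """Return a list of violated fields; an empty list means the record may flow."""
    return [f for f, ok in rules.items() if f not in record or not ok(record[f])]

rules = make_rules()
good = {"account_id": "A-1", "balance": 100.0}
bad = {"account_id": "", "balance": -5}

print(validate(good, rules))  # []
print(validate(bad, rules))   # ['account_id', 'balance']
```

In a real pipeline the same check would run inside the producing service, and the auto-detection step would infer these rules from sample data rather than hard-coding them.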
Shift Left with Apache Iceberg Data Products to Power AI | Andrew Madson | Shift Left Data Conference 2025
High-quality, governed, and performant data from the outset is vital for agile, trustworthy enterprise AI systems. Traditional approaches delay addressing data quality and governance, causing inefficiencies and rework. Apache Iceberg, a modern table format for data lakes, empowers organizations to "Shift Left" by integrating data management best practices earlier in the pipeline to enable successful AI systems.
This session covers how Iceberg's schema evolution, time travel, ACID transactions, and Git-like data branching allow teams to validate, version, and optimize data at its source. Attendees will learn to create resilient, reusable data assets, streamline engineering workflows, enforce governance efficiently, and reduce late-stage transformations—accelerating analytics, machine learning, and AI initiatives.
Panel: Shift Left Across the Data Lifecycle—Data Contracts, Transformations, Observability, and Catalogs | Prukalpa Sankar, Tristan Handy, Barr Moses, Chad Sanderson | Shift Left Data Conference 2025
Join industry-leading CEOs Chad (Data Contracts), Tristan (Data Transformations), Barr (Data Observability), and Prukalpa (Data Catalogs) who are pioneering new approaches to operationalizing data by “Shifting Left.” This engaging panel will explore how embedding rigorous data management practices early in the data lifecycle reduces issues downstream, enhances data reliability, and empowers software engineers with clear visibility into data expectations. Attendees will gain insights into how data contracts define accountability, how effective transformations ensure data usability at scale, how proactive data and AI observability drives continuous confidence in data quality, and how catalogs enable data discoverability, accelerating innovation and trust across organizations.
Panel: State of the Data And AI Market | Apoorva Pandhi, Matt Turck, Chris Riccomini, Chad Sanderson | Shift Left Data Conference 2025
Artificial Intelligence is reshaping the landscape of software development, driving a fundamental shift towards empowering developers to take control earlier in the development lifecycle—known as "shift left." In this panel, venture capital leaders and industry experts will explore how emerging trends in AI and data technologies are influencing investment decisions, creating new opportunities, and transforming development workflows. Attendees will gain valuable insights into the evolving market dynamics, understand the strategic significance of shifting left in today's AI-driven world, and discover how organizations and developers can stay ahead in this rapidly changing environment.
Data DevOps applies rigorous software development practices—such as version control, automated testing, and governance—to data workflows, empowering software engineers to proactively manage data changes and address data-related issues directly within application code. By adopting a "shift left" approach with Data DevOps, SWE teams become more aware of data requirements, dependencies, and expectations early in the software development lifecycle, significantly reducing risks, improving data quality, and enhancing collaboration.
This session will provide practical strategies for integrating Data DevOps into application development, enabling teams to build more robust data products and accelerate adoption of production AI systems.
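As a concrete illustration of the session's premise, a data contract can be asserted in the application's own test suite, so a breaking data change fails CI before it reaches downstream consumers. The schema and events below are hypothetical examples, not material from the session.

```python
# Illustrative Data DevOps check: a schema "contract" enforced as an automated
# test, shifting data validation left into the software development lifecycle.
# The contract and events are made up for this sketch.

EXPECTED_SCHEMA = {"order_id": str, "amount": float, "currency": str}

def check_contract(event, schema):
    """Return human-readable violations; an empty list means the event conforms."""
    errors = []
    for field, ftype in schema.items():
        if field not in event:
            errors.append(f"missing field: {field}")
        elif not isinstance(event[field], ftype):
            errors.append(f"wrong type for {field}: {type(event[field]).__name__}")
    return errors

# In CI this would run as a unit test against the code that emits the event.
ok_event = {"order_id": "o-42", "amount": 19.99, "currency": "USD"}
bad_event = {"order_id": "o-43", "amount": "19.99"}
print(check_contract(ok_event, EXPECTED_SCHEMA))  # []
print(check_contract(bad_event, EXPECTED_SCHEMA))
```

The design choice here mirrors ordinary DevOps: the contract lives next to the producing code and is versioned with it, so any change to the event shape is reviewed like any other code change.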
Tracking drug delivery inside cells is a challenge when the drug carrier itself is invisible. In this episode, we discuss a breakthrough in polymer science: the creation of fluorescent poly(lactic-co-glycolic acid) (PLGA) nanoparticles using a one-step, solvent-free dye-initiated polymerisation process. By covalently attaching dyes (blue, green, or red) to every PLGA chain, these nanoparticles become intrinsically fluorescent, meaning their position can be accurately tracked inside cells and tissues without the risk of dye leakage. This study shows how these fluorescent PLGA nanoparticles behave in: Human THP-1 macrophages, where they were tracked using super-resolution microscopy. Live Caenorhabditis elegans, where their journey through the digestive tract was mapped. Drug delivery experiments, where the release of the anticancer drug doxorubicin was simultaneously tracked alongside the polymer carrier. This innovation offers a powerful new tool for researchers studying drug delivery, vaccine carriers, and polymer biodistribution. 📖 Based on the research article: "Facile Dye-Initiated Polymerization of Lactide–Glycolide Generates Highly Fluorescent Poly(lactic-co-glycolic Acid) for Enhanced Characterization of Cellular Delivery" by Mohammad A. Al-Natour, Mohamed D. Yousif, Robert Cavanagh, Amjad Abouselo, Edward A. Apebende, Amir Ghaemmaghami, Dong-Hyun Kim, Jonathan W. Aylott, Vincenzo Taresco, Veeren M. Chauhan & Cameron Alexander. Published in ACS Macro Letters (2020). 🔗 Read the full paper 🎧 Subscribe to the WoRM Podcast for more discoveries at the interface of polymers, drug delivery, and whole-organism research! This podcast is generated with artificial intelligence and curated by Veeren. If you’d like your publication featured on the show, please get in touch.
📩 More info: 🔗 www.veerenchauhan.com 📧 [email protected]
Despite $180 billion spent on big data tools and technologies, poor data quality remains a significant barrier for businesses, especially in achieving Generative AI goals. Published at: https://www.eckerson.com/articles/poor-data-quality-is-a-full-blown-crisis-a-2024-customer-insight-report
Jen Hawkins went from delivering pizzas to becoming a six-figure data analyst at a FAANG company in just 17 weeks. In our chat, she shares her Data Accelerator Program journey, how she used her background and new skills to stay motivated, land job offers, and eventually achieve her dream role. 💌 Join 10k+ aspiring data analysts & get my tips in your inbox weekly 👉 https://www.datacareerjumpstart.com/newsletter 🆘 Feeling stuck in your data journey? Come to my next free "How to Land Your First Data Job" training 👉 https://www.datacareerjumpstart.com/training 👩💻 Want to land a data job in less than 90 days? 👉 https://www.datacareerjumpstart.com/daa 👔 Ace The Interview with Confidence 👉 https://www.datacareerjumpstart.com/interviewsimulator Jen Hawkins' Confessions of an Accidental Delivery Driver: Tableau Supply Chain Project: ⌚ TIMESTAMPS 00:00 - Introduction 00:30 - The Struggles and Turning Points 07:49 - Transitioning to a Data Analyst Role 19:46 - Life as a Data Analyst at a FAANG Company 🔗 CONNECT WITH JEN: 🤝 LinkedIn: https://www.linkedin.com/in/jeandriska/ 🔗 CONNECT WITH AVERY 🎥 YouTube Channel: https://www.youtube.com/@averysmith 🤝 LinkedIn: https://www.linkedin.com/in/averyjsmith/ 📸 Instagram: https://instagram.com/datacareerjumpstart 🎵 TikTok: https://www.tiktok.com/@verydata 💻 Website: https://www.datacareerjumpstart.com/ Mentioned in this episode: Join the last cohort of 2025! The LAST cohort of The Data Analytics Accelerator for 2025 kicks off on Monday, December 8th and enrollment is officially open!
To celebrate the end of the year, we’re running a special End-of-Year Sale, where you’ll get: ✅ A discount on your enrollment 🎁 6 bonus gifts, including job listings, interview prep, AI tools + more
If your goal is to land a data job in 2026, this is your chance to get ahead of the competition and start strong.
👉 Join the December Cohort & Claim Your Bonuses: https://www.datacareerjumpstart.com/daa
Vinoth Chandar (CEO at Onehouse and creator of Apache Hudi) and I chat about the creation of Apache Hudi, the future of open data lakehouses, and much more.
#hudi #data #ai #datalakehouse #dataengineering
The role of data and AI engineers is more critical than ever. With organizations collecting massive amounts of data, the challenge lies in building efficient data infrastructures that can support AI systems and deliver actionable insights. But what does it take to become a successful data or AI engineer? How do you navigate the complex landscape of data tools and technologies? And what are the key skills and strategies needed to excel in this field? Deepak Goyal is a globally recognized authority in Cloud Data Engineering and AI. As the Founder & CEO of Azurelib Academy, he has built a trusted platform for advanced cloud education, empowering over 100,000 professionals and influencing data strategies across Fortune 500 companies. With over 17 years of leadership experience, Deepak has been at the forefront of designing and implementing scalable, real-world data solutions using cutting-edge technologies like Microsoft Azure, Databricks, and Generative AI. In the episode, Richie and Deepak explore the fundamentals of data engineering, the critical skills needed, the intersection with AI roles, career paths, and essential soft skills. They also discuss the hiring process, interview tips, and the importance of continuous learning in a rapidly evolving field, and much more. Links Mentioned in the Show:
• AzureLib
• AzureLib Academy
• Connect with Deepak
• Get Certified! Azure Fundamentals
• Related Episode: Effective Data Engineering with Liya Aizenberg, Director of Data Engineering at Away
• Sign up to attend RADAR: Skills Edition
New to DataCamp?
• Learn on the go using the DataCamp mobile app
• Empower your business with world-class data and AI skills with DataCamp for business
Summary In this episode, Mukund Sankar delves into the often painful experience of job rejection, particularly focusing on the role of Applicant Tracking Systems (ATS) in the hiring process. He shares his personal journey of building an AI-powered resume checker to better understand how these systems work and how they can impact job seekers. Through his experiences, he highlights key lessons about resume optimization, the importance of keyword alignment, and the need to present one's qualifications in a way that resonates with both machines and human recruiters.
Takeaways
• Rejection can feel personal, even when it's not.
• Understanding how ATS works is crucial for job seekers.
• Building an AI-powered tool can provide valuable insights.
• Keyword alignment is essential for passing ATS filters.
• Formatting and structure can significantly affect ATS scoring.
• Tailoring your resume to the job description is key.
• Quantifiable achievements should be highlighted in resumes.
• It's important to translate your experience into machine-readable language.
• Feeling invisible in the job search process is common.
• You don't need to rewrite your resume; just tune it for success.
How You Can Build Your Own: https://mukundansankar.substack.com/p/will-your-resume-pass-the-ats-i-built
Medium Members : https://medium.com/towards-artificial-intelligence/will-your-resume-pass-the-ats-i-built-an-ai-app-to-find-out-a0ad9f3ce4ad
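The keyword-alignment lesson above can be sketched as a toy scorer: count how many of the job description's terms also appear in the resume. This is a deliberately naive stand-in for the author's AI-powered checker; the tokenizer, texts, and scoring are illustrative only, and a real ATS is far more sophisticated.

```python
# Toy keyword-alignment score: fraction of the job description's terms that
# also occur in the resume. Not the author's tool; an illustrative sketch.
import re

def keywords(text):
    # Crude tokenizer: lowercase words, allowing tokens like "c++" or "c#".
    return set(re.findall(r"[a-z][a-z+#.]*", text.lower()))

def alignment(resume, job_desc):
    jd = keywords(job_desc)
    if not jd:
        return 0.0
    return len(keywords(resume) & jd) / len(jd)

jd = "Seeking analyst with SQL, Python and Tableau experience"
resume = "Built dashboards in Tableau; queried data with SQL and Python"
print(round(alignment(resume, jd), 2))
```

A real checker would strip stop words, weight hard skills above filler, and handle synonyms; this sketch only shows why tailoring the resume's wording to the job description moves the score.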
Summary In this episode of the Data Engineering Podcast Roman Gershman, CTO and founder of Dragonfly DB, explores the development and impact of high-speed in-memory databases. Roman shares his experience creating a more efficient alternative to Redis, focusing on performance gains, scalability, and cost efficiency, while addressing limitations such as high throughput and low latency scenarios. He explains how Dragonfly DB solves operational complexities for users and delves into its technical aspects, including maintaining compatibility with Redis while innovating on memory efficiency. Roman discusses the importance of cost efficiency and operational simplicity in driving adoption and shares insights on the broader ecosystem of in-memory data stores, future directions like SSD tiering and vector search capabilities, and the lessons learned from building a new database engine.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management. Data migrations are brutal. They drag on for months, sometimes years, burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
Your host is Tobias Macey and today I'm interviewing Roman Gershman about building a high-speed in-memory database and the impact of the performance gains on data applications.
Interview
• Introduction
• How did you get involved in the area of data management?
• Can you describe what DragonflyDB is and the story behind it?
• What is the core problem/use case that is solved by making a "faster Redis"?
• The other major player in the high performance key/value database space is Aerospike. What are the heuristics that an engineer should use to determine whether to use that vs. Dragonfly/Redis?
• Common use cases for Redis involve application caches and queueing (e.g. Celery/RQ). What are some of the other applications that you have seen Redis/Dragonfly used for, particularly in data engineering use cases?
• There is a piece of tribal wisdom that it takes 10 years for a database to iron out all of the kinks. At the same time, there have been substantial investments in commoditizing the underlying components of database engines. Can you describe how you approached the implementation of DragonflyDB to arrive at a functional and reliable implementation?
• What are the architectural elements that contribute to the performance and scalability benefits of Dragonfly?
• How have the design and goals of the system changed since you first started working on it?
• For teams who migrate from Redis to Dragonfly, beyond the cost savings what are some of the ways that it changes how they think about their overall system design?
• What are the most interesting, innovative, or unexpected ways that you have seen Dragonfly used?
• What are the most interesting, unexpected, or challenging lessons that you have learned while working on DragonflyDB?
• When is DragonflyDB the wrong choice?
• What do you have planned for the future of DragonflyDB?
Contact Info
• GitHub
• LinkedIn
Parting Question
• From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
Links
• DragonflyDB
• Redis
• Elasticache
• ValKey
• Aerospike
• Laravel
• Sidekiq
• Celery
• Seastar Framework
• Shared-Nothing Architecture
• io_uring
• midi-redis
• Dunning-Kruger Effect
• Rust
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
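The shared-nothing architecture referenced in the links can be illustrated with a minimal sketch (not Dragonfly's actual design or code): each key is owned by exactly one shard, chosen by hashing, so shards never coordinate through shared state or locks.

```python
# Minimal shared-nothing sketch: hash each key to exactly one shard, so every
# shard can be served by its own thread with no cross-shard locking.
# This is an illustration of the general idea, not DragonflyDB internals.
NUM_SHARDS = 4

def shard_for(key):
    # Deterministic hash so the same key always lands on the same shard.
    return sum(key.encode()) % NUM_SHARDS

shards = [dict() for _ in range(NUM_SHARDS)]

def set_key(key, value):
    shards[shard_for(key)][key] = value

def get_key(key):
    return shards[shard_for(key)].get(key)

set_key("user:1", "alice")
set_key("user:2", "bob")
print(get_key("user:1"))  # alice
```

In a real engine each shard would run on a pinned thread with its own event loop (the Seastar-style model the links allude to); the routing step shown here is what removes the need for shared locks.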
Our baseline forecast incorporates sustained expansion, but recession risks have become elevated, to a 40% probability, on concerns that aggressive US policies will hit business and household sentiment. With the latest tariff increases set to push US core inflation above a 4% annualized rate next quarter, a household sector with a healthy balance sheet will need to show a willingness to lower its saving rate to cushion this blow.
Speakers:
Bruce Kasman
Joseph Lupton
This podcast was recorded on 28 March 2025.
This communication is provided for information purposes only. Institutional clients please visit www.jpmm.com/research/disclosures for important disclosures. © 2025 JPMorgan Chase & Co. All rights reserved. This material or any portion hereof may not be reprinted, sold or redistributed without the written consent of J.P. Morgan. It is strictly prohibited to use or share without prior written consent from J.P. Morgan any research material received from J.P. Morgan or an authorized third-party (“J.P. Morgan Data”) in any third-party artificial intelligence (“AI”) systems or models when such J.P. Morgan Data is accessible by a third-party. It is permissible to use J.P. Morgan Data for internal business purposes only in an AI system or model that protects the confidentiality of J.P. Morgan Data so as to prevent any and all access to or use of such J.P. Morgan Data by any third-party.