talk-data.com

Topic: GenAI (Generative AI)
Tags: ai, machine_learning, llm
1517 activities tagged

Activity Trend: peak of 192 activities per quarter, 2020-Q1 to 2026-Q1

Activities (1517 · Newest first)

In the retail industry, data science is not just about crunching numbers—it's about driving business impact through well-designed experiments. A/B testing in a physical store setting presents unique challenges that require careful planning and execution. How do you balance the need for statistical rigor with the practicalities of store operations? What role does data science play in ensuring that test results lead to actionable insights?

Philipp Paraguya is the Chapter Lead for Data Science at Aldi DX. Philipp studied applied mathematics and computer science and has worked as a BI and advanced analytics consultant across various industries and projects since graduating. Due to his background as a software developer, he has a strong connection to classic software engineering and the sensible use of data science solutions.

In the episode, Adel and Philipp explore the intricacies of A/B testing in retail, the challenges of running experiments in brick-and-mortar settings, aligning stakeholders for successful experimentation, the evolving role of data scientists, the impact of GenAI on data workflows, and much more.

Links Mentioned in the Show:
• Aldi DX
• Connect with Philipp
• Course: Customer Analytics and A/B Testing in Python
• Related Episode: Can You Use AI-Driven Pricing Ethically? with Jose Mendoza, Academic Director & Clinical Associate Professor at NYU
• Sign up to attend RADAR: Skills Edition
• New to DataCamp? Learn on the go using the DataCamp mobile app
• Empower your business with world-class data and AI skills with DataCamp for Business

Grokking Relational Database Design

A friendly illustrated guide to designing and implementing your first database. Grokking Relational Database Design makes the principles of designing relational databases approachable and engaging. Everything in this book is reinforced by hands-on exercises and examples.

In Grokking Relational Database Design, you’ll learn how to:
• Query and create databases using Structured Query Language (SQL)
• Design databases from scratch
• Implement and optimize database designs
• Take advantage of generative AI when designing databases

A well-constructed database is easy to understand, query, manage, and scale when your app needs to grow. In Grokking Relational Database Design you’ll learn the basics of relational database design, including how to name fields and tables, which data to store where, how to eliminate repetition, good practices for data collection and hygiene, and much more. You won’t need a computer science degree or in-depth knowledge of programming—the book’s practical examples and down-to-earth definitions are beginner-friendly.

About the Technology
Almost every business uses a relational database system. Whether you’re a software developer, an analyst creating reports and dashboards, or a business user just trying to pull the latest numbers, it pays to understand how a relational database operates. This friendly, easy-to-follow book guides you from square one through the basics of relational database design.

About the Book
Grokking Relational Database Design introduces the core skills you need to assemble and query tables using SQL. The clear explanations, intuitive illustrations, and hands-on projects make database theory come to life, even if you can’t tell a primary key from an inner join. As you go, you’ll design, implement, and optimize a database for an e-commerce application and explore how generative AI simplifies the mundane tasks of database design.

What's Inside
• Define entities and their relationships
• Minimize anomalies and redundancy
• Use SQL to implement your designs
• Security, scalability, and performance

About the Reader
For self-taught programmers, software engineers, data scientists, and business data users. No previous experience with relational databases assumed.

About the Authors
Dr. Qiang Hao and Dr. Michail Tsikerdekis are both professors of Computer Science at Western Washington University.

Quotes
“If anyone is looking to improve their database design skills, they can’t go wrong with this book.” - Ben Brumm, DatabaseStar
“Goes beyond SQL syntax and explores the core principles. An invaluable resource!” - William Jamir Silva, Adjust
“Relational database design is best done right the first time. This book is a great help to achieve that!” - Maxim Volgin, KLM
“Provides necessary notions to design and build databases that can stand the data challenges we face.” - Orlando Méndez, Experian
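To make the "entities and their relationships" idea concrete, here is a minimal sketch using Python's built-in sqlite3 module (illustrative only, not taken from the book; the table and column names are invented): a customer table and a purchase table linked by a foreign key, so customer details are stored once rather than repeated on every purchase row.

```python
import sqlite3

# Illustrative only: a tiny e-commerce-style schema with two related entities,
# showing a primary key / foreign key relationship and avoiding repeated customer data.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

conn.executescript("""
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    email       TEXT NOT NULL UNIQUE
);

CREATE TABLE purchase (
    purchase_id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    item        TEXT NOT NULL,
    amount      REAL NOT NULL
);
""")

conn.execute("INSERT INTO customer (name, email) VALUES (?, ?)", ("Ada", "ada@example.com"))
conn.execute("INSERT INTO purchase (customer_id, item, amount) VALUES (?, ?, ?)", (1, "keyboard", 49.99))

# A join resolves the relationship without duplicating customer details in every purchase row.
for row in conn.execute("""
    SELECT c.name, p.item, p.amount
    FROM purchase AS p
    JOIN customer AS c ON c.customer_id = p.customer_id
"""):
    print(row)
```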

Get ready to dive into the world of DevOps & Cloud tech! This session will help you navigate the complex world of Cloud and DevOps with confidence. This session is ideal for new grads, career changers, and anyone feeling overwhelmed by the buzz around DevOps. We'll break down its core concepts, demystify the jargon, and explore how DevOps is essential for success in the ever-changing technology landscape, particularly in the emerging era of generative AI. A basic understanding of software development concepts is helpful, but enthusiasm to learn is most important.

Vishakha is a Senior Cloud Architect at Google Cloud Platform with over 8 years of DevOps and Cloud experience. Prior to Google, she was a DevOps engineer at AWS and a Subject Matter Expert (SME) for the IaC offering CloudFormation in the NorthAm region. She has experience in diverse domains including Financial Services, Retail, and Online Media. She primarily focuses on Infrastructure Architecture, Design & Automation (IaC), Public Cloud (AWS, GCP), Kubernetes/CNCF tools, Infrastructure Security & Compliance, CI/CD & GitOps, and MLOps.

Supported by Our Partners
• WorkOS — The modern identity platform for B2B SaaS.
• The Software Engineer’s Guidebook: Written by me (Gergely) – now out in audio form as well
• Augment Code — AI coding assistant that pro engineering teams love

Not many people know that I have a brother: Balint Orosz. Balint is also in tech, but in many ways, is the opposite of me. While I prefer working on backend and business logic, he always thrived in designing and building UIs. While I opted to work at more established companies, he struck out on his own and started his startup, Distinction. And yet, our professional paths have crossed several times: at one point I accepted an offer to join Skyscanner as a Principal iOS Engineer – and as part of the negotiation, I added a clause to my contract that I would not report directly or indirectly to the Head of Mobile, who happened to be my brother, thanks to Skyscanner acquiring his startup the same month that Skyscanner made an offer to hire me.

Today, Balint is the founder and CEO of Craft, a beloved text editor known for its user-friendly interface and sleek design – an app that Apple awarded the prestigious Mac App of the Year in 2021. In our conversation, we explore how Balint approaches building opinionated software with an intense focus on user experience. We discuss the lessons he learned from his time building Distinction and working at Skyscanner that have shaped his approach to Craft and its development.

In this episode, we discuss:
• Balint’s first startup, Distinction, and his time working for Skyscanner after they acquired it
• A case for a balanced engineering culture with both backend and frontend priorities
• Why Balint doesn’t use iOS Auto Layout
• The impact of Craft being personal software on front-end and back-end development
• The balance between customization and engineering fear in frontend work
• The resurgence of local-first software and its role in modern computing
• The value of building a physical prototype
• How Balint uses GenAI to assist with complicated coding projects
• And much more!
Timestamps
(00:00) Intro
(02:13) What it’s like being a UX-focused founder
(09:00) Why it was hard to gain recognition at Skyscanner
(13:12) Takeaways from Skyscanner that Balint brought to Craft
(16:50) How frameworks work and why they aren’t always a good fit
(20:35) An explanation of iOS Auto Layout and its pros and cons
(23:13) Why Balint doesn’t use Auto Layout
(24:23) Why Craft has one code base
(27:46) Craft’s unique toolbar features and a behind the scenes peek at the code
(33:15) Why frontend engineers have fear around customization
(37:11) How Craft’s design system differs from most companies
(42:33) Behaviors and elements Craft uses rather than having a system for everything
(44:12) The back and frontend architecture in building personal software
(48:11) Shifting beliefs in personal computing
(50:15) The challenges faced with operating system updates
(50:48) The resurgence of local-first software
(52:31) The value of opinionated software for consumers
(55:30) Why Craft’s focus is on the user’s emotional experience
(56:50) The size of Craft’s engineering department and platform teams
(59:20) Why Craft moves faster with smaller teams
(1:01:26) Balint’s advice for frontend engineers looking to demonstrate value
(1:04:35) Balint’s breakthroughs using GenAI
(1:07:50) Why Balint still writes code
(1:09:44) Rapid fire round

The Pragmatic Engineer deepdives relevant for this episode:
• The AI hackathon at Craft Docs
• Engineering career paths at Big Tech and scaleups
• Thriving as a Founding Engineer: lessons from the trenches
• The past and future of modern backend practices

See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Hands-On APIs for AI and Data Science

Are you ready to grow your skills in AI and data science? A great place to start is learning to build and use APIs in real-world data and AI projects. API skills have become essential for AI and data science success because they are used in a variety of ways in these fields. With this practical book, data scientists and software developers will gain hands-on experience developing and using APIs with the Python programming language and popular frameworks like FastAPI and Streamlit. As you complete the chapters in the book, you'll be creating portfolio projects that teach you how to:
• Design APIs that data scientists and AIs love
• Develop APIs using Python and FastAPI
• Deploy APIs using multiple cloud providers
• Create data science projects such as visualizations and models using APIs as a data source
• Access APIs using generative AI and LLMs
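For a feel of what that FastAPI work looks like in practice, here is a minimal sketch (not from the book; the endpoint, field names, and toy classification rule are invented for illustration) of an API a data science project could call for predictions:

```python
# Illustrative sketch only: a minimal FastAPI service exposing a prediction-style endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Demo prediction API")

class Features(BaseModel):
    sepal_length: float
    sepal_width: float

@app.post("/predict")
def predict(features: Features) -> dict:
    # Stand-in for a real model: a hard-coded rule keeps the example self-contained.
    label = "setosa" if features.sepal_width > 3.0 else "versicolor"
    return {"prediction": label}

# Run locally with: uvicorn main:app --reload
# then POST JSON like {"sepal_length": 5.1, "sepal_width": 3.5} to /predict.
```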

Are you prepared for the hidden UX taxes that AI and LLM features might be imposing on your B2B customers—without your knowledge? Are you certain that your AI product or features are truly delivering value, or are there unseen taxes working against your users and your product and business? In this episode, I’m delving into some of the UX challenges that I think need to be addressed when implementing LLM and AI features in B2B products.

While AI seems to offer the chance for significantly enhanced productivity, it also introduces a new layer of complexity for UX design. This complexity is not limited to the challenges of designing in a probabilistic medium (i.e. ML/AI); it also includes being able to define what “quality” means. When the product team does not have a shared understanding of what a measurably better UX outcome means, improved sales and user adoption are less likely to follow.

I’ll also discuss aspects of designing for AI that may be invisible on the surface. How might AI-powered products change the work of B2B users? What are some of the traps I see startup clients and founders I advise in MIT’s Sandbox venture fund fall into?

If you’re a product leader in B2B / enterprise software and want to make sure your AI capabilities don’t end up creating more damage than value for users,  this episode will help!  

Highlights / Skip to:

• Improving your AI model accuracy improves outputs—but customers only care about outcomes (4:02)
• AI-driven productivity gains also put the customer’s “next problem” in their face sooner. Are you addressing the most urgent problem they now have—or the one they used to have? (7:35)
• Products that win will combine AI with tastefully designed deterministic software—because doing everything for everyone well is impossible, and most models alone aren’t products (12:55)
• Just because your AI app or LLM feature can do “X” doesn't mean people will want it or change their behavior (16:26)
• AI agents sound great—but there is a human UX too, and it must enable trust and intervention at the right times (22:14)
• Not overheard from customers: “I would buy this/use this if it had AI” (26:52)
• Adaptive UIs sound like they’ll solve everything—but to reduce friction, they need to adapt to the person, not just the format of model outputs (30:20)
• Introducing AI adds more states and scenarios your product may need to support, and these may not be obvious right away (37:56)

Quotes from Today’s Episode

Product leaders have to decide how much effort and resources to put into model improvements versus improving the user’s experience. Obviously, model quality is important in certain contexts and regulated industries, but when GenAI errors and confabulations are lower risk to the user (i.e. they create minor friction or inconveniences), the broader user experience you facilitate might be what actually determines the true value of your AI features or product. Model accuracy alone is not necessarily going to lead to happier users or increased adoption. ML models can be quantifiably tested for accuracy with structured tests, but just because they’re easier to test for quality than something like UX doesn’t mean users value those improvements more. The product will stand a better chance of creating business value when it is clearly demonstrating that it is improving your users’ lives. (5:25)

When designing AI agents, there is still a human UX - a beneficiary - in the loop. They have an experience, whether you designed it with intention or not. How much transparency needs to be given to users when an agent does work for them? Should users be able to intervene when the AI is doing this type of work? Handling errors is something we do in all software, but what about retraining and learning so that the future user experience is better? Is the system learning anything while it’s going through this—and can I tell if it’s learning what I want/need it to learn? What about humans in the loop who might interact with or be affected by the work the agent is doing, even if they aren’t the agent’s owner or “user”? Whose outcomes matter here? At what cost? (22:51)

Customers primarily care about things like raising or changing their status, making more money, making their job easier, saving time, etc. In fact, I believe a product marketed with GenAI may eventually signal a negative or a burden to customers, thanks to the inflated and unmet expectations around AI that is poorly implemented in the product UX. Don’t assume it’s going to be bought just because it uses AI in a novel way. Customers aren’t sitting around wishing for “disruption” from your product; quite the opposite. AI or not, you need to make the customer the hero. Your AI will shine when it delivers an outsized UX outcome for your users. (27:49)

What kind of UX are you delivering right out of the box when a customer tries out your AI product or feature? Did you design it for tire kicking, playing around, and user stress testing? Or just an idealistic happy path? GenAI features inside B2B products should surface capabilities and constraints, particularly around where users can create value for themselves quickly. Natural hints and well-designed prompt nudges in LLMs, for example, are important to users and to your product team, because you’re setting a more realistic expectation of what’s possible with customers and helping them get to an outcome sooner. You’re also teaching them how to use your solution to get the most value—without asking them to go read a manual. (38:21)

Generative AI has transformed the financial services sector, sparking interest at all organizational levels. As AI becomes more accessible, professionals are exploring its potential to enhance their work. How can AI tools improve personalization and fraud detection? What efficiencies can be gained in product development and internal processes? These are the questions driving the adoption of AI as companies strive to innovate responsibly while maximizing value.

Andrew serves as the Chief Data Officer for Mastercard, leading the organization’s data strategy and innovation efforts while navigating current and future data risks. Andrew’s prior roles at Mastercard include Senior Vice President, Data Management, in which he was responsible for the quality, collection, and use of data for Mastercard’s information services and advisory business, and Mastercard’s Deputy Chief Privacy Officer, in which he was responsible for privacy and data protection issues globally for Mastercard. Andrew also spent many years as a Privacy & Intellectual Property Counsel advising the direct marketing services, interactive advertising, and industrial chemicals industries. Andrew holds a Juris Doctor from Columbia University School of Law and a bachelor’s degree, cum laude, in Chemical Engineering from the University of Delaware. Andrew is a retired member of the State Bar of New York.

In the episode, Adel and Andrew explore GenAI's transformative impact on financial services, the democratization of AI tools, efficiency gains in product development, the importance of AI governance and data quality, the cultural shifts and regulatory landscapes shaping AI's future, and much more.

Links Mentioned in the Show:
• Mastercard
• Connect with Andrew
• Skill Track: Artificial Intelligence (AI) Leadership
• Related Episode: How Generative AI is Changing Leadership with Christie Smith, Founder of the Humanity Institute and Kelly Monahan, Managing Director, Research Institute
• Sign up to attend RADAR: Skills Edition
• New to DataCamp? Learn on the go using the DataCamp mobile app
• Empower your business with world-class data and AI skills with DataCamp for Business

As businesses collect more data than ever, the question arises: is bigger always better? Companies are beginning to question whether massive datasets and complex infrastructures are truly delivering results or just adding unnecessary costs. How can you align your data strategy with your actual needs? Could focusing on smaller, more manageable datasets improve efficiency and save resources while still delivering valuable insights?

Dr. Madelaine Daianu is the Head of Data & AI at Credit Karma, Inc. Before joining the company in June 2023, she served as Head of Data and Pricing at Belong Home, Inc. Earlier in her career, Daianu held numerous senior roles in data science and machine learning at The RealReal, Facebook, and Intuit. Daianu earned a Bachelor of Applied Science in Bioengineering and Mathematics from the University of Illinois at Chicago and a Ph.D. in Bioengineering and Biomedical Engineering from the University of California, Los Angeles.

In the episode, Richie and Madelaine explore generative AI applications at Credit Karma, the importance of data infrastructure, the role of explainability in fintech, strategies for scaling AI processes, and much more.

Links Mentioned in the Show:
• Credit Karma
• Connect with Maddie
• Skill Track: AI Business Fundamentals
• Related Episode: Effective Product Management for AI with Marily Nika, Gen AI Product Lead at Google Assistant
• Sign up to attend RADAR: Skills Edition
• New to DataCamp? Learn on the go using the DataCamp mobile app
• Empower your business with world-class data and AI skills with DataCamp for Business

Generative AI with SAP and Amazon Bedrock: Utilizing GenAI with SAP and AWS Business Use Cases

Explore Generative AI and understand its key concepts, architecture, and tangible business use cases. This book will help you develop the skills needed to use the SAP AI Core service features available in the SAP Business Technology Platform. You’ll examine large language model (LLM) concepts and gain the practical knowledge to make the best use of GenAI. As you progress, you’ll learn how to get started with your own LLM models and work with Generative AI use cases. Additionally, you’ll see how to take advantage of the Amazon Bedrock stack using the AWS SDK for ABAP. To fully leverage your knowledge, Generative AI with SAP and Amazon Bedrock offers practical step-by-step instructions for establishing a cloud SAP BTP account model and creating your first GenAI artifacts. This work is an important prerequisite for those who want to take full advantage of generative AI with SAP.

What You Will Learn
• Master the concepts and terminology of artificial intelligence and GenAI
• Understand opportunities and impacts for different industries with GenAI
• Become familiar with SAP AI Core, Amazon Bedrock, and the AWS SDK for ABAP, and develop your first GenAI projects
• Accelerate your development skills
• Gain productivity and save time when implementing GenAI use cases

Who This Book Is For
Anyone who wants to learn about Generative AI for the enterprise, and SAP practitioners who want to take advantage of AI within the SAP ecosystem to support their systems and workflows.
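The book itself works through the AWS SDK for ABAP, but to give a feel for what a Bedrock call involves, here is a hedged sketch in Python using boto3 (not from the book; the model ID, prompt, and region are placeholders, and running it requires AWS credentials plus Bedrock model access in your account):

```python
# Illustrative sketch only: invoking a text model on Amazon Bedrock from Python.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 200,
    "messages": [
        {"role": "user", "content": "Summarize this purchase order in one sentence: ..."}
    ],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID; any enabled Bedrock model works
    body=json.dumps(body),
    contentType="application/json",
    accept="application/json",
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```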

Today, we’re joined by Zach Wasserman, Co-Founder of Fleet, open-source device management for IT and security teams with thousands of laptops and servers. We talk about:
• Best ways to build trust with users
• Impacts of AI on open source, including using gen AI to describe human-created queries
• Cross-platform endpoint management
• Determining the scope of device management with BYOD & less traditional computing devices
• Device management surprises

Summary
In this episode of the Data Engineering Podcast, Bartosz Mikulski talks about preparing data for AI applications. Bartosz shares his journey from data engineering to MLOps and emphasizes the importance of data testing over software development in AI contexts. He discusses the types of data assets required for AI applications, including extensive test datasets, especially in generative AI, and explains the differences in data requirements for various AI application styles. The conversation also explores the skills data engineers need to transition into AI, such as familiarity with vector databases and new data modeling strategies, and highlights the challenges of evolving AI applications, including frequent reprocessing of data when changing chunking strategies or embedding models.
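As a concrete illustration of the reprocessing point, here is a minimal sketch (not from the episode; the function, parameters, and document placeholder are invented) of a fixed-size chunking strategy with overlap, the kind of preprocessing step that has to be re-run whenever chunk sizes or embedding models change:

```python
# A minimal, assumption-laden sketch of character-window chunking for embedding pipelines.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

document = "..."  # placeholder for a real document pulled from your pipeline
for i, chunk in enumerate(chunk_text(document, chunk_size=500, overlap=50)):
    print(i, len(chunk))
```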

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
Your host is Tobias Macey and today I'm interviewing Bartosz Mikulski about how to prepare data for use in AI applications.

Interview
• Introduction
• How did you get involved in the area of data management?
• Can you start by outlining some of the main categories of data assets that are needed for AI applications?
• How does the nature of the application change those requirements? (e.g. RAG app vs. agent, etc.)
• How do the different assets map to the stages of the application lifecycle?
• What are some of the common roles and divisions of responsibility that you see in the construction and operation of a "typical" AI application?
• For data engineers who are used to data warehousing/BI, what are the skills that map to AI apps?
• What are some of the data modeling patterns that are needed to support AI apps? (chunking strategies, metadata management)
• What are the new categories of data that data engineers need to manage in the context of AI applications? (agent memory generation/evolution, conversation history management, data collection for fine tuning)
• What are some of the notable evolutions in the space of AI applications and their patterns that have happened in the past ~1-2 years that relate to the responsibilities of data engineers?
• What are some of the skills gaps that teams should be aware of and identify training opportunities for?
• What are the most interesting, innovative, or unexpected ways that you have seen data teams address the needs of AI applications?
• What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI applications and their reliance on data?
• What are some of the emerging trends that you are paying particular attention to?

Contact Info
• Website
• LinkedIn

Parting Question
• From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
• Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
• Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
• If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links
• Spark
• Ray
• Chunking Strategies
• Hypothetical document embeddings
• Model Fine Tuning
• Prompt Compression

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

In this podcast episode, we talked with Alexander Guschin about launching a career off Kaggle.

About the Speaker: Alexander Guschin is a Machine Learning Engineer with 10+ years of experience, a Kaggle Grandmaster ranked 5th globally, and a teacher to 100K+ students. He leads DS and SE teams and contributes to open-source ML tools.

0:00 Starting with Machine Learning: Challenges and Early Steps
13:05 Community and Learning Through Kaggle Sessions
17:10 Broadening Skills Through Kaggle Participation
18:54 Early Competitions and Lessons Learned
21:10 Transitioning to Simpler Solutions Over Time
23:51 Benefits of Kaggle for Starting a Career in Machine Learning
29:08 Teamwork vs. Solo Participation in Competitions
31:14 Schoolchildren in AI Competitions
42:33 Transition to Industry and MLOps
50:13 Encouraging teamwork in student projects
50:48 Designing competitive machine learning tasks
52:22 Leaderboard types for tracking performance
53:44 Managing small-scale university classes
54:17 Experience with Coursera and online teaching
59:40 Convincing managers about Kaggle's value
61:38 Secrets of Kaggle competition success
63:11 Generative AI's impact on competitive ML
65:13 Evolution of automated ML solutions
66:22 Reflecting on competitive data science experience

🔗 CONNECT WITH ALEXANDER GUSCHIN
LinkedIn - https://www.linkedin.com/in/1aguschin/
Website - https://www.aguschin.com/

🔗 CONNECT WITH DataTalksClub
Join DataTalks.Club: https://datatalks.club/slack.html
Our events: https://datatalks.club/events.html
Datalike Substack: https://datalike.substack.com/
LinkedIn: /datatalks-club

Building AI-Powered Products

Drawing from her experience at Google and Meta, Dr. Marily Nika delivers the definitive guide for product managers building AI and GenAI powered products. Packed with smart strategies, actionable tools, and real-world examples, this book breaks down the complex world of AI agents and generative AI products into a playbook for driving innovation, helping product leaders bridge the gap between niche AI and GenAI technologies and user pain points. Whether you're already leading product teams or are an aspiring product manager, and regardless of your prior knowledge of AI, this guide will empower you to confidently navigate every stage of the AI product lifecycle.
• Confidently manage AI product development with tools, frameworks, strategic insights, and real-world examples from Google, Meta, OpenAI, and more
• Lead product orgs to solve real problems via agentic AI and GenAI capabilities
• Gain AI awareness and technical fluency to work with AI models, LLMs, and the algorithms that power them; get cross-functional alignment; make strategic trade-offs; and set OKRs

Thought leadership is more than just a buzzword—it's a strategic tool that can significantly influence business decisions and relationships. But what makes thought leadership effective? How do you ensure your insights are not only heard but also trusted and acted upon? What role does generative AI play in enhancing the storytelling process, and how can it be leveraged to create compelling narratives that resonate with your audience?

Cindy Anderson is the Chief Marketing Officer/Global Lead for Engagement & Eminence at the IBM Institute for Business Value (IBV). She has co-authored research reports, published numerous articles, and delivered presentations on thought leadership, diversity, strategy implementation, project management, and technology to global audiences. She oversees a team of 30 editors, designers, and social media/email marketers. She is a founding board member of the Global Thought Leadership Institute at APQC, a new association that advances the practice of thought leadership.

Anthony Marshall is the Chair of the Board of Advisors for The Global Thought Leadership Institute at APQC and the Senior Research Director of thought leadership at the IBM Institute for Business Value (IBV), leading the top-rated thought leadership and analysis program. He oversees a global team of 60 technology and industry experts, statisticians, economists, and analysts. Anthony conducts original thought leadership and has authored dozens of refereed articles and studies on topics including generative AI, innovation, digital and business transformation and ecosystems, open collaboration and skills.

In the episode, Richie, Cindy, and Anthony explore the framework for thought leadership storytelling, the role of generative AI in thought leadership, the ROI of thought leadership, building trust and quality in research, and much more.

Links Mentioned in the Show:
• The ROI of Thought Leadership book by Cindy and Anthony
• APQC
• Connect with Cindy and Anthony
• Skill Track: Artificial Intelligence (AI) Leadership
• Related Episode: How Generative AI is Changing Leadership with Christie Smith, Founder of the Humanity Institute and Kelly Monahan, Managing Director, Research Institute
• Sign up to RADAR: Skills Edition
• New to DataCamp? Learn on the go using the DataCamp mobile app
• Empower your business with world-class data and AI skills with DataCamp for Business