talk-data.com

Topic: GenAI (Generative AI)

Tags: ai, machine_learning, llm

1517 tagged activities

Activity Trend: 192 peak/qtr (2020-Q1 to 2026-Q1)

Activities

1517 activities · Newest first

Rapid changes demand innovative decision-making tools beyond traditional methods. Businesses are turning to AI, BI, and data science to gain a competitive edge. The perfect blend of these technologies can be a true differentiator.

- Take a quick look at what to expect from this session
- Challenges in data and analytics today
- Unlocking the power of AI, BI, and data science
- The transformative role of AI-powered self-service BI platforms
- Live demos of next-generation analytics in action

Learn how these innovations can drive better decisions to deliver transformative business outcomes.

AI architects are challenged to define a technology roadmap in an environment shaped by the rapid pace of AI technology (generative AI initiatives in particular), evolving product capabilities, and the need for up-skilling, all with the goal of improving business outcomes. This session provides the frameworks, architectures, and tools to define such a roadmap.

Moving AI projects from pilot to production requires substantial effort for most enterprises. AI Engineering provides the foundation for enterprise delivery of AI and generative AI solutions at scale by unifying DataOps, MLOps, and DevOps practices. This session highlights AI engineering best practices across these dimensions, covering people, processes, and technology.

Today, I'm chatting with Stuart Winter-Tear about AI product management. We're getting into the nitty-gritty of what it takes to build and launch LLM-powered products for the commercial market that actually produce value. Among other things in this rich conversation, Stuart surprised me with the level of importance he believes UX has in making LLM-powered products successful, even for technical audiences.

After spending significant time at the forefront of AI’s breakthroughs, Stuart believes many of the products we’re seeing today are the result of FOMO above all else. He shares a belief that I’ve emphasized time and time again on the podcast: product is about the problem, not the solution. This design philosophy has informed Stuart’s 20-plus-year career, and it is pivotal to understanding how to best use AI to build products that meet users’ needs.

Highlights / Skip to:

- Why Stuart was asked to speak to the House of Lords about AI (2:04)
- The LLM-powered products Stuart has been building recently (4:20)
- Finding product-market fit with AI products (7:44)
- Lessons Stuart has learned over the past two years working with LLM-powered products (10:54)
- Figuring out how to build user trust in your AI products (14:40)
- The differences between being a digital product manager vs. an AI product manager (18:13)
- Who is best suited for an AI product management role (25:42)
- Why Stuart thinks user experience matters greatly with AI products (32:18)
- The formula needed to create a business-viable AI product (38:22)
- The skills and roles Stuart thinks are essential on an AI product team, and who he brings on first (50:53)
- Conversations that need to be had with academics and data scientists when building AI-powered products (54:04)
- Final thoughts from Stuart and where you can find more from him (58:07)

Quotes from Today’s Episode

“I think that the core dream with GenAI is getting data out of IT hands and back to the business. Finding a way to overlay all this disparate, unstructured data and [translate it] to the human language is revolutionary. We’re finding industries that you would think were more conservative (i.e. medical, legal, etc.) are probably the most interested because of the large volumes of unstructured data they have to deal with. People wouldn’t expect large language models to be used for fact-checking… they’re actually very powerful, especially if you can have your own proprietary data or pipelines. Same with security: although large language models introduce a terrifying amount of security problems, they can also be used in reverse to augment security. There’s a lovely contradiction with this technology that I do enjoy.” - Stuart Winter-Tear (5:58)

“[LLM-powered products] gave me the wow factor, and I think that’s part of what’s caused the problem. If we focus on technology, we build more technology, but if we focus on business and customers, we’re probably going to end up with more business and customers. This is why we end up with so many products that are effectively solutions in search of problems. We’re in this rush and [these products] are [based on] FOMO. We’re leaving behind what we understood about [building] products—as if [an LLM-powered product] is a special piece of technology. It’s not. It’s another piece of technology. [Designers] should look at this technology from the prism of the business and from the prism of the problem. We love to solutionize, but is the problem the problem? What’s the context of the problem? What’s the problem under the problem? Is this problem worth solving, and is GenAI a desirable way to solve it? We’re putting the cart before the horse.” - Stuart Winter-Tear (11:11)

“[LLM-powered products] feel most amazing when you’re not a domain expert in whatever you’re using it for. I’ll give you an example: I’m terrible at coding. When I got my hands on Cursor, I felt like a superhero. It was unbelievable what I could build. Although [LLM products] look most amazing in the hands of non-experts, it’s actually most powerful in the hands of experts who do understand the domain they’re using this technology [in]. Perhaps I want to do a product strategy, so I ask [the product] for some assistance, and it can get me 70% of the way there. [LLM products] are great as a jumping off point… but ultimately [they are] only powerful because I have certain domain expertise.” - Stuart Winter-Tear (13:01)

“We’re so used to the digital paradigm. The deterministic nature of you put in X, you get out Y; it’s the same every time. Probabilistic changes every time. There is a huge difference between what results you might be getting in the lab compared to what happens in the real world. You effectively find yourself building [AI products] live, and in order to do that, you need good communities and good feedback available to you. You need these fast feedback loops. From a pure product management perspective, we used to just have the [engineering] timeline… Now, we have [the data research timeline]. If you’re dealing with cutting-edge products, you’ve got these two timelines that you’re trying to put together, and the data research one is very unpredictable. It’s the nature of research. We don’t necessarily know when we’re going to get to where we want to be.” - Stuart Winter-Tear (22:25)

“I believe that UX will become the #1 priority for large language model products. I firmly believe whoever wins in UX will win in this large language model product world. I’m against fully autonomous agents without human intervention for knowledge work. We need that human in the loop. What was the intent of the user? How do we get that right push back from the large language model to understand even the level of the person that they’re dealing with? These are fundamental UX problems that are going to push UX to the forefront… This is going to be on UX to educate the user, to be able to inject the user in at the right time to be able to make this stuff work. The UX folk who do figure this out are going to create the breakthrough and create the mass adoption.” - Stuart Winter-Tear (33:42)

CDAOs and AI leaders often struggle to get started with GenAI. Attend this session to understand the first critical components you need to build or buy: Data, AI Engineering tools, a search and retrieval system, the application, and the right types of models. With these building blocks, you can build several working GenAI prototypes to help you prove the value and justify further investments.
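The building blocks listed above (data, a search-and-retrieval system, the application, and a model) can be sketched as a toy retrieval-augmented prototype. Everything here is illustrative and invented for this sketch: the bag-of-words "embedding" stands in for a real embedding model, the `llm` callable stands in for a real model API, and the sample corpus is made up.

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for an embedding model: a token-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Search-and-retrieval component: rank documents against the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def answer(query, corpus, llm=lambda prompt: f"[model response to: {prompt[:40]}...]"):
    """Application layer: ground the model call in retrieved context."""
    context = "\n".join(retrieve(query, corpus))
    return llm(f"Context:\n{context}\n\nQuestion: {query}")

# Data component: a hypothetical mini-corpus of internal documents.
corpus = [
    "Quarterly revenue grew 12% driven by cloud services.",
    "The onboarding guide covers laptop setup and VPN access.",
    "Cloud services margin improved after the datacenter migration.",
]
print(retrieve("cloud revenue growth", corpus, k=1))
```

Swapping the toy pieces for real ones (an embedding model, a vector store, an LLM endpoint) turns this skeleton into the kind of working prototype the session describes, which is usually enough to prove value before larger investments.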

Organisations adopting a Data Mesh framework often face challenges in ensuring regulatory compliance, transforming data assets into scalable products, and maintaining governance. Explore how NatWest addresses these complexities by integrating knowledge graphs with GenAI and LLMs to enhance data discovery, enforce governance policies, and accelerate product development. Learn how this approach strengthens regulatory data qualifications, automates metadata management, and delivers faster, more reliable insights to build and scale AI-driven data products, yielding a potential 10x efficiency gain.

CDAOs are making investments daily. Perhaps you're looking to grow your team, or maybe making a technology investment to support GenAI, or another investment where you need to build buy-in and gain funding. This workshop will help you develop personal influence skills while also building a strong story for investment.

D&A value is not possible without data storytelling, which offers a more engaging way to communicate findings than BI reporting or data science notebooks alone. Join this session to learn the fundamentals of data storytelling and how to bridge the gap between data science practitioners and decision makers. It also covers how to craft effective data stories and how to scale data storytelling for the future in the GenAI landscape.

GenAI may excel in some areas but fail in others, complicating the business case. CIOs often underestimate costs, which can scale by 500%-1000%. Productivity is a key benefit but hard to prove to CFOs. Should you invest in chat-style GenAI for 100K employees or near-custom AI for 50 R&D staff? Join our session to master cost scaling, value harvesting beyond productivity, and creating a portfolio of GenAI investment options.

To scale generative AI and drive real value, enterprises need to get the most from their data. Business leaders must shift their focus to creating efficient, secure and scalable data foundations to train their AI. Given that 80% of enterprise data is unstructured, driving performant and accurate AI requires a strategy to unify structured and unstructured data as well as solutions to enable data access verification, integration and governance across disparate environments. Hear how leading organizations are successfully leveraging their data to fuel growth and trust across their business.

Alessandro Allini, Head of Data Governance at Crédit Agricole Italia, and Stephen Brobst, CTO at Ab Initio, discuss the revolutionary impact of shifting from treating “Data as an Asset” to “Data as a Product”. You will gain insights on designing, manufacturing, and measuring data success with this approach. Alessandro will discuss the corporate journey toward Data Products that is empowering business knowledge workers with self-service capabilities and optimized data design for seamless consumption. Stephen will reveal how AI co-pilots accelerate data product creation and drive innovation.

Enterprise-grade GenAI needs a unified data strategy for accurate, reliable results. Learn how knowledge graphs make structured and unstructured data AI-ready while enabling governance and transparency. See how GraphRAG (retrieval-augmented generation with knowledge graphs) drives real success: a major gaming company achieved 10x faster insights, while Data2 cut workloads by 50%. Discover how knowledge graphs and GraphRAG create a foundation for trustworthy agentic AI systems across retail, healthcare, finance, and more.
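The GraphRAG idea described above (retrieval that walks knowledge-graph edges to collect grounding facts before calling a model) can be illustrated with a minimal sketch. The graph, entity names, and relations below are invented for illustration and are not from any real product or the cases cited in the blurb.

```python
# Hypothetical mini knowledge graph: entity -> {relation: [targets]}.
graph = {
    "GenAI": {"requires": ["unified data strategy"]},
    "knowledge graph": {"links": ["structured data", "unstructured data"],
                        "enables": ["governance", "transparency"]},
    "GraphRAG": {"combines": ["knowledge graph", "retrieval-augmented generation"]},
}

def expand(entity, depth=1):
    """Walk the graph outward from an entity, collecting grounding facts
    as (source --relation--> target) statements."""
    facts, frontier = [], [entity]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for relation, targets in graph.get(node, {}).items():
                for t in targets:
                    facts.append(f"{node} --{relation}--> {t}")
                    next_frontier.append(t)
        frontier = next_frontier
    return facts

# Facts pulled along graph edges become the retrieval context handed to the
# model, instead of (or alongside) raw text chunks from a vector search.
context = expand("GraphRAG", depth=2)
```

Because every retrieved fact is an explicit edge, the model's context is traceable back to governed graph entities, which is the transparency and governance property the session attributes to knowledge-graph-backed retrieval.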

podcast_episode
by Scott Abrahams (Louisiana State University) , Frank Levy (Duke University (Fuqua School of Business)) , Cris deRitis , Mark Zandi (Moody's Analytics) , Marisa DiNatale (Moody's Analytics)

Will generative artificial intelligence lead to nirvana or dystopia? Great question, which we don’t exactly answer in this week’s podcast, but we do weigh the most critical downstream effects of the technology based on recent research done by urban economists Frank Levy and Scott Abrahams. We assess how GenAI impacts the benefits of a college degree, the nation’s political dynamics, and which metro area economies will win (think Savannah) and lose (think San Francisco).

Guests: Frank Levy, Visitor in the Strategy Group of the Fuqua School of Business, Duke University, and Scott Abrahams, Professor of Economics at Louisiana State University

Read Frank and Scott's recent research on GenAI here: From San Francisco to Savannah? The Downstream Effects of Generative AI (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4874104)

Hosts: Mark Zandi – Chief Economist, Moody’s Analytics; Cris deRitis – Deputy Chief Economist, Moody’s Analytics; Marisa DiNatale – Senior Director, Head of Global Forecasting, Moody’s Analytics

Follow Mark Zandi on X, BlueSky or LinkedIn @MarkZandi, Cris deRitis on LinkedIn, and Marisa DiNatale on LinkedIn.

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Data science isn't just about models and code—it's about people, connections, and shared knowledge. From online forums to in-person hackathons, communities play a crucial role in shaping careers and innovations. In this episode, we're joined by Yujian Tang, an expert in building and fostering data-driven communities, to discuss how these spaces can help you grow and why developer advocacy is a great role for people passionate about both building and fostering community. Whether you're looking to expand your network, break into data science, or grow your own community, this episode is packed with actionable insights from someone who has done it all.

What You'll Learn:

- How getting involved in different communities can accelerate your career and open up unexpected opportunities
- The key differences between various data communities and how to find the right fit
- What it takes to build and nurture a thriving community of your own
- The evolving role of the Developer Advocate in growing data product visibility through community

Register for free to be part of the next live session: https://bit.ly/3XB3A8b

Interested in learning more about GenAI? 👉 https://lu.ma/oss4ai

Follow us on socials: LinkedIn · YouTube · Instagram (Mavens of Data) · Instagram (Maven Analytics) · TikTok · Facebook · Medium · X/Twitter

Supported by Our Partners:

- Modal — The cloud platform for building AI applications
- CodeRabbit — Cut code review time and bugs in half. Use the code PRAGMATIC to get one month free.

What happens when LLMs meet real-world codebases? In this episode of The Pragmatic Engineer, I am joined by Varun Mohan, CEO and Co-Founder of Windsurf. Varun talks me through the technical challenges of building an AI-native IDE (Windsurf), and how these tools are changing the way software gets built. We discuss:

- What building self-driving cars taught the Windsurf team about evaluating LLMs
- How LLMs for text are missing capabilities for coding, like “fill in the middle”
- How Windsurf optimizes for latency
- Windsurf’s culture of taking bets and learning from failure
- Breakthroughs that led to Cascade (agentic capabilities)
- Why the Windsurf team builds their own LLMs
- How non-dev employees at Windsurf build custom SaaS apps with Windsurf!
- How Windsurf empowers engineers to focus on more interesting problems
- The skills that will remain valuable as AI takes over more of the codebase
- And much more!
Timestamps:

- (00:00) Intro
- (01:37) How Windsurf tests new models
- (08:25) Windsurf’s origin story
- (13:03) The current size and scope of Windsurf
- (16:04) The missing capabilities Windsurf uncovered in LLMs when used for coding
- (20:40) Windsurf’s work with fine-tuning inside companies
- (24:00) Challenges developers face with Windsurf and similar tools as codebases scale
- (27:06) Windsurf’s stack and an explanation of FedRAMP compliance
- (29:22) How Windsurf protects latency and the problems with local data that remain unsolved
- (33:40) Windsurf’s processes for indexing code
- (37:50) How Windsurf manages data
- (40:00) The pros and cons of embedding databases
- (42:15) “The split brain situation”: how Windsurf balances present and long-term
- (44:10) Why Windsurf embraces failure and the learnings that come from it
- (46:30) Breakthroughs that fueled Cascade
- (48:43) The insider’s developer mode that allows Windsurf to dogfood easily
- (50:00) Windsurf’s non-developer power user who routinely builds apps in Windsurf
- (52:40) Which SaaS products won’t likely be replaced
- (56:20) How engineering processes have changed at Windsurf
- (1:00:01) The fatigue that goes along with being a software engineer, and how AI tools can help
- (1:02:58) Why Windsurf chose to fork VS Code and built a plugin for JetBrains
- (1:07:15) Windsurf’s language server
- (1:08:30) The current use of MCP and its shortcomings
- (1:12:50) How coding used to work in C#, and how MCP may evolve
- (1:14:05) Varun’s thoughts on vibe coding and the problems non-developers encounter
- (1:19:10) The types of engineers who will remain in demand
- (1:21:10) How AI will impact the future of software development jobs and the software industry
- (1:24:52) Rapid fire round

The Pragmatic Engineer deepdives relevant for this episode:

- IDEs with GenAI features that Software Engineers love
- AI tooling for Software Engineers in 2024: reality check
- How AI-assisted coding will change software engineering: hard truths
- AI tools for software engineers, but without the hype

See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

The roles within AI engineering are as diverse as the challenges they tackle. From integrating models into larger systems to ensuring data quality, the day-to-day work of AI professionals is anything but routine. How do you navigate the complexities of deploying AI applications? What are the key steps from prototype to production? For those looking to refine their processes, understanding the full lifecycle of AI development is essential. Let's delve into the intricacies of AI engineering and the strategies that lead to successful implementation. Maxime Labonne is a Senior Staff Machine Learning Scientist at Liquid AI, serving as the head of post-training. He holds a Ph.D. in Machine Learning from the Polytechnic Institute of Paris and is recognized as a Google Developer Expert in AI/ML. An active blogger, he has made significant contributions to the open-source community, including the LLM Course on GitHub, tools such as LLM AutoEval, and several state-of-the-art models like NeuralBeagle and Phixtral. He is the author of the best-selling book “Hands-On Graph Neural Networks Using Python,” published by Packt. Paul-Emil Iusztin designs and implements modular, scalable, and production-ready ML systems for startups worldwide. He has extensive experience putting AI and generative AI into production. Previously, Paul was a Senior Machine Learning Engineer at Metaphysic.ai and a Machine Learning Lead at Core.ai. He is a co-author of The LLM Engineer's Handbook, a best seller in the GenAI space. In the episode, Richie, Maxime, and Paul explore misconceptions in AI application development, the intricacies of fine-tuning versus few-shot prompting, the limitations of current frameworks, the roles of AI engineers, the importance of planning and evaluation, the challenges of deployment, and the future of AI integration, and much more. 
Links Mentioned in the Show:

- Maxime’s LLM Course on HuggingFace
- Maxime and Paul’s Code Alongs on DataCamp
- Decoding ML on Substack
- Connect with Maxime and Paul
- Skill Track: AI Fundamentals
- Related Episode: Building Multi-Modal AI Applications with Russ d'Sa, CEO & Co-founder of LiveKit
- Rewatch sessions from RADAR: Skills Edition

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.

In this episode, we chat about the impacts of artificial intelligence on cybersecurity: the risks and opportunities generative AI brings to the sector, the reality behind the promises of intelligent solutions, and the skills that will be indispensable for anyone looking to build a solid career in this field over the coming years. Our guest is Claudionor Coelho, a global reference in AI and cybersecurity, with experience at the World Economic Forum and executive roles both in and outside Brazil. Claudionor offers a practical, strategic view of how AI is shaping the present and future of information security, and of whether we can really trust it as an ally. Remember, you can find all the Data Hackers community podcasts on Spotify, iTunes, Google Podcasts, Castbox, and many other platforms.

Featured in this episode:

- Claudionor Coelho — Chief AI Officer at Zscaler | GenAI Leader and Strategic Executive, Investor | Xoogler

Our panel — Data Hackers:

- Monique Femme — Head of Community Management at Data Hackers
- Paulo Vasconcellos — Co-founder of Data Hackers and Principal Data Scientist at Hotmart

References:

Amazon Redshift Cookbook - Second Edition

Amazon Redshift Cookbook provides practical techniques for utilizing AWS's managed data warehousing service effectively. With this book, you'll learn to create scalable and secure data analytics solutions, tackle data integration challenges, and leverage Redshift's advanced features like data sharing and generative AI capabilities.

What this book will help me do:

- Create end-to-end data analytics solutions from ingestion to reporting using Amazon Redshift.
- Optimize the performance and security of Redshift implementations to meet enterprise standards.
- Leverage Amazon Redshift for zero-ETL ingestion and advanced concurrency scaling.
- Integrate Redshift with data lakes for enhanced data processing versatility.
- Implement generative AI and machine learning solutions directly within Redshift environments.

Author(s): Shruti Worlikar, Harshida Patel, and Anusha Challa are seasoned data experts who bring together years of experience with Amazon Web Services and data analytics. Their combined expertise enables them to offer actionable insights, hands-on recipes, and proven strategies for implementing and optimizing Amazon Redshift-based solutions.

Who is it for? This book is best suited for data analysts, data engineers, and architects who are keen on mastering modern data warehouse solutions using Redshift. Readers should have some knowledge of data warehousing and familiarity with cloud concepts. Ideal for professionals looking to migrate on-premises systems or build cloud-native analytics pipelines leveraging Redshift.