The AI landscape is evolving at breakneck speed, with new capabilities emerging quarterly that redefine what's possible. For professionals across industries, this creates a constant need to reassess workflows and skills. How do you stay relevant when the technology keeps leapfrogging itself? What happens to traditional roles when AI can increasingly handle complex tasks that once required specialized expertise? With product-market fit becoming a moving target and new positions like forward-deployed engineers emerging, understanding how to navigate this shifting terrain is crucial. The winners won't just be those who adopt AI, but those who can continuously adapt as it evolves. Tomasz Tunguz is a General Partner at Theory Ventures, a $235m early-stage venture capital firm. He blogs at tomtunguz.com & co-authored Winning with Data. He has worked or works with Looker, Kustomer, Monte Carlo, Dremio, Omni, Hex, Spot, Arbitrum, Sui & many others. He was previously the product manager for Google's social media monetization team, including the Google-MySpace partnership, and managed the launches of AdSense into six new markets in Europe and Asia. Before Google, Tunguz developed systems for the Department of Homeland Security at Appian Corporation. In the episode, Richie and Tom explore the rapid investment in AI, the evolution of AI models like Gemini 3, the role of AI agents in productivity, the shifting job market, the impact of AI on customer success and product management, and much more.

Links Mentioned in the Show:
- Theory Ventures
- Connect with Tom
- Tom’s Blog
- Gavin Baker on Medium
- AI-Native Course: Intro to AI for Work
- Related Episode: Data & AI Trends in 2024, with Tom Tunguz, General Partner at Theory Ventures
- Rewatch RADAR AI

New to DataCamp?
- Learn on the go using the DataCamp mobile app
- Empower your business with world-class data and AI skills with DataCamp for business
The relationship between data governance and AI quality is more critical than ever. As organizations rush to implement AI solutions, many are discovering that without proper data hygiene and testing protocols, they're building on shaky foundations. How do you ensure your AI systems are making decisions based on accurate, appropriate information? What benchmarking strategies can help you measure real improvement rather than just increased output? With AI now touching everything from code generation to legal documents, the consequences of poor quality control extend far beyond simple errors—they can damage reputation, violate regulations, or even put licenses at risk. David Colwell is the Vice President of Artificial Intelligence and Machine Learning at Tricentis, a global leader in continuous testing and quality engineering. He founded the company’s AI division in 2018 with a mission to make quality assurance more effective and engaging through applied AI innovation. With over 15 years of experience in AI, software testing, and automation, David has played a key role in shaping Tricentis’ intelligent testing strategy. His team developed Vision AI, a patented computer vision–based automation capability within Tosca, and continues to pioneer work in large language model agents and AI-driven quality engineering. Before joining Tricentis, David led testing and innovation initiatives at DX Solutions and OnePath, building automation frameworks and leading teams to deliver scalable, AI-enabled testing solutions. Based in Sydney, he remains focused on advancing practical, trustworthy applications of AI in enterprise software development. In the episode, Richie and David explore AI disasters in legal settings, the balance between AI productivity and quality, the evolving role of data scientists, and the importance of benchmarks and data governance in AI development, and much more. 
Links Mentioned in the Show:
- Tricentis 2025 Quality Transformation Report
- Connect with David
- Course: Artificial Intelligence (AI) Leadership
- Related Episode: Building & Managing Human+Agent Hybrid Teams with Karen Ng, Head of Product at HubSpot
- Rewatch RADAR AI
Combining LLMs with enterprise knowledge bases is creating powerful new agents that can transform business operations. These systems are dramatically improving on traditional chatbots by understanding context, following conversations naturally, and accessing up-to-date information. But how do you effectively manage the knowledge that powers these agents? What governance structures need to be in place before deployment? And as we look toward a future with physical AI and robotics, what fundamental computing challenges must we solve to ensure these technologies enhance rather than complicate our lives? Jun Qian is an accomplished technology leader with extensive experience in artificial intelligence and machine learning. Currently serving as Vice President of Generative AI Services at Oracle since May 2020, Jun founded and leads the Engineering and Science group, focusing on the creation and enhancement of Generative AI services and AI Agents. Previously held roles include Vice President of AI Science and Development at Oracle, Head of AI and Machine Learning at Sift, and Principal Group Engineering Manager at Microsoft, where Jun co-founded Microsoft Power Virtual Agents. Jun's career also includes significant contributions as the Founding Manager of Amazon Machine Learning at AWS and as a Principal Investigator at Verizon. In the episode, Richie and Jun explore the evolution of AI agents, the unique features of ChatGPT, the challenges and advancements in chatbot technology, the importance of data management and security in AI, and the future of AI in computing and robotics, and much more.

Links Mentioned in the Show:
- Oracle
- Connect with Jun
- Course: Introduction to AI Agents
- Jun at DataCamp RADAR
- Related Episode: A Framework for GenAI App and Agent Development with Jerry Liu, CEO at LlamaIndex
- Rewatch RADAR AI
The enterprise adoption of AI agents is accelerating, but significant challenges remain in making them truly reliable and effective. While coding assistants and customer service agents are already delivering value, more complex document-based workflows require sophisticated architectures and data processing capabilities. How do you design agent systems that can handle the complexity of enterprise documents with their tables, charts, and unstructured information? What's the right balance between general reasoning capabilities and constrained architectures for specific business tasks? Should you centralize your agent infrastructure or purchase vertical solutions for each department? The answers lie in understanding the fundamental trade-offs between flexibility, reliability, and the specific needs of your organization. Jerry Liu is the CEO and Co-founder at LlamaIndex, the AI agents platform for automating document workflows. Previously, he led the ML monitoring team at Robust Intelligence, did self-driving AI research at Uber ATG, and worked on recommendation systems at Quora. In the episode, Richie and Jerry explore the readiness of AI agents for enterprise use, the challenges developers face in building these agents, the importance of document processing and data structuring, the evolving landscape of AI agent frameworks like LlamaIndex, and much more.

Links Mentioned in the Show:
- LlamaIndex
- LlamaIndex Production Ready Framework For LLM Agents
- Tutorial: Model Context Protocol (MCP)
- Connect with Jerry
- Course: Retrieval Augmented Generation (RAG) with LangChain
- Related Episode: RAG 2.0 and The New Era of RAG Agents with Douwe Kiela, CEO at Contextual AI & Adjunct Professor at Stanford University
- Rewatch RADAR AI
Welcome to DataFramed Industry Roundups! In this series of episodes, Adel & Richie sit down to discuss the latest and greatest in data & AI. In this episode, we touch upon the launch of OpenAI’s o3 and o4-mini models, Meta’s rocky release of Llama 4, Google’s new agent tooling ecosystem, the growing arms race in AI, the latest from the Stanford AI Index report, the plausibility of AGI and superintelligence, how agents might evolve in the enterprise, global attitudes toward AI, and a deep dive into the speculative, but chilling, AI 2027 scenario. All that, Easter rave plans, and much more.

Links Mentioned in the Show:
- Introducing OpenAI o3 and o4-mini
- The Median: Scaling Models or Scaling People? Llama 4, A2A, and the State of AI in 2025
- Llama 4
- Google: Announcing the Agent2Agent Protocol (A2A)
- Stanford University's Human Centered AI Institute Releases 2025 AI Index Report
- AI 2027
- Rewatch sessions from RADAR: Skills Edition
The roles within AI engineering are as diverse as the challenges they tackle. From integrating models into larger systems to ensuring data quality, the day-to-day work of AI professionals is anything but routine. How do you navigate the complexities of deploying AI applications? What are the key steps from prototype to production? For those looking to refine their processes, understanding the full lifecycle of AI development is essential. Let's delve into the intricacies of AI engineering and the strategies that lead to successful implementation. Maxime Labonne is a Senior Staff Machine Learning Scientist at Liquid AI, serving as the head of post-training. He holds a Ph.D. in Machine Learning from the Polytechnic Institute of Paris and is recognized as a Google Developer Expert in AI/ML. An active blogger, he has made significant contributions to the open-source community, including the LLM Course on GitHub, tools such as LLM AutoEval, and several state-of-the-art models like NeuralBeagle and Phixtral. He is the author of the best-selling book “Hands-On Graph Neural Networks Using Python,” published by Packt. Paul-Emil Iusztin designs and implements modular, scalable, and production-ready ML systems for startups worldwide. He has extensive experience putting AI and generative AI into production. Previously, Paul was a Senior Machine Learning Engineer at Metaphysic.ai and a Machine Learning Lead at Core.ai. He is a co-author of The LLM Engineer's Handbook, a best seller in the GenAI space. In the episode, Richie, Maxime, and Paul explore misconceptions in AI application development, the intricacies of fine-tuning versus few-shot prompting, the limitations of current frameworks, the roles of AI engineers, the importance of planning and evaluation, the challenges of deployment, and the future of AI integration, and much more. 
Links Mentioned in the Show:
- Maxime’s LLM Course on HuggingFace
- Maxime and Paul’s Code Alongs on DataCamp
- Decoding ML on Substack
- Connect with Maxime and Paul
- Skill Track: AI Fundamentals
- Related Episode: Building Multi-Modal AI Applications with Russ d'Sa, CEO & Co-founder of LiveKit
- Rewatch sessions from RADAR: Skills Edition
Misconceptions about AI's capabilities and the role of data are everywhere. Many believe AI is a singular, all-knowing entity, when in reality, it's a collection of algorithms producing intelligence-like outputs. Navigating and understanding the history and evolution of AI, from its origins to today's advanced language models, is crucial. How do these developments, and misconceptions, impact your daily work? Are you leveraging the right tools for your needs, or are you caught up in the allure of cutting-edge technology without considering its practical application? Andriy Burkov is the author of three widely recognized books, The Hundred-Page Machine Learning Book, The Machine Learning Engineering Book, and recently The Hundred-Page Language Models Book. His books have been translated into a dozen languages and are used as textbooks in many universities worldwide. His work has impacted millions of machine learning practitioners and researchers. He holds a Ph.D. in Artificial Intelligence and is a recognized expert in machine learning and natural language processing. As a machine learning expert and leader, Andriy has successfully led dozens of production-grade AI projects in different business domains at Fujitsu and Gartner. Andriy is currently Machine Learning Lead at TalentNeuron. In the episode, Richie and Andriy explore misconceptions about AI, the evolution of AI from the 1950s, the relevance of 20th-century AI research, the role of linear algebra in AI, the resurgence of recurrent neural networks, advancements in large language model architectures, the significance of reinforcement learning, the reality of AI agents, and much more.
Links Mentioned in the Show:
- Andriy’s books: The Hundred-Page Machine Learning Book, The Hundred-Page Language Models Book
- TalentNeuron
- Connect with Andriy
- Skill Track: AI Fundamentals
- Related Episode: Unlocking Humanity in the Age of AI with Faisal Hoque, Founder and CEO of SHADOKA
- Rewatch sessions from RADAR: Skills Edition
Welcome to DataFramed Industry Roundups! In this series of episodes, Adel & Richie sit down to discuss the latest and greatest in data & AI. In this episode, we discuss the rise of reasoning LLMs like DeepSeek R1 and the competition shaping the AI space, OpenAI’s Operator and the broader push for AI agents to control computers, and the implications of massive AI infrastructure investments like Project Stargate. We also touch on Google’s overlooked AI advancements, the challenges of AI adoption, the potential of Replit’s mobile app for building apps with natural language, and much more.

Links Mentioned in the Show:
- YouTube Tutorial: Fine Tune DeepSeek R1 | Build a Medical Chatbot
- OpenAI Deep Research
- Open Operator
- Gemini 2.0
- Lex Fridman Podcast Episode on DeepSeek
- Removing Barriers to American Leadership in Artificial Intelligence
- President's Council of Advisors on Science and Technology
- Project Stargate announcements from OpenAI, Softbank
- Sam Altman's quest for $7tn
- Replit Mobile App
- Sign up to attend RADAR: Skills Edition
As multimodal AI continues to grow, professionals are exploring new skills to harness its potential. From understanding real-time APIs to navigating new application architectures, the landscape is shifting. How can developers stay ahead in this evolving field? What opportunities do AI agents present for automating tasks and enhancing productivity? And how can businesses ensure they're ready for the future of AI-driven interactions? Russ D'Sa is the CEO & Co-founder at LiveKit. Russ is building the transport layer for AI computing. He founded LiveKit, the company that powers voice chat for OpenAI and Character.ai. Previously, he was a Product Manager at Medium and an engineer at Twitter. He's also a serial entrepreneur, having previously founded mobile search platform Evie Labs. In the episode, Richie and Russ explore the evolution of voice AI, the challenges of building voice applications, the rise of video AI, the implications of deep fakes, the potential of AI-generated worlds, the future of AI in customer service and education, and much more.

Links Mentioned in the Show:
- LiveKit
- ChatGPT Voice
- Course: Developing LLM Applications with LangChain
- Related Episode: Creating High Quality AI Applications with Theresa Parker & Sudhi Balan, Rocket Software
- Rewatch sessions from RADAR: Forward Edition
As AI continues to advance, natural language processing (NLP) is at the forefront, transforming how businesses interact with data. From chatbots to document analysis, NLP offers numerous applications. But with the advent of generative AI, professionals face new challenges: When is it appropriate to use traditional NLP techniques versus more advanced models? How do you balance the costs and benefits of these technologies? Explore the strategic decisions and practical applications of NLP in the modern business world. Meri Nova is the founder of Break Into Data, a data careers company. Her work focuses on helping people switch to a career in data, and using machine learning to improve community engagement. Previously, she was a data scientist and machine learning engineer at Hyloc. Meri is the instructor of DataCamp's 'Retrieval Augmented Generation with LangChain' course. In the episode, Richie and Meri explore the evolution of natural language processing, the impact of generative AI on business applications, the balance between traditional NLP techniques and modern LLMs, the role of vector stores and knowledge graphs, and the exciting potential of AI in automating tasks and decision-making, and much more.

Links Mentioned in the Show:
- Meri’s Breaking Into Data Handbook on GitHub
- Break Into Data Discord Group
- Connect with Meri
- Skill Track: Artificial Intelligence (AI) Leadership
- Related Episode: Industry Roundup #2: AI Agents for Data Work, The Return of the Full-Stack Data Scientist and Old Languages Make a Comeback
- Rewatch sessions from RADAR: Forward Edition
2025 promises to be another transformative year for data and AI. From groundbreaking advancements in reasoning models to the rise of new challengers in generative AI, the field shows no signs of slowing down. Last week, Jonathan and Martijn graded their 2024 predictions and scored highly, but what's in store for 2025? Building on the insights from their 2024 predictions, we'll assess the future of generative AI, the evolving role of AI in education, the growing importance of synthetic data, and much more. In the episode, Richie, Jo, and Martijn discuss whether OpenAI and Google will maintain their dominance or face disruption from new players like Meta’s Llama and XAI’s Grok, the implications of recent breakthroughs in AI reasoning, the rise of short-form video generation AI in social media and advertising, the challenges Europe faces in keeping pace with the US and China in AI innovation, and much more.

Links Mentioned in the Show:
- Data & AI Trends & Predictions 2025
- Skill Track: AI Business Fundamentals
- Related Episode: Reviewing Our Data Trends & Predictions of 2024 with DataCamp's CEO & COO, Jonathan Cornelissen & Martijn Theuwissen
- Rewatch sessions from RADAR: Forward Edition
AI is not just about writing code; it's about improving the entire software development process. From generating documentation to automating code reviews, AI tools are becoming indispensable. But how do you ensure the quality of AI-generated code? What strategies can you employ to maintain high standards while leveraging AI's capabilities? These are the questions developers must consider as they incorporate AI into their workflows. Eran Yahav is an associate professor at the Computer Science Department at the Technion – Israel Institute of Technology and co-founder and CTO of Tabnine (formerly Codota). Prior to that, he was a research staff member at the IBM T.J. Watson Research Center in New York (2004-2010). He received his Ph.D. from Tel Aviv University (2005) and his B.Sc. from the Technion in 1996. His research interests include program analysis, program synthesis, and program verification. Eran is a recipient of the prestigious Alon Fellowship for Outstanding Young Researchers, the Andre Deloro Career Advancement Chair in Engineering, the 2020 Robin Milner Young Researcher Award (POPL talk here), the ERC Consolidator Grant as well as multiple best paper awards at various conferences. In the episode, Richie and Eran explore AI's role in software development, the balance between AI assistance and manual coding, the impact of generative AI on code review and documentation, the evolution of developer tools, and the future of AI-driven workflows, and much more.

Links Mentioned in the Show:
- Tabnine
- Connect with Eran
- Course: Working with the OpenAI API
- Related Episode: Getting Generative AI Into Production with Lin Qiao, CEO and Co-Founder of Fireworks AI
- Rewatch sessions from RADAR: Forward Edition
Welcome to DataFramed Industry Roundups! In this series of episodes, Adel & Richie sit down to discuss the latest and greatest in data & AI. In this episode, we touch upon the brewing rivalry between OpenAI and Anthropic, discuss Claude's new computer use feature, Google's NotebookLM and its implications for the UX/UI of AI products, and a lot more.

Links mentioned in the show:
- Chatbot Arena Leaderboard
- NotebookLM
- Anthropic Computer Use
- Introducing OpenAI o1-preview
With the recent rapid advancements in AI comes the challenge of navigating an ever-changing field of play, while ensuring the tech we use serves real-world needs. As AI becomes more ingrained in business and everyday life, how do we balance cutting-edge development with practicality and ethical responsibility? What steps are necessary to ensure AI’s growth benefits society, aligns with human values, and avoids potential risks? What similarities can we draw between the way we think, and the way AI thinks for us? Terry Sejnowski is one of the most influential figures in computational neuroscience. At the Salk Institute for Biological Studies, he runs the Computational Neurobiology Laboratory, and holds the Francis Crick Chair. At the University of California, San Diego, he is a Distinguished Professor and runs a neurobiology lab. Terry is also the President of the Neural Information Processing Systems (NIPS) Foundation, and an organizer of the NeurIPS AI conference. Alongside Geoff Hinton, Terry co-invented the Boltzmann machine technique for machine learning. He is the author of over 500 journal articles on neuroscience and AI, and the book "ChatGPT and the Future of AI". In the episode, Richie and Terry explore the current state of AI, historical developments in AI, the NeurIPS conference, collaboration between AI and neuroscience, AI’s shift from academia to industry, large vs small LLMs, creativity in AI, AI ethics, autonomous AI, AI agents, superintelligence, and much more.

Links Mentioned in the Show:
- NeurIPS Conference
- Terry’s Book: ChatGPT and the Future of AI: The Deep Language Revolution
- Connect with Terry
- Terry on Substack
- Course: Data Communication Concepts
- Related Episode: Guardrails for the Future of AI with Viktor Mayer-Schönberger, Professor of Internet Governance and Regulation at the University of Oxford
- Sign up to RADAR: Forward Edition
As AI becomes more accessible, a growing question is: should machine learning experts always be the ones training models, or is there a better way to leverage other subject matter experts in the business who know the use-case best? What if getting started building AI apps required no coding skills? As businesses look to implement AI at scale, what part can no-code AI apps play in getting projects off the ground, and how feasible are smaller, tailored solutions for department specific use-cases? Birago Jones is the CEO at Pienso. Pienso is an AI platform that empowers subject matter experts in various enterprises, such as business analysts, to create and fine-tune AI models without coding skills. Prior to Pienso, Birago was a Venture Partner at Indicator Ventures and a Research Assistant at MIT Media Lab where he also founded the Media Lab Alumni Association. Karthik Dinakar is a computer scientist specializing in machine learning, natural language processing, and human-computer interaction. He is the Chief Technology Officer and co-founder at Pienso. Prior to founding Pienso, Karthik held positions at Microsoft and Deutsche Bank. Karthik holds a doctoral degree from MIT in Machine Learning. In the episode, Richie, Birago and Karthik explore why no-code AI apps are becoming more prominent, use-cases of no-code AI apps, the steps involved in creating an LLM, the benefits of small tailored models, how no-code can impact workflows, cost in AI projects, AI interfaces and the rise of the chat interface, privacy and customization, excitement about the future of AI, and much more.

Links Mentioned in the Show:
- Pienso
- Google Gemini for Business
- Connect with Birago and Karthik
- Andreessen Horowitz Report: Navigating the High Cost of AI Compute
- Course: Artificial Intelligence (AI) Strategy
- Related Episode: Designing AI Applications with Robb Wilson, Co-Founder & CEO at Onereach.ai
- Rewatch sessions from RADAR: AI Edition
Perhaps the biggest complaint about generative AI is hallucination. If the text you want to generate involves facts, for example, a chatbot that answers questions, then hallucination is a problem. The solution to this is to make use of a technique called retrieval augmented generation, where you store facts in a vector database and retrieve the most appropriate ones to send to the large language model to help it give accurate responses. So, what goes into building vector databases and how do they improve LLM performance so much? Ram Sriharsha is currently the CTO at Pinecone. Before this role, he was the Director of Engineering at Pinecone and previously served as Vice President of Engineering at Splunk. He also worked as a Product Manager at Databricks. With a long history in the software development industry, Ram has held positions as an architect, lead product developer, and senior software engineer at various companies. Ram is also a long-time contributor to Apache Spark. In the episode, Richie and Ram explore common use-cases for vector databases, RAG in chatbots, steps to create a chatbot, static vs dynamic data, testing chatbot success, handling dynamic data, choosing language models, knowledge graphs, implementing vector databases, innovations in vector databases, the future of LLMs and much more.

Links Mentioned in the Show:
- Pinecone
- Webinar: Charting the Path: What the Future Holds for Generative AI
- Course: Vector Databases for Embeddings with Pinecone
- Related Episode: The Power of Vector Databases and Semantic Search with Elan Dekel, VP of Product at Pinecone
- Rewatch sessions from RADAR: AI Edition
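The retrieval step described above can be sketched in a few lines of Python. This is a minimal, illustrative toy: an in-memory list of (vector, fact) pairs stands in for a real vector database like Pinecone, and the letter-frequency `embed` function stands in for a learned embedding model; all names and the sample facts are assumptions for the sketch.

```python
import math

# Toy embedding: map text to a small vector of letter frequencies.
# A production system would use a learned embedding model instead.
def embed(text):
    text = text.lower()
    return [text.count(c) / max(len(text), 1) for c in "aeiourstln"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# "Vector database": facts stored alongside their embeddings.
facts = [
    "Pinecone is a managed vector database.",
    "Retrieval augmented generation reduces hallucination.",
    "Apache Spark is a distributed compute engine.",
]
index = [(embed(f), f) for f in facts]

# Retrieve the k facts most similar to the query.
def retrieve(query, k=2):
    q = embed(query)
    scored = sorted(index, key=lambda pair: cosine(pair[0], q), reverse=True)
    return [fact for _, fact in scored[:k]]

# The retrieved facts are prepended to the LLM prompt as grounding context.
context = retrieve("How do I stop my chatbot hallucinating?")
prompt = "Answer using these facts:\n" + "\n".join(context)
```

The same shape carries over to a real deployment: swap `embed` for an embedding model, swap the list for a vector database client, and send `prompt` to the LLM.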
By now, many of us are convinced that generative AI chatbots like ChatGPT are useful at work. However, many executives are rightfully worried about the risks from having business and customer conversations recorded by AI chatbot platforms. Some privacy and security-conscious organizations are going so far as to block these AI platforms completely. For organizations such as EY, a company that derives value from its intellectual property, leaders need to strike a balance between privacy and productivity. John Thompson runs the department for the ideation, design, development, implementation, & use of innovative Generative AI, Traditional AI, & Causal AI solutions, across all of EY's service lines, operating functions, geographies, & for EY's clients. His team has built the world's largest, secure, private LLM-based chat environment. John also runs the Marketing Sciences consultancy, advising clients on monetization strategies for data. He is the author of four books on data, including "Data for All" and "Causal Artificial Intelligence". Previously, he was the Global Head of AI at CSL Behring, an Adjunct Professor at Lake Forest Graduate School of Management, and an Executive Partner at Gartner. In the episode, Richie and John explore the adoption of GenAI at EY, data privacy and security, GenAI use cases and productivity improvements, GenAI for decision making, causal AI and synthetic data, industry trends and predictions, and much more.

Links Mentioned in the Show:
- Azure OpenAI
- Causality by Judea Pearl
- [Course] AI Ethics
- Related Episode: Data & AI at Tesco with Venkat Raghavan, Director of Analytics and Science at Tesco
- Catch John talking about AI Maturity this September
- Rewatch sessions from RADAR: AI Edition
This special episode of DataFramed was made in collaboration with Analytics on Fire! Nowadays, the hype around generative AI is only the tip of the iceberg. There are so many ideas being touted as the next big thing that it’s difficult to keep up. More importantly, it’s challenging to discern which ideas will become the next ChatGPT and which will end up like the next NFT. How do we cut through the noise? Mico Yuk is the Community Manager at Acryl Data and Co-Founder at Data Storytelling Academy. Mico is also an SAP Mentor Alumni, and the Founder of the popular weblog, Everything Xcelsius and the 'Xcelsius Gurus' Network. She was named one of the Top 50 Analytics Bloggers to follow, as well as a highly regarded BI influencer and sought-after global keynote speaker in the Analytics ecosystem. In the episode, Richie and Mico explore AI and productivity at work, the future of work and AI, GenAI and data roles, AI for training and learning, training at scale, decision intelligence, soft skills for data professionals, GenAI hype and much more.

Links Mentioned in the Show:
- Analytics on Fire Podcast
- Data Visualization for Dummies by Mico Yuk and Stephanie Diamond
- Connect with Mico
- [Skill Track] AI Fundamentals
- Related Episode: What to Expect from AI in 2024 with Craig S. Smith, Host of the Eye on A.I. Podcast
- Rewatch sessions from RADAR: AI Edition
Despite GPT, Claude, Gemini, LLama and the other host of LLMs that we have access to, a variety of organizations are still exploring their options when it comes to custom LLMs. Logging in to ChatGPT is easy enough, and so is creating a 'custom' OpenAI GPT, but what does it take to create a truly custom LLM? When and why might this be useful, and will it be worth the effort? Vincent Granville is a pioneer in the AI and machine learning space. He is Co-Founder of Data Science Central, Founder of MLTechniques.com, former VC-funded executive, author, and patent owner. Vincent’s corporate experience includes Visa, Wells Fargo, eBay, NBC, Microsoft, and CNET. He is also a former post-doc at Cambridge University and the National Institute of Statistical Sciences. Vincent has published in the Journal of Number Theory, Journal of the Royal Statistical Society, and IEEE Transactions on Pattern Analysis and Machine Intelligence. He is the author of multiple books, including “Synthetic Data and Generative AI”. In the episode, Richie and Vincent explore why you might want to create a custom LLM including issues with standard LLMs and benefits of custom LLMs, the development and features of custom LLMs, architecture and technical details, corporate use cases, technical innovations, ethics and legal considerations, and much more.

Links Mentioned in the Show:
- Read Articles by Vincent
- Synthetic Data and Generative AI by Vincent Granville
- Connect with Vincent on LinkedIn
- [Course] Developing LLM Applications with LangChain
- Related Episode: The Power of Vector Databases and Semantic Search with Elan Dekel, VP of Product at Pinecone
- Rewatch sessions from RADAR: AI Edition
Memory, the foundation of human intelligence, is still one of the most complex and mysterious aspects of the brain. Despite decades of research, we've only scratched the surface of understanding how our memories are formed, stored, and retrieved. But what if AI could help us crack the code on memory? How might AI be the key to unlocking problems that have evaded human cognition for so long? Kim Stachenfeld is a Senior Research Scientist at Google DeepMind in NYC and Affiliate Faculty at the Center for Theoretical Neuroscience at Columbia University. Her research covers topics in Neuroscience and AI. On the Neuroscience side, she studies how animals build and use models of their world that support memory and prediction. On the Machine Learning side, she works on implementing these cognitive functions in deep learning models. Kim’s work has been featured in The Atlantic, Quanta Magazine, Nautilus, and MIT Technology Review. In 2019, she was named one of MIT Tech Review’s Innovators under 35 for her work on predictive representations in hippocampus. In the episode, Richie and Kim explore her work on Google Gemini, the importance of customizability in AI models, the need for flexibility and adaptability in AI models, retrieval databases and how they improve AI response accuracy, AI-driven science, the importance of augmenting human capabilities with AI and the challenges associated with this goal, the intersection of AI, neuroscience and memory and much more.

Links Mentioned in the Show:
- DeepMind
- AlphaFold
- Dr James Whittington: A unifying framework for frontal and temporal representation of memory
- Paper: Language models show human-like content effects on reasoning tasks
- Kim’s Website
- [Course] Artificial Intelligence (AI) Strategy
- Related Episode: Making Better Decisions using Data & AI with Cassie Kozyrkov, Google's First Chief Decision Scientist
- Sign up to RADAR: AI Edition