talk-data.com

Topic: Large Language Models (LLM)

Tags: nlp, ai, machine_learning

Tagged: 9

Activity Trend (2020-Q1 to 2026-Q1): peak of 158 activities per quarter

Activities

Showing filtered results (filtering by: Adel)

Welcome to DataFramed Industry Roundups! In this series of episodes, Adel & Richie sit down to discuss the latest and greatest in data & AI. In this episode, we touch upon the launch of OpenAI’s o3 and o4-mini models, Meta’s rocky release of Llama 4, Google’s new agent tooling ecosystem, the growing arms race in AI, the latest from the Stanford AI Index report, the plausibility of AGI and superintelligence, how agents might evolve in the enterprise, global attitudes toward AI, and a deep dive into the speculative but chilling AI 2027 scenario. All that, Easter rave plans, and much more. Links Mentioned in the Show: Introducing OpenAI o3 and o4-mini; The Median: Scaling Models or Scaling People? Llama 4, A2A, and the State of AI in 2025; Llama 4; Google: Announcing the Agent2Agent Protocol (A2A); Stanford University's Human Centered AI Institute Releases 2025 AI Index Report; AI 2027; Rewatch sessions from RADAR: Skills Edition. New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for Business.

Welcome to DataFramed Industry Roundups! In this series of episodes, Adel & Richie sit down to discuss the latest and greatest in data & AI. In this episode, we discuss the rise of reasoning LLMs like DeepSeek R1 and the competition shaping the AI space, OpenAI’s Operator and the broader push for AI agents to control computers, and the implications of massive AI infrastructure investments like Project Stargate. We also touch on Google’s overlooked AI advancements, the challenges of AI adoption, the potential of Replit’s mobile app for building apps with natural language, and much more. Links Mentioned in the Show: YouTube Tutorial: Fine Tune DeepSeek R1 | Build a Medical Chatbot; OpenAI Deep Research; Open Operator; Gemini 2.0; Lex Fridman Podcast Episode on DeepSeek; Removing Barriers to American Leadership in Artificial Intelligence; President's Council of Advisors on Science and Technology; Project Stargate announcements from OpenAI, Softbank; Sam Altman's quest for $7tn; Replit Mobile App; Sign up to attend RADAR: Skills Edition. New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for Business.

Welcome to DataFramed Industry Roundups! In this series of episodes, Adel & Richie sit down to discuss the latest and greatest in data & AI. In this episode, we touch upon the brewing rivalry between OpenAI and Anthropic, discuss Claude's new computer use feature, Google's NotebookLM and its implications for the UX/UI of AI products, and a lot more. Links mentioned in the show: Chatbot Arena Leaderboard; NotebookLM; Anthropic Computer Use; Introducing OpenAI o1-preview. New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for Business.

We’re improving DataFramed, and we need your help! We want to hear what you have to say about the show, and how we can make it more enjoyable for you. Find out more here. Data is no longer just for coders. With the rise of low-code tools, more people across organizations can access data insights without needing programming skills. But how can companies leverage these tools effectively? And what steps should they take to integrate them into existing workflows while upskilling their teams? Michael Berthold is CEO and co-founder at KNIME, an open source data analytics company. He has more than 25 years of experience in data science, working in academia, most recently as a full professor at Konstanz University (Germany) and previously at University of California (Berkeley) and Carnegie Mellon, and in industry at Intel’s Neural Network Group, Utopy, and Tripos. Michael has published extensively on data analytics, machine learning, and artificial intelligence. In the episode, Adel and Michael explore low-code data science, the adoption of low-code data tools, the evolution of data science workflows, upskilling, low-code and code collaboration, data literacy, integration with AI and GenAI tools, the future of low-code data tools and much more. Links Mentioned in the Show: KNIME; Connect with Michael; Code Along: Low-Code Data Science and Analytics with KNIME; Course: Introduction to KNIME; Related Episode: No-Code LLMs In Practice with Birago Jones & Karthik Dinakar, CEO & CTO at Pienso. New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for Business.

Meta has been at the absolute edge of the open-source AI ecosystem, and with the recent release of Llama 3.1, they have officially created the largest open-source model to date. So, what's the secret behind the performance gains of Llama 3.1? What will the future of open-source AI look like? Thomas Scialom is a Senior Staff Research Scientist (LLMs) at Meta AI, and is one of the co-creators of the Llama family of models. Prior to joining Meta, Thomas worked as a teacher, lecturer, speaker, and quant trading researcher. In the episode, Adel and Thomas explore Llama 3.1 405B, its new features and improved performance, the challenges in training LLMs, best practices for training LLMs, pre- and post-training processes, the future of LLMs and AI, open vs. closed-source models, the GenAI landscape, scalability of AI models, current research and future trends, and much more. Links Mentioned in the Show: Meta - Introducing Llama 3.1: Our most capable models to date; Download the Llama Models; [Course] Working with Llama 3; [Skill Track] Developing AI Applications; Related Episode: Creating Custom LLMs with Vincent Granville, Founder, CEO & Chief AI Scientist at GenAItechLab.com; Rewatch sessions from RADAR: AI Edition. New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for Business.
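
For readers who want to try the Llama models discussed above, here is a minimal sketch (not from the episode) of running a Llama 3.1 Instruct checkpoint with the Hugging Face transformers library; the model ID, prompt, and generation settings are illustrative assumptions, and the gated meta-llama repository requires approved access.

```python
# Minimal sketch: text generation with a Llama 3.1 checkpoint via Hugging Face transformers.
# Assumptions: access to the gated meta-llama/Meta-Llama-3.1-8B-Instruct repo has been granted
# (huggingface-cli login), and transformers, torch, and accelerate are installed.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # illustrative model ID; pick a size that fits your hardware
    device_map="auto",  # place weights on available GPU(s), falling back to CPU
)

prompt = "In two sentences, explain the difference between pre-training and post-training of an LLM."
result = generator(prompt, max_new_tokens=120, do_sample=False)

# The pipeline returns the prompt followed by the model's continuation.
print(result[0]["generated_text"])
```

A smaller instruction-tuned checkpoint like the one above is the practical starting point; the 405B model discussed in the episode requires multi-GPU infrastructure.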

Since the launch of ChatGPT, one of the trending terms outside of ChatGPT itself has been prompt engineering. This act of carefully crafting your instructions is treated as alchemy by some and science by others. So what makes an effective prompt? Alex Banks has been building and scaling AI products since 2021. He writes Sunday Signal, a newsletter offering a blend of AI advancements and broader thought-provoking insights. He also shares his expertise on X/Twitter and LinkedIn, where he educates a diverse audience on leveraging AI to enhance productivity and transform daily life. In the episode, Alex and Adel cover Alex’s journey into AI and what led him to create Sunday Signal, the potential of AI, prompt engineering at its most basic level, strategies for better prompting, chain-of-thought prompting, prompt engineering as a skill and career path, building your own AI tools rather than using consumer AI products, AI literacy, the future of LLMs and much more. Links Mentioned in the Show: [Alex’s Free Course on DataCamp] Understanding Prompt Engineering; Sunday Signal; Principles by Ray Dalio: Life and Work; Related Episode: [DataFramed AI Series #1] ChatGPT and the OpenAI Developer Ecosystem; Rewatch sessions from RADAR: The Analytics Edition. New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for Business.
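
To make the chain-of-thought idea mentioned above concrete, here is a minimal sketch using the OpenAI Python SDK that contrasts a direct prompt with one asking the model to reason step by step; the model name and prompts are illustrative assumptions rather than anything covered in the episode.

```python
# Minimal sketch of chain-of-thought prompting with the OpenAI Python SDK (pip install openai).
# Assumptions: OPENAI_API_KEY is set in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

question = "A train leaves at 14:10 and arrives at 16:45. How long is the journey?"

# Direct prompt: ask for the answer with no guidance on how to reason.
direct = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought prompt: explicitly ask the model to work through the problem
# step by step before stating its final answer, which often helps on multi-step tasks.
cot = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": question + " Work through the problem step by step, then state the final answer.",
    }],
)

print("Direct:", direct.choices[0].message.content)
print("Step by step:", cot.choices[0].message.content)
```

The same pattern works with any chat-style LLM API: only the wording of the prompt changes, not the call itself.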

In today's AI landscape, organizations are actively exploring how to seamlessly embed AI into their products, systems, processes, and workflows. The success of ChatGPT stands as a testament to this. Its success is not solely due to the performance of the underlying model; a significant part of its appeal lies in its human-centered user experience, particularly its chat interface. Beyond the foundational skills, infrastructure, and tools, it's clear that great design is a crucial ingredient in building memorable AI experiences. How do you build human-centered AI experiences? What is the role of design in driving successful AI implementations? How can data leaders and practitioners adopt a design lens when building with AI? Here to answer these questions is Haris Butt, Head of Product Design at ClickUp. ClickUp is a project management tool that's been making a big bet on AI, and Haris plays a key role in shaping how AI is embedded within the platform. Throughout the episode, Adel & Haris spoke about the role of design in driving human-centered AI experiences, the iterative process of designing with large language models, how to design AI experiences that promote trust, how designing for AI differs from traditional software, whether good design will ultimately end up killing prompt engineering, and a lot more.

For the past few years, we've seen the importance of data literacy and why organizations must invest in a data-driven culture, mindset, and skillset. However, as generative AI tools like ChatGPT have risen to prominence in the past year, AI literacy has never been more important. But how do we begin to approach AI literacy? Is it an extension of data literacy, a complement, or a new paradigm altogether? How should you get started on your AI literacy ambitions?  Cindi Howson is the Chief Data Strategy Officer at ThoughtSpot and host of The Data Chief podcast. Cindi is a data analytics, AI, and BI thought leader and an expert with a flair for bridging business needs with technology. As Chief Data Strategy Officer at ThoughtSpot, she advises top clients on data strategy and best practices to become data-driven, speaks internationally on top trends such as AI ethics, and influences ThoughtSpot’s product strategy.

Cindi was previously a Gartner Research Vice President, the lead author of the Data and Analytics Maturity Model and the Analytics and BI Magic Quadrant, and a popular keynote speaker. She introduced new research in data and AI for good, NLP/BI Search, and augmented analytics, bringing both BI bake-offs and innovation panels to Gartner globally. She’s frequently quoted in MIT, Harvard Business Review, and Information Week. She is rated a top 12 influencer in big data and analytics by Analytics Insight, Onalytica, Solutions Review, and Humans of Data.

In the episode, Cindi and Adel discuss how generative AI accelerates an organization’s data literacy, how leaders can think beyond data literacy and start to think about AI literacy, the importance of responsible use of AI, how to best communicate the value of AI within your organization, what generative AI means for data teams, AI use-cases in the data space, the psychological barriers blocking AI adoption, and much more. 

Links Mentioned in the Show: The Data Chief Podcast; ThoughtSpot Sage; BloombergGPT; Radar: Data & AI Literacy; Course: AI Ethics; Course: Generative AI Concepts; Course: Implementing AI Solutions in Business

'Software is eating the world’ is a truism coined by Marc Andreessen, General Partner at Andreessen Horowitz. This was especially evident during the shift from analog mediums to digital at the turn of the century. Software companies have essentially usurped and replaced their non-digital predecessors: Amazon became the largest bookseller, Netflix the largest movie "rental" service, and Spotify and Apple the largest music providers. Today, AI is starting to eat the world. However, we are still in the early stages of the AI revolution, with AI set to become embedded in almost every piece of software we interact with. An AI ecosystem that touches every aspect of our lives is what today’s guest describes as ‘Ambient AI’. But what can we expect from this ramp-up to Ambient AI? How will it change the way we work? What do we need to be mindful of as we develop this technology? Daniel Jeffries is the Managing Director of the AI Infrastructure Alliance and former CIO at Stability AI, the company responsible for Stable Diffusion, the popular open-source image generation model. He’s also an author, engineer, futurist, and pro blogger, and he’s given talks all over the world on AI and cryptographic platforms. In the episode, Adel and Daniel discuss how to define Ambient AI, how our relationship with work will evolve as we become more reliant on AI, what the AI ecosystem is missing to rapidly scale adoption, why we need to accelerate the maturity of the open-source AI ecosystem, how AI existential risk discourse takes away focus from real AI risk, and a lot more.

Links Mentioned in the Show: Daniel’s Writing on Medium; Daniel’s Substack; AI Infrastructure Alliance; Stability AI; Francois Chollet; Red Pajama Dataset; Run AI; Will Superintelligent AI End the World? by Eliezer Yudkowsky; Nick Bostrom’s Paper Clip Maximizer; The Pessimist Archive; [Course] Introduction to ChatGPT; [Course] Implementing AI Solutions in Business