talk-data.com

Topic: Large Language Models (LLM)

Tags: nlp, ai, machine_learning

56 tagged activities

Activity Trend: peak of 158 activities per quarter (2020-Q1 to 2026-Q1)

Activities (filtered by: DataFramed)

For the past few years, we've seen the importance of data literacy and why organizations must invest in a data-driven culture, mindset, and skill set. However, as generative AI tools like ChatGPT have risen to prominence over the past year, AI literacy has never been more important. But how do we begin to approach AI literacy? Is it an extension of data literacy, a complement, or a new paradigm altogether? And how should you get started on your AI literacy ambitions?

Cindi Howson is the Chief Data Strategy Officer at ThoughtSpot and host of The Data Chief podcast. Cindi is a data analytics, AI, and BI thought leader with a flair for bridging business needs with technology. As Chief Data Strategy Officer at ThoughtSpot, she advises top clients on data strategy and best practices for becoming data-driven, speaks internationally on top trends such as AI ethics, and influences ThoughtSpot’s product strategy.

Cindi was previously a Gartner Research Vice President, the lead author of the data and analytics maturity model and the analytics and BI Magic Quadrant, and a popular keynote speaker. She introduced new research in data and AI for good, NLP/BI search, and augmented analytics, bringing both BI bake-offs and innovation panels to Gartner globally. She is frequently quoted in MIT, Harvard Business Review, and InformationWeek, and is rated a top-12 influencer in big data and analytics by Analytics Insight, Onalytica, Solutions Review, and Humans of Data.

In the episode, Cindi and Adel discuss how generative AI accelerates an organization’s data literacy, how leaders can think beyond data literacy and start to think about AI literacy, the importance of responsible use of AI, how to best communicate the value of AI within your organization, what generative AI means for data teams, AI use-cases in the data space, the psychological barriers blocking AI adoption, and much more. 

Links Mentioned in the Show: The Data Chief Podcast, ThoughtSpot Sage, BloombergGPT, Radar: Data & AI Literacy, Course: AI Ethics, Course: Generative AI Concepts, Course: Implementing AI Solutions in Business

Generative AI is here to stay—even in the 8 months since the public release of ChatGPT, an abundance of AI tools has emerged to help make us more productive at work and to ease the planning and execution of our daily lives, among other things. Already, many of us are wondering what is to come in the next 8 months, the next year, and the next decade of AI’s evolution. In the grand scheme of things, this really is just the beginning. But what should we expect in this Cambrian explosion of technology? What are the use cases being developed behind the scenes? What do we need to be mindful of when training the next generations of AI? Can we combine multiple LLMs to get better results? Bal Heroor is CEO and Principal at Mactores and has led over 150 business transformations driven by analytics and cutting-edge technology. His team at Mactores is researching and building AI, AR/VR, and quantum computing solutions that give businesses a competitive advantage. Bal is also the Co-Founder of Aedeon—the first hyper-scale marketplace for data analytics and AI talent. In the episode, Richie and Bal explore common use cases for generative AI, how it's evolving to solve enterprise problems, the challenges of data governance and the importance of explainable AI, and the difficulty of tracking the lineage of AI and data in large organizations. Bal also touches on the shift from general-purpose generative AI models to more specialized models, fascinating use cases in the manufacturing industry, what to consider when adopting AI solutions in business, and much more. Links mentioned in the show: Pulsar, Trifacta, AWS Clarify, [Course] Introduction to ChatGPT, [Course] Implementing AI Solutions in Business, [Course] Generative AI Concepts

‘Software is eating the world’ is a truism coined by Marc Andreessen, General Partner at Andreessen Horowitz. This was especially evident during the shift from analog mediums to digital at the turn of the century, as software companies essentially usurped and replaced their non-digital predecessors: Amazon became the largest bookseller, Netflix the largest movie "rental" service, and Spotify and Apple the largest music providers. Today, AI is starting to eat the world. However, we are still at the very start of the AI revolution, with AI set to become embedded in almost every piece of software we interact with. An AI ecosystem that touches every aspect of our lives is what today’s guest describes as ‘Ambient AI’. But what can we expect from this ramp up to Ambient AI? How will it change the way we work? What do we need to be mindful of as we develop this technology? Daniel Jeffries is the Managing Director of the AI Infrastructure Alliance and former CIO at Stability AI, the company responsible for Stable Diffusion, the popular open-source image generation model. He’s also an author, engineer, futurist, and pro blogger, and he’s given talks all over the world on AI and cryptographic platforms. In the episode, Adel and Daniel discuss how to define ambient AI, how our relationship with work will evolve as we become more reliant on AI, what the AI ecosystem is missing to rapidly scale adoption, why we need to accelerate the maturity of the open-source AI ecosystem, how AI existential risk discourse takes focus away from real AI risk, and a lot more.

Links Mentioned in the Show: Daniel’s Writing on Medium, Daniel’s Substack, AI Infrastructure Alliance, Stability AI, Francois Chollet, Red Pajama Dataset, Run AI, Will Superintelligent AI End the World? by Eliezer Yudkowsky, Nick Bostrom’s Paper Clip Maximizer, The Pessimist Archive, [Course] Introduction to ChatGPT, [Course] Implementing AI Solutions in Business

In a time when AI is evolving at breakneck speed, taking a step back and gaining a bird's-eye view of the evolving AI ecosystem is paramount to understanding where the field is headed. With this bird's-eye view comes a series of questions. Which trends will dominate generative AI in the foreseeable future? What are the truly transformative use cases that will reshape our business landscape? What does the skills economy look like in an age of hyper-intelligence? Enter Joanne Chen, General Partner at Foundation Capital. Joanne invests in early-stage AI-first B2B applications and data platforms that are the building blocks of the automated enterprise. She has shared her learnings as a featured speaker at conferences including CES, SXSW, and WebSummit, and has spoken about the impact of AI on society in her TED talk titled "Confessions of an AI Investor." Joanne began her career as an engineer at Cisco Systems and later co-founded a mobile gaming company. She also spent many years working on Wall Street at Jefferies & Company, helping tech companies go through IPO and M&A processes, and at Probitas Partners, advising venture firms on their fundraising process. Throughout the episode, Richie and Joanne cover emerging trends in generative AI, business use cases that have emerged in the year since the advent of tools like ChatGPT, the role of AI in augmenting work, the ever-changing job market and AI's impact on it, as well as actionable insights for individuals and organizations wanting to adopt AI. Links mentioned in the show: JasperAI, AnyScale, Cerebras, [Course] Introduction to ChatGPT, [Course] Implementing AI Solutions in Business, [Course] Generative AI Concepts

Data and AI are advancing at an unprecedented rate—and while the jury is still out on achieving superintelligent AI systems, the idea of artificial intelligence that can understand and learn anything—an “artificial general intelligence”—is becoming more likely. What does the rise of AI mean for the future of software and work as we know it? How will AI help reinvent most of the ways we interact with the digital and physical world? Bob Muglia is a data technology investor and business executive, former CEO of Snowflake, and past president of Microsoft's Server and Tools Division. As a leader in data and AI, Bob focuses on how innovation and ethical values can merge to shape the data economy's future in the era of AI. He serves as a board director for emerging companies that seek to maximize the power of data to help solve some of the world's most challenging problems. In the episode, Richie and Bob explore the current era of AI and what it means for the future of software. Throughout the episode, they discuss how to approach driving value with large language models, the main challenges organizations face when deploying AI systems, the risks and rewards of fine-tuning LLMs for specific use cases, what the next 12 to 18 months hold for the burgeoning AI ecosystem, the likelihood of superintelligence within our lifetimes, and more. Links from the show: The Datapreneurs by Bob Muglia and Steve Hamm, The Singularity Is Near by Ray Kurzweil, Isaac Asimov, Snowflake, Pinecone, Docugami, OpenAI/GPT-4, The Modern Data Stack

About 10 years ago, Thomas Davenport and DJ Patil published the article "Data Scientist: The Sexiest Job of the 21st Century" in the Harvard Business Review. In this piece, they described the burgeoning role of the data scientist and what it would mean for organizations and individuals in the coming decade. As time has passed, data science has become increasingly institutionalized. Once seen as a luxury, it is now deemed a necessity in every modern boardroom. Moreover, as technologies like AI and systems like ChatGPT keep astonishing us with their capabilities in handling data science tasks, a pertinent question arises: is data science still the sexiest job of the 21st century? In this episode, we invited Thomas Davenport on the show to share his perspective on where data science and AI are today, and where they are headed. Thomas Davenport is the President’s Distinguished Professor of Information Technology and Management at Babson College, the co-founder of the International Institute for Analytics, a Fellow of the MIT Initiative on the Digital Economy, and a Senior Advisor to Deloitte Analytics. He has written or edited twenty books and over 250 print or digital articles for Harvard Business Review (HBR), Sloan Management Review, the Financial Times, and many other publications. One of HBR’s most frequently published authors, Thomas has been at the forefront of the Process Innovation, Knowledge Management, and Analytics and Big Data movements. He pioneered the concept of “competing on analytics” with his 2006 Harvard Business Review article and his 2007 book of the same name. Since then, he has continued to provide cutting-edge insights on how companies can use analytics and big data to their advantage, and more recently on artificial intelligence. Throughout the episode, we discuss how data science has changed since he first published his article, how it has become more institutionalized, how data leaders can drive value with data science, the importance of data culture, his views on AI and where he thinks it is going, and a lot more. Links from the Show: Working with AI by Thomas Davenport, The AI Advantage: How to Put the Artificial Intelligence Revolution to Work by Thomas Davenport, Harvard Business Review, New Vantage Partners, CCC Intelligent Solutions, Radar AI

Although many have become cognizant of AI’s value in recent months, the further back we look, the more exclusive this group of people becomes. In our latest AI-series episode of DataFramed, we gain insight from an expert who has been part of the industry for 40 years. Joaquin Marques, Founder and Principal Data Scientist at Kanayma LLC, has been working in AI since 1983. With experience at major tech companies like IBM, Verizon, and Oracle, Joaquin's knowledge of AI is vast. Today, he leads an AI consultancy, Kanayma, where he creates innovative AI products. Throughout the episode, Joaquin shares his insights on AI's development over the years, its current state, and its future possibilities. Joaquin also shares the exciting projects they've worked on at Kanayma, what to consider when building AI products, and how ChatGPT is making chatbots better. Joaquin goes beyond providing insight into the space, encouraging listeners to think about the practical consequences of implementing AI and sharing the finer technical details of many of the solutions he’s helped build. He also shares many of the thought processes that have helped him move forward when building AI products, providing context on practical applications of AI, both from his past and from the bleeding edge of today. The discussion examines the complexities of artificial intelligence from the perspective of someone who has been focused on this technology for longer than most. Tune in for guidance on how to build AI into your own company's products.

With the advances in AI products and the explosion of ChatGPT in recent months, it is becoming easier to imagine a world where AI and humans work seamlessly together—revolutionizing how we solve complex problems and transforming our daily lives. This is especially the case for data professionals. In this episode of our AI series, we speak to Sarah Schlobohm, Head of AI at Kubrick Group. Dr. Schlobohm leads the training of the next generation of machine learning engineers. With a background in finance and consulting, Sarah has a deep understanding of the intersection between business strategy, data science, and AI. Prior to her work in finance, Sarah became a chartered accountant, honing her skills in financial analysis and strategy. She later worked for one of the world's largest banks, where she used data science to fight financial crime, making significant contributions to the industry's efforts to combat money laundering and other illicit activities. Sarah shares her extensive knowledge on incorporating AI within data teams for maximum impact, covering a wide array of AI-related topics, including upskilling, productivity, and communication, to help data professionals understand how to integrate generative AI effectively into their daily work. Throughout the episode, Sarah explores the challenges and risks of AI integration, touching on the balance between privacy and utility. She highlights the risks data teams can avoid when using AI products and how to approach using AI products the right way. She also covers how different roles within a data team might make use of generative AI, as well as how it might affect coding ability going forward. Sarah also shares use cases for those in non-data teams, such as marketing, while highlighting what to consider when using outputs from GPT models. Sarah discusses the impact chatbots might have on education, calling attention to the power of AI tutors in schools. She encourages people to start using AI now, given that the barrier to entry is so low and may not remain so going forward. From automating mundane tasks to enabling human-AI collaboration that makes work more enjoyable, Sarah underscores the transformative power of AI in shaping the future of humanity. Whether you're an AI enthusiast, a data professional, or someone with an interest in either, this episode will provide you with a deeper understanding of the practical aspects of AI implementation.

With the advent of any new technology that promises to make humans' lives easier, replacing conscious actions with automation, there is always backlash. People are often aware of the potential displacement of jobs, and it is frequently viewed in a negative light. But how do we shift the collective understanding towards one of hope and excitement? What use cases can be shared that will change the opinion of those who are wary of AI? Noelle Silver Russell is the Global AI Solutions & Generative AI & LLM Industry Lead at Accenture, responsible for enterprise-scale industry playbooks for generative AI and LLMs. In this episode of our AI series, Noelle discusses how to prioritize ChatGPT use cases by focusing on the different aspects of value creation that GPT models can bring to individuals and organizations. She addresses common misconceptions surrounding ChatGPT and AI in general, emphasizing the importance of understanding their potential benefits and selecting use cases that maximize positive impact, foster innovation, and contribute to job creation. Noelle draws parallels between the fast-moving AI projects of today and the launch of Amazon Alexa, which she worked on, and points out that many of the discussions being raised today were also had 10 years ago. She discusses how companies can now use AI to focus on both business efficiencies and customer experience, no longer having to settle for a trade-off between the two. Noelle explains the best way for companies to approach adding GPT tools into their processes, which focuses on taking a holistic view of implementation. She also recommends use cases for companies that are just beginning to use AI, the challenges they might face when deploying models into production, and how to mitigate them. On the topic of job displacement, Noelle draws parallels with the launch of Alexa, which faced similar criticisms, digging into the fear that people have around new technology and how it can be transformed into enthusiasm. Noelle suggests that there is a burden on leadership within organizations to create a culture where people are excited to use AI tools, rather than feeling threatened by them.

ChatGPT has leaped to the forefront of our lives—everyone from students to multinational organizations is seeing value in adding a chat interface to an LLM. But OpenAI has been concentrating on this for years, steadily developing one of the most viral digital products of this century. In this episode of our AI series, we sit down with Logan Kilpatrick. Logan currently leads developer relations at OpenAI, supporting developers building with DALL-E, the OpenAI API, and ChatGPT. Logan takes us through OpenAI’s products, API, and models, and provides insights into the many use cases of ChatGPT. Logan provides fascinating information on ChatGPT’s plugins and how they can be used to build agents that help us in a variety of contexts. He also discusses the future integration of LLMs into our daily lives and how it will add structure to the unstructured, difficult-to-leverage data we generate and interact with every day. Logan also touches on the powerful image input features in GPT-4, how they can help those with partial sight improve their quality of life, and how they can be used for various other use cases. Throughout the episode, we unpack the need for collaboration and innovation, since ChatGPT becomes more powerful when integrated with other pieces of software. We cover key discussion points around current AI tools, in particular what could be built in-house by OpenAI and what could be built in the public domain. Logan also discusses the ecosystem forming around ChatGPT and how it will all become connected going forward. Finally, Logan shares tips for getting better responses from ChatGPT and the things to consider when integrating it into your organization’s product. This episode provides a deep dive into the world of GPT models from within the eye of the storm, offering valuable insights to those interested in AI and its practical applications in our daily lives.

The concept of literate programming, or the idea of programming in a document, was first introduced in 1984 by Donald Knuth. Today, notebooks are the de facto tool for doing data science work. So as the data tooling space continues to evolve at breakneck speed, what are the possible directions the data science notebook can take? In this episode of DataFramed, we talk with Dr. Jodie Burchell, Data Science Developer Advocate at JetBrains, to find out how data science notebooks evolved into what they are today, what her predictions are for the future of notebooks and data science, and how generative AI will impact data teams going forward. Jodie completed a Ph.D. in clinical psychology and a postdoc in biostatistics before transitioning into data science. She has since worked for 7 years as a data scientist, developing products ranging from recommendation systems to audience profiling. She is also a prolific content creator in the data science community. Throughout the episode, Jodie discusses the evolution of data science notebooks over the last few years, noting how the move to remote-based notebooks has allowed for the seamless development of more complex models straight from the notebook environment. Jodie and Adel’s conversation also covers the tooling challenges that have led to modern IDEs and notebooks, with Jodie highlighting the importance of good database tooling and visibility. She shares how data science notebooks have evolved to help democratize data for the wider organization, the trade-offs between engineering-led and data-science-led approaches to tooling, what generative AI means for the data profession, her predictions for data science, and more. Tune in to this episode to learn more about the evolution of data science notebooks and the challenges and opportunities facing the data science community today. Links mentioned in the show: DataCamp Workspace: An In-Browser Notebook IDE, JetBrains' Datalore, Nick Cave on ChatGPT song lyrics imitating his style, GitHub Copilot. More on the topic: The Past, Present, and Future of the Data Science Notebook, How to Use Jupyter Notebooks: The Ultimate Guide

Data leaders play a critical role in driving innovation and growth across industries, and this is particularly true in highly regulated industries such as aviation. In these industries, data leaders face unique challenges and opportunities, working to balance the need for innovation with strict regulatory requirements. This week’s guest is Derek Cedillo, who has 27 years of experience working in data and analytics at GE Aerospace. Derek currently works as a Senior Manager for GE Aerospace’s Remote Monitoring and Diagnostics division, having previously worked as the Senior Director for Data Science and Analytics. In the episode, Derek shares the key components of successfully managing a data science program within a large and highly regulated organization. He also shares his insights on how to standardize data science planning across various projects and how to get data scientists to think and work in an agile manner. We hear about ideal data team structures, how to approach hiring, and what skills to look for in new hires. The conversation also touches on the responsibility data leaders have within organizations to champion data-driven decisions and strategy, as well as the complexity data leaders face in highly regulated industries. When it comes to solving problems that provide value for the business, engagement and transparency are key. Derek shares how to ensure that expectations are met through clear and frank conversations with executives that align expectations between management and data science teams.

Finally, you'll learn about validation frameworks, best practices for teams in less regulated industries, what trends to look out for in 2023, and how ChatGPT is changing how executives define their expectations of data science teams.

Links mentioned in the show: The Checklist Manifesto by Atul Gawande, Team of Teams by General Stanley McChrystal, The Harvard Data Science Review Podcast

Relevant Links from DataCamp: Article: Storytelling for More Impactful Data Science, Course: Data Communication Concepts, Course: Data-Driven Decision-Making for Business

Throughout 2022, there was an explosion in generative AI for images and text. GPT-3 and DALL-E 2 pointed us towards an AI-driven future. Recently, ChatGPT has taken the (data) world by storm—prompting many questions about how generative AI can be used in day-to-day activities. With the incredible amount of hype surrounding these new tools, we wanted to have a discussion grounded in how they are being operationalized today. Enter Scott Downes. Scott is the CTO of Invisible Technologies, a process automation platform that uses GPT-3 and other generative text technologies. Scott joins the show to talk about how organizations and data professionals can maximize the potential of these tools and how AI and humans can work together in a complementary fashion to optimize workflows, reduce time-intensive, tedious tasks, and do higher-quality work. Scott has a decade of experience in technology, product engineering, and technical leadership, making him a veteran in training and mentoring employees across the organization, whether their roles are more creative or more technical. Throughout the conversation, we talk about how Invisible Technologies uses GPT-3 to optimize workflows, a brief overview of GPT-3 and its use cases for working with text, how GPT-3 helps companies scale their operations, the promise of tools like ChatGPT, how AI analysis and human review can work together to save lives, and much more.

2022 was an incredible year for Generative AI. From text generation models like GPT-3 to the rising popularity of AI image generation tools, generative AI has rapidly evolved over the last few years in both its popularity and its use cases.

Martin Musiol joins the show this week to explore the business use cases of generative AI and how it will continue to impact the way society interacts with data. Martin is a Data Science Manager at IBM, as well as Co-Founder of and instructor at Generative AI, where he teaches people to develop their own AI that generates images, videos, music, text, and other data. Martin has also been a keynote speaker at various events, such as Codemotion Milan. Having discovered his passion for AI in 2012, Martin has turned that passion into expertise, becoming a thought leader in the AI and machine learning space.

In this episode, we talk about the state of generative AI today, privacy and intellectual property concerns, the strongest use cases for generative AI, what the future holds, and much more.

In 2020, OpenAI launched GPT-3, a large language model that is demonstrating the potential to radically change how we interact with software and open up a completely new paradigm for cognitive software applications.

Today’s episode features Sandra Kublik and Shubham Saboo, authors of GPT-3: Building Innovative NLP Products Using Large Language Models. We discuss what makes GPT-3 unique, transformative use-cases it has ushered in, the technology powering GPT-3, its risks and limitations, whether scaling models is the path to “Artificial General Intelligence”, and more.

Announcement

For the next seven days, DataCamp Premium and DataCamp for Teams are free. Gain free access by going here.