talk-data.com

Event

DataFramed

2019-04-01 – 2025-12-01 Podcasts

Activities tracked

56

Welcome to DataFramed, a weekly podcast exploring how artificial intelligence and data are changing the world around us. On this show, we invite data & AI leaders at the forefront of the data revolution to share their insights and experiences into how they lead the charge in this era of AI. Whether you're a beginner looking to gain insights into a career in data & AI, a practitioner needing to stay up-to-date on the latest tools and trends, or a leader looking to transform how your organization uses data & AI, there's something here for everyone.

Join co-hosts Adel Nehme and Richie Cotton as they delve into the stories and ideas that are shaping the future of data. Subscribe to the show and tune in to the latest episode on the feed below.

Filtering by: LLM

Sessions & talks

Showing 26–50 of 56 · Newest first


#227 DataFramed x Analytics On Fire: Riding the AI Hype Cycle with Mico Yuk, Co-Founder at Data Storytelling Academy

2024-07-18 Listen
podcast_episode
Mico Yuk (Data Storytelling Academy) , Richie (DataCamp)

This special episode of DataFramed was made in collaboration with Analytics on Fire! Nowadays, the hype around generative AI is only the tip of the iceberg. There are so many ideas being touted as the next big thing that it’s difficult to keep up. More importantly, it’s challenging to discern which ideas will become the next ChatGPT and which will end up like the next NFT. How do we cut through the noise? Mico Yuk is the Community Manager at Acryl Data and Co-Founder at Data Storytelling Academy. Mico is also an SAP Mentor Alumni and the Founder of the popular weblog Everything Xcelsius and the 'Xcelsius Gurus’ Network. She was named one of the Top 50 Analytics Bloggers to follow, and is a highly regarded BI influencer and sought-after global keynote speaker in the analytics ecosystem. In the episode, Richie and Mico explore AI and productivity at work, the future of work and AI, GenAI and data roles, AI for training and learning, training at scale, decision intelligence, soft skills for data professionals, GenAI hype and much more.

Links Mentioned in the Show:
Analytics on Fire Podcast
Data Visualization for Dummies by Mico Yuk and Stephanie Diamond
Connect with Mico
[Skill Track] AI Fundamentals
Related Episode: What to Expect from AI in 2024 with Craig S. Smith, Host of the Eye on A.I. Podcast
Rewatch sessions from RADAR: AI Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

#226 Creating Custom LLMs with Vincent Granville, Founder, CEO & Chief AI Scientist at GenAItechLab.com

2024-07-15 Listen
podcast_episode
Vincent Granville (GenAItechLab.com) , Richie (DataCamp)

Despite GPT, Claude, Gemini, Llama and the host of other LLMs that we have access to, a variety of organizations are still exploring their options when it comes to custom LLMs. Logging in to ChatGPT is easy enough, and so is creating a 'custom' OpenAI GPT, but what does it take to create a truly custom LLM? When and why might this be useful, and will it be worth the effort? Vincent Granville is a pioneer in the AI and machine learning space. He is Co-Founder of Data Science Central, Founder of MLTechniques.com, a former VC-funded executive, author, and patent owner. Vincent’s corporate experience includes Visa, Wells Fargo, eBay, NBC, Microsoft, and CNET. He is also a former post-doc at Cambridge University and the National Institute of Statistical Sciences. Vincent has published in the Journal of Number Theory, Journal of the Royal Statistical Society, and IEEE Transactions on Pattern Analysis and Machine Intelligence. He is the author of multiple books, including “Synthetic Data and Generative AI”. In the episode, Richie and Vincent explore why you might want to create a custom LLM, including issues with standard LLMs and the benefits of custom ones, the development and features of custom LLMs, architecture and technical details, corporate use cases, technical innovations, ethics and legal considerations, and much more.

Links Mentioned in the Show:
Read Articles by Vincent
Synthetic Data and Generative AI by Vincent Granville
Connect with Vincent on LinkedIn
[Course] Developing LLM Applications with LangChain
Related Episode: The Power of Vector Databases and Semantic Search with Elan Dekel, VP of Product at Pinecone
Rewatch sessions from RADAR: AI Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

#221 [Radar Recap] The Future of Programming: Accelerating Coding Workflows with LLMs

2024-07-02 Listen
podcast_episode
Ryan J. Salva (GitHub) , Jordan Tigani (MotherDuck) , Michele Catasta (Replit)

From data science to software engineering, Large Language Models (LLMs) have emerged as pivotal tools in shaping the future of programming. In this session, Michele Catasta, VP of AI at Replit, Jordan Tigani, CEO at MotherDuck, and Ryan J. Salva, VP of Product at GitHub, explore practical applications of LLMs in coding workflows, how to best approach integrating AI into the workflows of data teams, what the future holds for AI-assisted coding, and a lot more.

Links Mentioned in the Show:
Rewatch Session from RADAR: AI Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

#214 Learning & Memory, For Brains & AI, with Kim Stachenfeld, Senior Research Scientist at Google DeepMind

2024-06-10 Listen
podcast_episode
Richie (DataCamp) , Kim Stachenfeld (Google DeepMind)

Memory, the foundation of human intelligence, is still one of the most complex and mysterious aspects of the brain. Despite decades of research, we've only scratched the surface of understanding how our memories are formed, stored, and retrieved. But what if AI could help us crack the code on memory? How might AI be the key to unlocking problems that have evaded human cognition for so long? Kim Stachenfeld is a Senior Research Scientist at Google DeepMind in NYC and Affiliate Faculty at the Center for Theoretical Neuroscience at Columbia University. Her research covers topics in neuroscience and AI. On the neuroscience side, she studies how animals build and use models of their world that support memory and prediction. On the machine learning side, she works on implementing these cognitive functions in deep learning models. Kim’s work has been featured in The Atlantic, Quanta Magazine, Nautilus, and MIT Technology Review. In 2019, she was named one of MIT Tech Review’s Innovators Under 35 for her work on predictive representations in the hippocampus. In the episode, Richie and Kim explore her work on Google Gemini, the importance of customizability in AI models, the need for flexibility and adaptability in AI models, retrieval databases and how they improve AI response accuracy, AI-driven science, the importance of augmenting human capabilities with AI and the challenges associated with this goal, the intersection of AI, neuroscience and memory, and much more.

Links Mentioned in the Show:
DeepMind
AlphaFold
Dr James Whittington - A unifying framework for frontal and temporal representation of memory
Paper - Language models show human-like content effects on reasoning tasks
Kim’s Website
[Course] Artificial Intelligence (AI) Strategy
Related Episode: Making Better Decisions using Data & AI with Cassie Kozyrkov, Google's First Chief Decision Scientist
Sign up to RADAR: AI Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

#196 The Art of Prompt Engineering with Alex Banks, Founder and Educator, Sunday Signal

2024-04-08 Listen
podcast_episode
Alex Banks (Sunday Signal) , Adel (DataFramed)

Since the launch of ChatGPT, one of the trending terms outside of ChatGPT itself has been prompt engineering. This act of carefully crafting your instructions is treated as alchemy by some and science by others. So what makes an effective prompt? Alex Banks has been building and scaling AI products since 2021. He writes Sunday Signal, a newsletter offering a blend of AI advancements and broader thought-provoking insights. His expertise extends to social media platforms on X/Twitter and LinkedIn, where he educates a diverse audience on leveraging AI to enhance productivity and transform daily life. In the episode, Alex and Adel cover Alex’s journey into AI and what led him to create Sunday Signal, the potential of AI, prompt engineering at its most basic level, strategies for better prompting, chain-of-thought prompting, prompt engineering as a skill and career path, building your own AI tools rather than using consumer AI products, AI literacy, the future of LLMs and much more.

Links Mentioned in the Show:
[Alex’s Free Course on DataCamp] Understanding Prompt Engineering
Sunday Signal
Principles by Ray Dalio: Life and Work
Related Episode: [DataFramed AI Series #1] ChatGPT and the OpenAI Developer Ecosystem
Rewatch sessions from RADAR: The Analytics Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business
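The contrast between a bare prompt and a chain-of-thought prompt, one of the techniques discussed in this episode, can be shown with two small templates. This is an illustrative sketch only: the template wording below is ours, not a quote from Alex or the show.

```python
# Illustrative prompt templates -- the exact wording is an assumption,
# not taken from the episode.

def basic_prompt(question: str) -> str:
    """A bare instruction: just ask the question."""
    return f"Answer the following question.\n\nQuestion: {question}\nAnswer:"

def chain_of_thought_prompt(question: str) -> str:
    """Chain-of-thought prompting: ask the model to reason step by step
    before committing to a final answer."""
    return (
        "Answer the following question. Reason through the problem "
        "step by step, then give your final answer on the last line.\n\n"
        f"Question: {question}\nLet's think step by step:"
    )

print(chain_of_thought_prompt(
    "A train travels 120 km in 2 hours. What is its average speed?"
))
```

The second template tends to elicit intermediate reasoning from an LLM before the answer, which is the core idea behind chain-of-thought prompting.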

#187 The Power of Vector Databases and Semantic Search with Elan Dekel, VP of Product at Pinecone

2024-03-11 Listen
podcast_episode
Richie (DataCamp) , Elan Dekel (Pinecone)

Generative AI is fantastic but has a major problem: sometimes it "hallucinates", meaning it makes things up. In a business product like a chatbot, this can be disastrous. Vector databases like Pinecone are one of the solutions for mitigating the problem. Vector databases are a key component of any AI application, as well as of things like enterprise search and document search. They have become an essential tool for every business, and with the rise in interest in AI over the last couple of years, the space is moving quickly. In this episode, you'll find out how to make use of vector databases, and learn about the latest developments at Pinecone. Elan Dekel is the VP of Product at Pinecone, where he oversees the development of the Pinecone vector database. He was previously Product Lead for Core Data Serving at Google, where he led teams working on the indexing systems that serve data for Google Search, YouTube search, and Google Maps. Before that, he was Founder and CEO of Medico, which was acquired by Everyday Health. In the episode, Richie and Elan explore LLMs, hallucination in generative models, vector databases and the best use cases for them, semantic search, business applications of vector databases and semantic search, the tech stack for AI applications, cost considerations when investing in AI projects, emerging roles within the AI space, the future of vector databases and AI, and much more.

Links Mentioned in the Show:
Pinecone Canopy
Pinecone Serverless
LlamaIndex
LangChain
[Code Along] Semantic Search with Pinecone
Related Episode: Expanding the Scope of Generative AI in the Enterprise with Bal Heroor, CEO and Principal at Mactores
Sign up to RADAR: The Analytics Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business
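The semantic search idea at the core of this episode reduces to: embed documents and queries as vectors, then rank by similarity. The toy sketch below illustrates only the ranking step; the bag-of-words "embedding" is a stand-in for a real trained embedding model, and nothing here is Pinecone's actual API, which additionally handles approximate nearest-neighbor indexing at scale.

```python
import math

def embed(text: str) -> dict[str, float]:
    # Toy bag-of-words "embedding" as a unit-length sparse vector.
    # Real systems use a trained embedding model instead.
    vec: dict[str, float] = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0.0) + 1.0
    norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0
    return {w: v / norm for w, v in vec.items()}

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    # Dot product of two unit vectors equals their cosine similarity.
    return sum(v * b.get(w, 0.0) for w, v in a.items())

def search(query: str, docs: list[str]) -> str:
    # Return the document most similar to the query in embedding space.
    # A vector database performs this same ranking over millions of
    # vectors, using an ANN index instead of a linear scan.
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

docs = [
    "vector databases store embeddings for semantic search",
    "chatbots sometimes hallucinate plausible-sounding facts",
    "a dashboard summarizes sales figures for the quarter",
]
print(search("semantic search over embeddings", docs))
# → vector databases store embeddings for semantic search
```

The same retrieve-then-answer pattern is what grounds LLM responses in real documents and helps mitigate hallucination.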

[AI and the Modern Data Stack] #184 Accelerating AI Workflows with Nuri Cankaya, VP of AI Marketing & La Tiffaney Santucci, AI Marketing Director at Intel

2024-02-22 Listen
podcast_episode
Nuri Cankaya (Intel) , Richie (DataCamp) , La Tiffaney Santucci (Intel)

We’ve heard so much about the value and capabilities of generative AI over the past year, and we’ve all become accustomed to the chat interfaces of our preferred models. One of the main concerns many of us have had is privacy. Is OpenAI keeping the data and information I give to ChatGPT secure? One of the touted solutions to this problem is running LLMs locally on your own machine, but given the hardware cost that comes with it, running LLMs locally has not been possible for many of us. That might now be starting to change. Nuri Cankaya is VP of AI Marketing at Intel. Prior to Intel, Nuri spent 16 years at Microsoft, starting out as a Technical Evangelist and leaving the organization as Senior Director of Product Marketing. He ran the GTM team that helped generate adoption of GPT in Microsoft Azure products. La Tiffaney Santucci is Intel’s AI Marketing Director, specializing in their Edge and Client products. La Tiffaney has spent over a decade at Intel, focusing on partnerships with Dell, Google, Amazon, and Microsoft. In the episode, Richie, Nuri and La Tiffaney explore AI’s impact on marketing analytics, the adoption of AI in the enterprise, how AI is being integrated into existing products, the workflow for implementing AI into business processes and the challenges that come with it, the importance of edge AI for instant decision-making in use cases like self-driving cars, the emergence of AI engineering as a distinct field of work, the democratization of AI, what the state of AGI might look like in the near future and much more.

About the AI and the Modern Data Stack DataFramed Series
This week we’re releasing 4 episodes focused on how AI is changing the modern data stack and the analytics profession at large. The modern data stack is often an ambiguous and all-encompassing term, so we intentionally wanted to cover the impact of AI on the modern data stack from different angles. Here’s what you can expect:
Why the Future of AI in Data will be Weird with Benn Stancil, CTO at Mode & Field CTO at ThoughtSpot — Covering how AI will change analytics workflows and tools
How Databricks is Transforming Data Warehousing and AI with Ari Kaplan, Head Evangelist & Robin Sutara, Field CTO at Databricks — Covering Databricks, data intelligence and how AI tools are changing data democratization
Adding AI to the Data Warehouse with Sridhar Ramaswamy, CEO at Snowflake — Covering Snowflake and its uses, how generative AI is changing the attitudes of leaders towards data, and how to improve your data management
Accelerating AI Workflows with Nuri Cankaya, VP of AI Marketing & La Tiffaney Santucci, AI Marketing Director at Intel — Covering AI’s impact on marketing analytics, how AI is being integrated into existing products, and the democratization of AI

Links Mentioned in the Show:
Intel OpenVINO™ toolkit
Intel Developer Clouds for Accelerated Computing
AWS Re:Invent
[Course] Implementing AI Solutions in Business
Related Episode: Intel CTO Steve Orrin on How Governments Can Navigate the Data & AI Revolution
Sign up to RADAR: The Analytics Edition

#180 How AI is Changing Cybersecurity with Brian Murphy, CEO of ReliaQuest

2024-02-12 Listen
podcast_episode
Brian Murphy (ReliaQuest)

Just as many of us have been using generative AI tools to make us more productive at work, so have bad actors. Generative AI makes it much easier to create fake yet convincing text and images that can be used to deceive and harm. We’ve already seen lots of high-profile attempts to leverage AI in phishing campaigns, and this is putting more pressure on cybersecurity teams to get ahead of the curve and combat these new forms of threats. However, AI is also helping those who work in cybersecurity to be more productive and better equip themselves to create new forms of defense and offense. Brian Murphy is a founder, CEO, entrepreneur and investor. He founded and leads ReliaQuest, the force multiplier of security operations and one of the largest and fastest-growing companies in the global cybersecurity market. ReliaQuest increases visibility, reduces complexity, and manages risk with its cloud-native security operations platform, GreyMatter. Murphy grew ReliaQuest from a bootstrapped startup to a high-growth unicorn with a valuation of over $1 billion, more than 1,000 team members, and more than $350 million in growth equity from firms such as FTV Capital and KKR Growth. In the full episode, Adel and Brian cover the evolution of cybersecurity tools, the challenges faced by cybersecurity teams, types of cyber threats, how generative AI can be used both defensively and offensively in cybersecurity, how generative AI tools are making cybersecurity professionals more productive, the evolving role of cybersecurity professionals, the security implications of deploying AI models, the regulatory landscape for AI in cybersecurity and much more.

Links Mentioned in the Show:
ReliaQuest
ReliaQuest Blog
IBM finds that ChatGPT can generate phishing emails nearly as convincing as a human
Information Sharing and Analysis Centers (ISACs)
[Course] Introduction to Data Security
Related Episode: Data Security in the Age of AI with Bart Vandekerckhove, Co-founder at Raito
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

#176 Data Trends & Predictions 2024 with DataCamp's CEO & COO, Jo Cornelissen & Martijn Theuwissen

2024-01-25 Listen
podcast_episode
Martijn Theuwissen (DataCamp) , Richie (DataCamp) , Jo Cornelissen (DataCamp)

2023 was a huge year for data and AI. Everyone who didn't live under a rock started using generative AI, and much was teased by companies like OpenAI, Microsoft, Google and Meta. We saw the millions of different use cases generative AI could be applied to, as well as the iterations we could expect from the AI space, such as connected multi-modal models, LLMs in mobile devices and formal legislation. But what has this meant for DataCamp? What will we do to facilitate learners and organizations around the world in staying ahead of the curve? In this special episode of DataFramed, we sit down with DataCamp Co-Founders Jo Cornelissen, Chief Executive Officer, and Martijn Theuwissen, Chief Operating Officer, to discuss their expectations for data & AI in 2024. In the episode, Richie, Jo and Martijn discuss generative AI's mainstream impact in 2023, the broad use cases of generative AI and skills required to utilize it effectively, trends in AI and software development, how the programming languages for data are evolving, new roles in data & AI, the job market and skill development in data science and their predictions for 2024.

Links Mentioned in the Show:
Free course - Become an AI Developer
Webinar - Data & AI Trends & Predictions 2024
Courses:
Artificial Intelligence (AI) Strategy
Generative AI for Business
Implementing AI Solutions in Business
AI Ethics

#168 Causal AI in Business with Paul Hünermund, Assistant Professor, Copenhagen Business School

2023-12-18 Listen
podcast_episode
Paul Hünermund (Copenhagen Business School) , Richie (DataCamp)

There are a few caveats to using generative AI tools, and those caveats have led to a few tips that have quickly become second nature to those who use LLMs like ChatGPT. The main one: have the domain knowledge to validate the output in order to avoid hallucinations. Hallucinations are one of the weak spots of LLMs due to the way they are built: they are trained to correlate data in order to predict what might come next in an incomplete sequence. Does this mean that we’ll always have to be wary of the output of AI products, with the expectation that there is no intelligent decision-making going on under the hood? Far from it. Causal AI is bound by reason—rather than looking at correlation, these exciting systems are able to focus on the underlying causal mechanisms and relationships. As the AI field rapidly evolves, Causal AI is an area of research that is likely to have a major impact on a huge number of industries and problems. Paul Hünermund is an Assistant Professor of Strategy and Innovation at Copenhagen Business School. In his research, Dr. Hünermund studies how firms can leverage new technologies in the space of machine learning and artificial intelligence, such as Causal AI, for value creation and competitive advantage. His work explores the potential for biases in organizational decision-making and ways for managers to counter them. It thereby sheds light on the origins of effective business strategies in markets characterized by a high degree of technological competition, and the resulting implications for economic growth and environmental sustainability. His work has been published in The Journal of Management Studies, the Econometrics Journal, Research Policy, Journal of Product Innovation Management, International Journal of Industrial Organization, MIT Sloan Management Review, and Harvard Business Review, among others.
In the full episode, Richie and Paul explore Causal AI, its differences when compared to other forms of AI, use cases of Causal AI in fields like drug development, marketing, manufacturing, and defense. They also discuss how Causal AI contributes to better decision-making, the role of domain experts in getting accurate results, what happens in the early stages of Causal AI adoption, exciting new developments within the Causal AI space and much more.

Links Mentioned in the Show:
Causal Data Science in Business
Causal AI by causaLens
Intro to Causal AI Using the DoWhy Library in Python
Lesson: Inference (causal) models
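The correlation-versus-causation gap at the heart of Causal AI can be made concrete with a few lines of simulation. In this sketch (the sunshine/ice-cream/sunburn framing is a standard textbook example, not taken from the episode), a hidden confounder z drives both x and y: x and y correlate strongly even though neither causes the other, and the correlation largely vanishes once z is held approximately fixed.

```python
import random

random.seed(42)

# z is a hidden common cause (e.g. sunshine); x and y each depend on z
# plus independent noise, but x has no causal effect on y.
data = []
for _ in range(20_000):
    z = random.random()               # confounder
    x = z + random.gauss(0, 0.1)      # e.g. ice cream sales
    y = z + random.gauss(0, 0.1)      # e.g. sunburn cases
    data.append((x, y, z))

def corr(pairs):
    # Pearson correlation, computed from scratch.
    xs, ys = zip(*pairs)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in pairs) / n
    sx = (sum((a - mx) ** 2 for a in xs) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in ys) / n) ** 0.5
    return cov / (sx * sy)

naive = corr([(x, y) for x, y, _ in data])
adjusted = corr([(x, y) for x, y, z in data if 0.45 < z < 0.55])
print(f"naive correlation: {naive:.2f}")   # strong, but spurious
print(f"adjusted for z:    {adjusted:.2f}")  # near zero
```

Stratifying on the confounder is the simplest form of causal adjustment; libraries such as DoWhy (see the linked course) formalize this with explicit causal graphs rather than manual stratification.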

#167 What to Expect from AI in 2024 with Craig S. Smith, Host of the Eye on A.I Podcast

2023-12-11 Listen
podcast_episode
Craig S. Smith (The New York Times; Wall Street Journal) , Richie (DataCamp)

Over the past year, we’ve seen a full hype cycle of hysteria and discourse surrounding generative AI. It almost seems difficult to think back to a time when no one had used ChatGPT. We are in the midst of the fourth industrial revolution, and technology is moving rapidly. Better performing and more capable models are being released at a stunning rate, and with the growing presence of multimodal AI, can we expect another whirlwind year that vastly changes the state of play within AI again? Who might be able to provide insight into what is to come in 2024? Craig S. Smith is an American journalist, former executive of The New York Times, and host of the podcast Eye on AI. Until January 2000, he wrote for The Wall Street Journal, most notably covering the rise of the religious movement Falun Gong in China. He has reported for the Times from more than 40 countries and has covered several conflicts, including the 2001 invasion of Afghanistan, the 2003 war in Iraq, and the 2006 Israeli-Lebanese war. He retired from the Times in 2018 and now writes about artificial intelligence for the Times and other publications. He was a special Government employee for the National Security Commission on Artificial Intelligence until the commission's end in October 2021. In the episode, Richie and Craig explore the 2023 advancements in generative AI, such as GPT-4, and the evolving roles of companies like Anthropic and Meta, practical AI applications for research and image generation, challenges in large language models, the promising future of world models and AI agents, the societal impacts of AI, the issue of misinformation, computational constraints, and the importance of AI literacy in the job market, the transformative potential of AI in various sectors and much more.

Links Mentioned in the Show:
Eye on AI
Wayve
Anthropic
Cohere
Midjourney
Yann LeCun

#158 Building Human-Centered AI Experiences with Haris Butt, Head of Product Design at ClickUp

2023-10-09 Listen
podcast_episode
Adel (DataFramed) , Haris Butt (ClickUp)

In today's AI landscape, organizations are actively exploring how to seamlessly embed AI into their products, systems, processes, and workflows. The success of ChatGPT stands as a testament to this. Its success is not solely due to the performance of the underlying model; a significant part of its appeal lies in its human-centered user experience, particularly its chat interface. Beyond the foundational skills, infrastructure, and tools, it's clear that great design is a crucial ingredient in building memorable AI experiences. How do you build human-centered AI experiences? What is the role of design in driving successful AI implementations? How can data leaders and practitioners adopt a design lens when building with AI? Here to answer these questions is Haris Butt, Head of Product Design at ClickUp. ClickUp is a project management tool that's been making a big bet on AI, and Haris plays a key role in shaping how AI is embedded within the platform. Throughout the episode, Adel & Haris spoke about the role of design in driving human-centered AI experiences, the iterative process of designing with large language models, how to design AI experiences that promote trust, how designing for AI differs from traditional software, whether good design will ultimately end up killing prompt engineering, and a lot more.

#157 Is AI an Existential Risk? With Trond Arne Undheim, Research Scholar in Global Systemic Risk at Stanford University

2023-10-02 Listen
podcast_episode
Trond Arne Undheim (Stanford University)

It's been almost a year since ChatGPT was released, mainstreaming AI into the collective consciousness in the process. Since that moment, we've seen a spirited debate emerge within the data & AI communities, and in public discourse at large. The focal point of this debate is whether AI is, or will lead to, an existential risk for the human species. We've seen thinkers such as Eliezer Yudkowsky, Yuval Noah Harari, and others sound the alarm bell on how AI is as dangerous, if not more dangerous, than nuclear weapons. We've also seen AI researchers and business leaders sign petitions and lobby government for strict regulation of AI. On the flip side, we've also seen luminaries within the field, such as Andrew Ng and Yann LeCun, calling for, and not against, the proliferation of open-source AI. So how do we maneuver this debate, and where does the risk spectrum actually lie with AI? More importantly, how can we contextualize the risk of AI against other systemic risks humankind faces, such as climate change and the risk of nuclear war? How can we regulate AI without falling into the trap of regulatory capture—where a select and mighty few benefit from regulation, drowning out the competition in the meantime? Trond Arne Undheim is a Research Scholar in Global Systemic Risk, Innovation, and Policy at Stanford University, Venture Partner at Antler, and CEO and co-founder of Yegii, an insight network with experts and knowledge assets on disruption. He is a nonresident Fellow at the Atlantic Council with a portfolio in artificial intelligence, future of work, data ethics, emerging technologies, and entrepreneurship. He is a former director of MIT Startup Exchange and has helped launch over 50 startups. In a previous life, he was an MIT Sloan School of Management Senior Lecturer, WPP Oracle Executive, and EU National Expert.
In this episode, Trond and Adel explore the multifaceted risks associated with AI, the cascading-risks lens, and the debate over the likelihood of runaway AI. Trond shares the role of governments and organizations in shaping AI's future, the need for both global and regional regulatory frameworks, and the importance of educating decision-makers on AI's complexities. Trond also shares his opinion on the contrasting philosophies behind open- and closed-source AI technologies, the risk of regulatory capture, and more.

Links mentioned in the show:
Augmented Lean: A Human-Centric Framework for Managing Frontline Operations by Trond Arne Undheim & Natan Linder
Future Tech: How to Capture Value from Disruptive Industry Trends by Trond Arne Undheim
Futurized Podcast
Stanford Cascading Risk Study
Course: AI Ethics

#156 Making Better Decisions using Data & AI with Cassie Kozyrkov, Google's First Chief Decision Scientist

2023-09-25 Listen
podcast_episode
Richie (DataCamp) , Cassie Kozyrkov (Google)

From the dawn of humanity, decisions, both big and small, have shaped our trajectory. Decisions have built civilizations, forged alliances, and even charted the course of our very evolution. And now, as data & AI become more widespread, the potential upside for better decision making is massive. Yet, like any technology, the true value of data & AI is realized by how we wield it. We're often drawn to the allure of the latest tools and techniques, but it's crucial to remember that these tools are only as effective as the decisions we make with them. ChatGPT is only as good as the prompt you decide to feed it and what you decide to do with the output. A dashboard is only as good as the decisions that it influences. Even a data science team is only as effective as the value they deliver to the organization. So in this vast landscape of data and AI, how can we master the art of better decision making? How can we bridge data & AI with better decision intelligence? Cassie Kozyrkov founded the field of Decision Intelligence at Google where, until recently, she served as Chief Decision Scientist, advising leadership on decision process, AI strategy, and building data-driven organizations. Upon leaving Google, Cassie started her own company, Data Scientific, of which she is the CEO. In her almost 10 years at Google, Cassie personally trained over 20,000 Googlers in data-driven decision-making and AI and helped over 500 projects implement decision intelligence best practices. Cassie also previously served in Google's Office of the CTO as Chief Data Scientist, and the rest of her 20 years of experience was split between consulting, data science, lecturing, and academia. Cassie is a top keynote speaker and a beloved personality in the data leadership community, followed by over half a million tech professionals.
If you've ever gone on a reading spree about AI, statistics, or decision-making, chances are you've encountered her writing, which has reached millions of readers. In the episode, Cassie and Richie explore misconceptions around data science, stereotypes associated with being a data scientist, what the reality of working in data science is, advice for those starting their career in data science, and the challenges of being a data ‘jack-of-all-trades’. Cassie also shares what decision science and decision intelligence are, what questions to ask future employers in any data science interview, the importance of collaboration between decision-makers and domain experts, the differences between data science models and their real-world implementations, the pros and cons of generative AI in data science, and much more.

Links mentioned in the Show:
Data Scientist: The Sexiest Job of the 22nd Century
The Netflix Prize
AI Products: Kitchen Analogy
Type One, Two & Three Errors in Statistics
Course: Data-Driven Decision Making for Business
Radar: Data & AI Literacy

#154 Building Ethical Machines with Reid Blackman, Founder & CEO at Virtue Consultants

2023-09-11 Listen
podcast_episode
Reid Blackman (Virtue)

It's been a year since ChatGPT burst onto the scene. It has given many of us a sense of the power and potential that LLMs hold in revolutionizing the global economy. But the power that generative AI brings also comes with inherent risks that need to be mitigated.  For those working in AI, the task at hand is monumental: to chart a safe and ethical course for the deployment and use of artificial intelligence. This isn't just a challenge; it's potentially one of the most important collective efforts of this decade. The stakes are high, involving not just technical and business considerations, but ethical and societal ones as well. How do we ensure that AI systems are designed responsibly? How do we mitigate risks such as bias, privacy violations, and the potential for misuse? How do we assemble the right multidisciplinary mindset and expertise for addressing AI safety?  Reid Blackman, Ph.D., is the author of “Ethical Machines” (Harvard Business Review Press), creator and host of the podcast “Ethical Machines,” and Founder and CEO of Virtue, a digital ethical risk consultancy. He is also an advisor to the Canadian government on their federal AI regulations, was a founding member of EY’s AI Advisory Board, and a Senior Advisor to the Deloitte AI Institute. His work, which includes advising and speaking to organizations including AWS, US Bank, the FBI, NASA, and the World Economic Forum, has been profiled by The Wall Street Journal, the BBC, and Forbes. His written work appears in The Harvard Business Review and The New York Times. Prior to founding Virtue, Reid was a professor of philosophy at Colgate University and UNC-Chapel Hill. In the episode, Reid and Richie discuss the dominant concerns in AI ethics, from biased AI and privacy violations to the challenges introduced by generative AI, such as manipulative agents and IP issues. They delve into the existential threats posed by AI, including shifts in the job market and disinformation. 
Reid also shares examples where unethical AI has led to AI projects being scrapped, the difficulty of mitigating bias, preemptive measures for ethical AI, and much more.

Links mentioned in the show:
Ethical Machines by Reid Blackman
Virtue Ethics Consultancy
Amazon’s Scrapped AI Recruiting Tool
NIST AI Risk Management Framework
Course: AI Ethics
DataCamp Radar: Data & AI Literacy

#153 From Data Literacy to AI Literacy with Cindi Howson, Chief Data Strategy Officer at ThoughtSpot

2023-09-04 Listen
podcast_episode
Adel (DataFramed) , Cindi Howson (ThoughtSpot)

For the past few years, we've seen the importance of data literacy and why organizations must invest in a data-driven culture, mindset, and skillset. However, as generative AI tools like ChatGPT have risen to prominence in the past year, AI literacy has never been more important. But how do we begin to approach AI literacy? Is it an extension of data literacy, a complement, or a new paradigm altogether? How should you get started on your AI literacy ambitions?  Cindi Howson is the Chief Data Strategy Officer at ThoughtSpot and host of The Data Chief podcast. Cindi is a data analytics, AI, and BI thought leader and an expert with a flair for bridging business needs with technology. As Chief Data Strategy Officer at ThoughtSpot, she advises top clients on data strategy and best practices to become data-driven, speaks internationally on top trends such as AI ethics, and influences ThoughtSpot’s product strategy.

Cindi was previously a Gartner Research Vice President, the lead author for the data and analytics maturity model and analytics and BI Magic Quadrant, and a popular keynote speaker. She introduced new research in data and AI for good, NLP/BI Search, and augmented analytics, bringing both BI bake-offs and innovation panels to Gartner globally. She’s frequently quoted in MIT, Harvard Business Review, and Information Week. She is rated a top 12 influencer in big data and analytics by Analytics Insight, Onalytca, Solutions Review, and Humans of Data.

In the episode, Cindi and Adel discuss how generative AI accelerates an organization’s data literacy, how leaders can think beyond data literacy and start to think about AI literacy, the importance of responsible use of AI, how to best communicate the value of AI within your organization, what generative AI means for data teams, AI use-cases in the data space, the psychological barriers blocking AI adoption, and much more. 

Links Mentioned in the Show:
The Data Chief Podcast
ThoughtSpot Sage
BloombergGPT
Radar: Data & AI Literacy
Course: AI Ethics
Course: Generative AI Concepts
Course: Implementing AI Solutions in Business

#149 Expanding the Scope of Generative AI in the Enterprise with Bal Heroor, CEO and Principal at Mactores

2023-08-07 Listen
podcast_episode
Bal Heroor (Mactores) , Richie (DataCamp)

Generative AI is here to stay—even in the 8 months since the public release of ChatGPT, an abundance of AI tools has emerged to help make us more productive at work and to ease the stress of planning and executing our daily lives, among other things.  Already, many of us are wondering what is to come in the next 8 months, the next year, and the next decade of AI’s evolution. In the grand scheme of things, this really is just the beginning. But what should we expect in this Cambrian explosion of technology? What are the use cases being developed behind the scenes? What do we need to be mindful of when training the next generations of AI? Can we combine multiple LLMs to get better results? Bal Heroor is CEO and Principal at Mactores and has led over 150 business transformations driven by analytics and cutting-edge technology. His team at Mactores is researching and building AI, AR/VR, and quantum computing solutions that give businesses a competitive advantage. Bal is also the Co-Founder of Aedeon—the first hyper-scale marketplace for data analytics and AI talent. In the episode, Richie and Bal explore common use cases for generative AI, how it's evolving to solve enterprise problems, the challenges of data governance, the importance of explainable AI, and the difficulty of tracking the lineage of AI and data in large organizations. Bal also touches on the shift from general-purpose generative AI models to more specialized models, fascinating use cases in the manufacturing industry, what to consider when adopting AI solutions in business, and much more.

Links mentioned in the show:
Pulsar
Trifacta
AWS Clarify
[Course] Introduction to ChatGPT
[Course] Implementing AI Solutions in Business
[Course] Generative AI Concepts

#148 Why AI is Eating the World with Daniel Jeffries, Managing Director at AI Infrastructure Alliance

2023-07-31 Listen
podcast_episode
Daniel Jeffries (AI Infrastructure Alliance) , Adel (DataFramed)

'Software is eating the world’ is a truism coined by Marc Andreessen, General Partner at Andreessen Horowitz. This was especially evident during the shift from analog mediums to digital at the turn of the century, as software companies usurped and replaced their non-digital predecessors: Amazon became the largest bookseller, Netflix the largest movie "rental" service, and Spotify and Apple the largest music providers. Today, AI is starting to eat the world. However, we are still in the early days of the AI revolution, with AI set to become embedded in almost every piece of software we interact with. An AI ecosystem that touches every aspect of our lives is what today’s guest describes as ‘Ambient AI’. But what can we expect from this ramp-up to Ambient AI? How will it change the way we work? What do we need to be mindful of as we develop this technology? Daniel Jeffries is the Managing Director of the AI Infrastructure Alliance and former CIO at Stability AI, the company responsible for Stable Diffusion, the popular open-source image generation model. He’s also an author, engineer, futurist, and pro blogger, and he’s given talks all over the world on AI and cryptographic platforms. In the episode, Adel and Daniel discuss how to define ambient AI, how our relationship with work will evolve as we become more reliant on AI, what the AI ecosystem is missing to rapidly scale adoption, why we need to accelerate the maturity of the open-source AI ecosystem, how AI existential risk discourse takes focus away from real AI risk, and much more.

Links Mentioned in the Show:
Daniel’s Writing on Medium
Daniel’s Substack
AI Infrastructure Alliance
Stability AI
Francois Chollet
Red Pajama Dataset
Run AI
Will Superintelligent AI End the World? by Eliezer Yudkowsky
Nick Bostrom’s Paper Clip Maximizer
The Pessimist Archive
[Course] Introduction to ChatGPT
[Course] Implementing AI Solutions in Business

#147 The Past, Present & Future of Generative AI—With Joanne Chen, General Partner at Foundation Capital

2023-07-24 Listen
podcast_episode
Joanne Chen (Foundation Capital) , Richie (DataCamp)

In a time when AI is evolving at breakneck speeds, taking a step back and gaining a bird's-eye view of the evolving AI ecosystem is paramount to understanding where the field is headed. With this bird's-eye view comes a series of questions. Which trends will dominate generative AI in the foreseeable future? What are the truly transformative use cases that will reshape our business landscape? What does the skills economy look like in an age of hyperintelligence? Enter Joanne Chen, General Partner at Foundation Capital. Joanne invests in early-stage AI-first B2B applications and data platforms that are the building blocks of the automated enterprise. She has shared her learnings as a featured speaker at conferences including CES, SXSW, and WebSummit, and has spoken about the impact of AI on society in her TED talk titled "Confessions of an AI Investor." Joanne began her career as an engineer at Cisco Systems and later co-founded a mobile gaming company. She also spent many years working on Wall Street at Jefferies & Company, helping tech companies go through the IPO and M&A processes, and at Probitas Partners, advising venture firms on their fundraising process. Throughout the episode, Richie and Joanne cover emerging trends in generative AI, business use cases that have emerged in the past year since the advent of tools like ChatGPT, the role of AI in augmenting work, the ever-changing job market and AI's impact on it, as well as actionable insights for individuals and organizations wanting to adopt AI.

Links mentioned in the show:
JasperAI
AnyScale
Cerebras
[Course] Introduction to ChatGPT
[Course] Implementing AI Solutions in Business
[Course] Generative AI Concepts

#145 Why AI will Change Everything—with Former Snowflake CEO, Bob Muglia

2023-07-10 Listen
podcast_episode
Bob Muglia (Snowflake; Microsoft) , Richie (DataCamp)

Data and AI are advancing at an unprecedented rate—and while the jury is still out on achieving superintelligent AI systems, the idea of artificial intelligence that can understand and learn anything—an “artificial general intelligence”—is becoming more likely. What does the rise of AI mean for the future of software and work as we know it? How will AI help reinvent most of the ways we interact with the digital and physical world? Bob Muglia is a data technology investor and business executive, former CEO of Snowflake, and past president of Microsoft's Server and Tools Division. As a leader in data & AI, Bob focuses on how innovation and ethical values can merge to shape the data economy's future in the era of AI. He serves as a board director for emerging companies that seek to maximize the power of data to help solve some of the world's most challenging problems. In the episode, Richie and Bob explore the current era of AI and what it means for the future of software. Throughout the episode, they discuss how to approach driving value with large language models, the main challenges organizations face when deploying AI systems, the risks and rewards of fine-tuning LLMs for specific use cases, what the next 12 to 18 months hold for the burgeoning AI ecosystem, the likelihood of superintelligence within our lifetimes, and more.

Links from the show:
The Datapreneurs by Bob Muglia and Steve Hamm
The Singularity is Near by Ray Kurzweil
Isaac Asimov
Snowflake
Pinecone
Docugami
OpenAI/GPT-4
The Modern Data Stack

#142 Is Data Science Still the Sexiest Job of the 21st Century?

2023-06-19 Listen
podcast_episode
Thomas Davenport (Babson College)

About 10 years ago, Thomas Davenport & DJ Patil published the article "Data Scientist: The Sexiest Job of the 21st Century" in the Harvard Business Review. In this piece, they described the burgeoning role of the data scientist and what it would mean for organizations and individuals in the coming decade. As time has passed, data science has become increasingly institutionalized. Once seen as a luxury, it is now deemed a necessity in every modern boardroom. Moreover, as technologies like AI and systems like ChatGPT keep astonishing us with their capabilities in handling data science tasks, it raises a pertinent question: Is Data Science Still the Sexiest Job of the 21st Century? In this episode, we invited Thomas Davenport on the show to share his perspective on where data science & AI are today, and where they are headed. Thomas Davenport is the President’s Distinguished Professor of Information Technology and Management at Babson College, the co-founder of the International Institute for Analytics, a Fellow of the MIT Initiative for the Digital Economy, and a Senior Advisor to Deloitte Analytics. He has written or edited twenty books and over 250 print or digital articles for Harvard Business Review (HBR), Sloan Management Review, the Financial Times, and many other publications. One of HBR’s most frequently published authors, Thomas has been at the forefront of the Process Innovation, Knowledge Management, and Analytics and Big Data movements. He pioneered the concept of “competing on analytics” with his 2006 Harvard Business Review article and his 2007 book by the same name. Since then, he has continued to provide cutting-edge insights on how companies can use analytics and big data to their advantage, and later on artificial intelligence.
Throughout the episode, we discuss how data science has changed since he first published his article, how it has become more institutionalized, how data leaders can drive value with data science, the importance of data culture, his views on AI and where he thinks it's headed, and a lot more.

Links from the Show:
Working with AI by Thomas Davenport
The AI Advantage: How to Put the Artificial Intelligence Revolution to Work by Thomas Davenport
Harvard Business Review
New Vantage Partners
CCC Intelligent Solutions
Radar AI

[DataFramed AI Series #4] Building AI Products with ChatGPT

2023-05-11 Listen
podcast_episode
Joaquin Marques (Kanayma LLC)

Although many have become cognizant of AI’s value in recent months, the further back we look, the more exclusive this group of people becomes. In our latest AI-series episode of DataFramed, we gain insight from an expert who has been part of the industry for 40 years. Joaquin Marques, Founder and Principal Data Scientist at Kanayma LLC, has been working in AI since 1983. With experience at major tech companies like IBM, Verizon, and Oracle, Joaquin's knowledge of AI is vast. Today, he leads an AI consultancy, Kanayma, where he creates innovative AI products. Throughout the episode, Joaquin shares his insights on AI's development over the years, its current state, and future possibilities. He also discusses the exciting projects they've worked on at Kanayma, what to consider when building AI products, and how ChatGPT is making chatbots better. Joaquin goes beyond surface-level insight, encouraging listeners to think about the practical consequences of implementing AI and sharing the finer technical details of many of the solutions he’s helped build. He also describes the thought processes that have guided him when building AI products, providing context on practical applications of AI, both from his past and at the bleeding edge of today. The discussion examines the complexities of artificial intelligence from the perspective of someone who has focused on this technology longer than most. Tune in for guidance on how to build AI into your own company's products.

[DataFramed AI Series #3] GPT and Generative AI for Data Teams

2023-05-10 Listen
podcast_episode
Sarah Schlobohm (Kubrick Group)

With the advances in AI products and the explosion of ChatGPT in recent months, it is becoming easier to imagine a world where AI and humans work seamlessly together—revolutionizing how we solve complex problems and transform our daily lives. This is especially the case for data professionals. In this episode of our AI series, we speak to Sarah Schlobohm, Head of AI at Kubrick Group. Dr. Schlobohm leads the training of the next generation of machine learning engineers. With a background in finance and consulting, Sarah has a deep understanding of the intersection between business strategy, data science, and AI. Prior to her work in finance, Sarah became a chartered accountant, where she honed her skills in financial analysis and strategy. Sarah worked for one of the world's largest banks, where she used data science to fight financial crime, making significant contributions to the industry's efforts to combat money laundering and other illicit activities. Sarah shares her extensive knowledge on incorporating AI within data teams for maximum impact, covering a wide array of AI-related topics, including upskilling, productivity, and communication, to help data professionals understand how to integrate generative AI effectively in their daily work. Throughout the episode, Sarah explores the challenges and risks of AI integration, touching on the balance between privacy and utility. She highlights the risks data teams can avoid when using AI products and how to approach using AI products the right way. She also covers how different roles within a data team might make use of generative AI, as well as how it might affect coding ability going forward. Sarah also shares use cases for those in non-data teams, such as marketing, while highlighting what to consider when using outputs from GPT models. She discusses the impact chatbots might have on education, calling attention to the power of AI tutors in schools.
Sarah encourages people to start using AI now, given how low the barrier to entry is, noting that this may not remain the case going forward. From automating mundane tasks to enabling human-AI collaboration that makes work more enjoyable, Sarah underscores the transformative power of AI in shaping the future of humanity. Whether you're an AI enthusiast, a data professional, or someone with an interest in either, this episode will provide you with a deeper understanding of the practical aspects of AI implementation.

[DataFramed AI Series #2] How Organizations can Leverage ChatGPT

2023-05-09 Listen
podcast_episode
Noelle Silver Russell (Accenture)

With the advent of any new technology that promises to make human lives easier, replacing conscious actions with automation, there is always backlash. People worry about the displacement of jobs, and the technology is often viewed in a negative light. But how do we try to change the collective understanding to one of hope and excitement? What use cases can be shared that will change the opinion of those who are wary of AI?  Noelle Silver Russell is the Global AI Solutions & Generative AI & LLM Industry Lead at Accenture, responsible for enterprise-scale industry playbooks for generative AI and LLMs. In this episode of our AI series, Noelle discusses how to prioritize ChatGPT use cases by focusing on the different aspects of value creation that GPT models can bring to individuals and organizations. She addresses common misconceptions surrounding ChatGPT and AI in general, emphasizing the importance of understanding their potential benefits and selecting use cases that maximize positive impact, foster innovation, and contribute to job creation. Noelle draws parallels between the fast-moving AI projects of today and the launch of Amazon Alexa, which she worked on, and points out that many of the discussions being raised today were also happening 10 years ago. She discusses how companies can now use AI to focus both on business efficiencies and customer experience, no longer having to settle for a trade-off between the two. Noelle explains the best way for companies to approach adding GPT tools into their processes, which focuses on taking a holistic view of implementation. She also recommends use cases for companies that are just beginning to use AI, as well as the challenges they might face when deploying models into production, and how they can mitigate them.
On the topic of the displacement of jobs, Noelle draws parallels to the launch of Alexa, which faced similar criticisms, digging into the fear people have around new technology and how it could be transformed into enthusiasm. Noelle suggests that there is a burden on leadership within organizations to create a culture where people are excited to use AI tools, rather than feeling threatened by them.

[DataFramed AI Series #1] ChatGPT and the OpenAI Developer Ecosystem

2023-05-08 Listen
podcast_episode
Logan Kilpatrick (OpenAI)

ChatGPT has leaped to the forefront of our lives—everyone from students to multinational organizations is seeing value in adding a chat interface to an LLM. But OpenAI has been concentrating on this for years, steadily developing one of the most viral digital products of this century. In this episode of our AI series, we sit down with Logan Kilpatrick. Logan currently leads developer relations at OpenAI, supporting developers building with DALL-E, the OpenAI API, and ChatGPT. Logan takes us through OpenAI’s products, API, and models, and provides insights into the many use cases of ChatGPT.  Logan provides fascinating information on ChatGPT’s plugins and how they can be used to build agents that help us in a variety of contexts. He also discusses the future integration of LLMs into our daily lives and how they will add structure to the unstructured, difficult-to-leverage data we generate and interact with on a daily basis. Logan also touches on the powerful image input features in GPT-4, how they can help those with partial sight improve their quality of life, and how they can be used for various other use cases. Throughout the episode, we unpack the need for collaboration and innovation, since ChatGPT becomes more powerful when integrated with other pieces of software, and we cover key discussion points on today's AI tools—in particular, what could be built in-house by OpenAI and what could be built in the public domain. Logan also discusses the ecosystem forming around ChatGPT and how it will all become connected going forward. Finally, Logan shares tips for getting better responses from ChatGPT and the things to consider when integrating it into your organization’s product.  This episode provides a deep dive into the world of GPT models from within the eye of the storm, providing valuable insights to those interested in AI and its practical applications in our daily lives.