talk-data.com

Topic: Large Language Models (LLM)
Tags: nlp, ai, machine_learning
56 tagged activities

Activity Trend: 158 peak/qtr (2020-Q1 to 2026-Q1)

Activities
Showing filtered results (filtering by: DataFramed)

As AI becomes more accessible, a growing question is: should machine learning experts always be the ones training models, or is there a better way to leverage other subject matter experts in the business who know the use case best? What if getting started building AI apps required no coding skills? As businesses look to implement AI at scale, what part can no-code AI apps play in getting projects off the ground, and how feasible are smaller, tailored solutions for department-specific use cases? Birago Jones is the CEO at Pienso, an AI platform that empowers subject matter experts in various enterprises, such as business analysts, to create and fine-tune AI models without coding skills. Prior to Pienso, Birago was a Venture Partner at Indicator Ventures and a Research Assistant at MIT Media Lab, where he also founded the Media Lab Alumni Association. Karthik Dinakar is a computer scientist specializing in machine learning, natural language processing, and human-computer interaction. He is the Chief Technology Officer and co-founder at Pienso. Prior to founding Pienso, Karthik held positions at Microsoft and Deutsche Bank. Karthik holds a doctoral degree in Machine Learning from MIT. In the episode, Richie, Birago and Karthik explore why no-code AI apps are becoming more prominent, use cases of no-code AI apps, the steps involved in creating an LLM, the benefits of small tailored models, how no-code can impact workflows, cost in AI projects, AI interfaces and the rise of the chat interface, privacy and customization, excitement about the future of AI, and much more.

Links Mentioned in the Show:
Pienso
Google Gemini for Business
Connect with Birago and Karthik
Andreessen Horowitz Report: Navigating the High Cost of AI Compute
Course: Artificial Intelligence (AI) Strategy
Related Episode: Designing AI Applications with Robb Wilson, Co-Founder & CEO at Onereach.ai
Rewatch sessions from RADAR: AI Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

Lots of AI use cases start with big ideas and exciting possibilities, but turning those ideas into real results is where the challenge lies. How do you take a powerful model and make it work effectively in a specific business context? What steps are necessary to fine-tune and optimize your AI tools to deliver both performance and cost efficiency? And as AI continues to evolve, how do you stay ahead of the curve while ensuring that your solutions are scalable and sustainable? Lin Qiao is the CEO and Co-Founder of Fireworks AI. She previously worked at Meta as a Senior Director of Engineering and head of Meta's PyTorch, served as a Tech Lead at LinkedIn, and worked as a Researcher and Software Engineer at IBM. In the episode, Richie and Lin explore generative AI use cases, getting AI into products, foundational models, the effort required for and benefits of fine-tuning models, trade-offs between model sizes, use cases for smaller models, cost-effective AI deployment, the infrastructure and team required for AI product development, metrics for AI success, open- vs closed-source models, excitement for the future of AI development and much more.

Links Mentioned in the Show:
Fireworks.ai
Hugging Face - Preference Tuning LLMs with Direct Preference Optimization Methods
Connect with Lin
Course - Artificial Intelligence (AI) Strategy
Related Episode: Creating Custom LLMs with Vincent Granville, Founder, CEO & Chief AI Scientist at GenAItechLab.com
Rewatch sessions from RADAR: AI Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

Perhaps the biggest complaint about generative AI is hallucination. If the text you want to generate involves facts (for example, a chatbot that answers questions), then hallucination is a problem. The solution is a technique called retrieval augmented generation (RAG): you store facts in a vector database, retrieve the most relevant ones, and send them to the large language model to help it give accurate responses. So what goes into building vector databases, and how do they improve LLM performance so much? Ram Sriharsha is currently the CTO at Pinecone. Before this role, he was the Director of Engineering at Pinecone and previously served as Vice President of Engineering at Splunk. He also worked as a Product Manager at Databricks. With a long history in the software development industry, Ram has held positions as an architect, lead product developer, and senior software engineer at various companies. Ram is also a long-time contributor to Apache Spark. In the episode, Richie and Ram explore common use cases for vector databases, RAG in chatbots, steps to create a chatbot, static vs dynamic data, testing chatbot success, handling dynamic data, choosing language models, knowledge graphs, implementing vector databases, innovations in vector databases, the future of LLMs and much more.

Links Mentioned in the Show:
Pinecone
Webinar - Charting the Path: What the Future Holds for Generative AI
Course - Vector Databases for Embeddings with Pinecone
Related Episode: The Power of Vector Databases and Semantic Search with Elan Dekel, VP of Product at Pinecone
Rewatch sessions from RADAR: AI Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business
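To make the retrieval step described above concrete, here is a minimal sketch of RAG's retrieval-then-prompt flow. It uses plain cosine similarity over a toy corpus instead of a real vector database, and the embed function is a placeholder, not Pinecone's API; a production system would call an embedding model and a vector store.

import numpy as np

# Toy corpus of facts; in production these would live in a vector database.
documents = [
    "DataFramed is a podcast about data science and AI.",
    "Pinecone is a managed vector database for similarity search.",
    "RAG retrieves relevant facts and passes them to an LLM.",
]

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hash words into a fixed-size unit vector.
    A real system would call an embedding model instead."""
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents whose embeddings are closest to the query."""
    scores = doc_vectors @ embed(query)  # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

query = "What does a vector database do?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only these facts:\n{context}\n\nQuestion: {query}"
print(prompt)  # this grounded prompt is what gets sent to the LLM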

By now, many of us are convinced that generative AI chatbots like ChatGPT are useful at work. However, many executives are rightfully worried about the risks of having business and customer conversations recorded by AI chatbot platforms. Some privacy- and security-conscious organizations are going so far as to block these AI platforms completely. For organizations such as EY, a company that derives value from its intellectual property, leaders need to strike a balance between privacy and productivity. John Thompson runs the department for the ideation, design, development, implementation, and use of innovative Generative AI, Traditional AI, and Causal AI solutions across all of EY's service lines, operating functions, and geographies, and for EY's clients. His team has built the world's largest secure, private LLM-based chat environment. John also runs the Marketing Sciences consultancy, advising clients on monetization strategies for data. He is the author of four books on data, including "Data for All" and "Causal Artificial Intelligence". Previously, he was the Global Head of AI at CSL Behring, an Adjunct Professor at Lake Forest Graduate School of Management, and an Executive Partner at Gartner. In the episode, Richie and John explore the adoption of GenAI at EY, data privacy and security, GenAI use cases and productivity improvements, GenAI for decision making, causal AI and synthetic data, industry trends and predictions and much more.

Links Mentioned in the Show:
Azure OpenAI
Causality by Judea Pearl
[Course] AI Ethics
Related Episode: Data & AI at Tesco with Venkat Raghavan, Director of Analytics and Science at Tesco
Catch John talking about AI Maturity this September
Rewatch sessions from RADAR: AI Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

Meta has been at the absolute edge of the open-source AI ecosystem, and with the recent release of Llama 3.1, they have officially created the largest open-source model to date. So, what's the secret behind the performance gains of Llama 3.1? What will the future of open-source AI look like? Thomas Scialom is a Senior Staff Research Scientist (LLMs) at Meta AI and one of the co-creators of the Llama family of models. Prior to joining Meta, Thomas worked as a Teacher, Lecturer, Speaker and Quant Trading Researcher. In the episode, Adel and Thomas explore Llama 3.1 405B, its new features and improved performance, the challenges in training LLMs, best practices for training LLMs, pre- and post-training processes, the future of LLMs and AI, open- vs closed-source models, the GenAI landscape, scalability of AI models, current research and future trends and much more.

Links Mentioned in the Show:
Meta - Introducing Llama 3.1: Our most capable models to date
Download the Llama Models
[Course] Working with Llama 3
[Skill Track] Developing AI Applications
Related Episode: Creating Custom LLMs with Vincent Granville, Founder, CEO & Chief AI Scientist at GenAItechLab.com
Rewatch sessions from RADAR: AI Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

This special episode of DataFramed was made in collaboration with Analytics on Fire! Nowadays, the hype around generative AI is only the tip of the iceberg. There are so many ideas being touted as the next big thing that it's difficult to keep up. More importantly, it's challenging to discern which ideas will become the next ChatGPT and which will end up like the next NFT. How do we cut through the noise? Mico Yuk is the Community Manager at Acryl Data and Co-Founder at Data Storytelling Academy. Mico is also an SAP Mentor Alumni and the Founder of the popular weblog Everything Xcelsius and the 'Xcelsius Gurus' Network. She was named one of the Top 50 Analytics Bloggers to follow, and is a highly regarded BI influencer and sought-after global keynote speaker in the analytics ecosystem. In the episode, Richie and Mico explore AI and productivity at work, the future of work and AI, GenAI and data roles, AI for training and learning, training at scale, decision intelligence, soft skills for data professionals, GenAI hype and much more.

Links Mentioned in the Show:
Analytics on Fire Podcast
Data Visualization for Dummies by Mico Yuk and Stephanie Diamond
Connect with Mico
[Skill Track] AI Fundamentals
Related Episode: What to Expect from AI in 2024 with Craig S. Smith, Host of the Eye on AI Podcast
Rewatch sessions from RADAR: AI Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

Despite GPT, Claude, Gemini, Llama and the host of other LLMs that we have access to, a variety of organizations are still exploring their options when it comes to custom LLMs. Logging in to ChatGPT is easy enough, and so is creating a 'custom' OpenAI GPT, but what does it take to create a truly custom LLM? When and why might this be useful, and will it be worth the effort? Vincent Granville is a pioneer in the AI and machine learning space. He is Co-Founder of Data Science Central, Founder of MLTechniques.com, a former VC-funded executive, author, and patent owner. Vincent's corporate experience includes Visa, Wells Fargo, eBay, NBC, Microsoft, and CNET. He is also a former post-doc at Cambridge University and the National Institute of Statistical Sciences. Vincent has published in the Journal of Number Theory, Journal of the Royal Statistical Society, and IEEE Transactions on Pattern Analysis and Machine Intelligence. He is the author of multiple books, including "Synthetic Data and Generative AI". In the episode, Richie and Vincent explore why you might want to create a custom LLM, including issues with standard LLMs and benefits of custom LLMs, the development and features of custom LLMs, architecture and technical details, corporate use cases, technical innovations, ethics and legal considerations, and much more.

Links Mentioned in the Show:
Read Articles by Vincent
Synthetic Data and Generative AI by Vincent Granville
Connect with Vincent on LinkedIn
[Course] Developing LLM Applications with LangChain
Related Episode: The Power of Vector Databases and Semantic Search with Elan Dekel, VP of Product at Pinecone
Rewatch sessions from RADAR: AI Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

From data science to software engineering, Large Language Models (LLMs) have emerged as pivotal tools in shaping the future of programming. In this session, Michele Catasta, VP of AI at Replit, Jordan Tigani, CEO at MotherDuck, and Ryan J. Salva, VP of Product at GitHub, explore practical applications of LLMs in coding workflows, how best to approach integrating AI into the workflows of data teams, what the future holds for AI-assisted coding, and a lot more.

Links Mentioned in the Show:
Rewatch Session from RADAR: AI Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

Memory, the foundation of human intelligence, is still one of the most complex and mysterious aspects of the brain. Despite decades of research, we've only scratched the surface of understanding how our memories are formed, stored, and retrieved. But what if AI could help us crack the code on memory? How might AI be the key to unlocking problems that have evaded human cognition for so long? Kim Stachenfeld is a Senior Research Scientist at Google DeepMind in NYC and Affiliate Faculty at the Center for Theoretical Neuroscience at Columbia University. Her research covers topics in neuroscience and AI. On the neuroscience side, she studies how animals build and use models of their world that support memory and prediction. On the machine learning side, she works on implementing these cognitive functions in deep learning models. Kim's work has been featured in The Atlantic, Quanta Magazine, Nautilus, and MIT Technology Review. In 2019, she was named one of MIT Tech Review's Innovators Under 35 for her work on predictive representations in the hippocampus. In the episode, Richie and Kim explore her work on Google Gemini, the importance of customizability in AI models, the need for flexibility and adaptability in AI models, retrieval databases and how they improve AI response accuracy, AI-driven science, the importance of augmenting human capabilities with AI and the challenges associated with this goal, the intersection of AI, neuroscience and memory and much more.

Links Mentioned in the Show:
DeepMind
AlphaFold
Dr James Whittington - A unifying framework for frontal and temporal representation of memory
Paper - Language models show human-like content effects on reasoning tasks
Kim's Website
[Course] Artificial Intelligence (AI) Strategy
Related Episode: Making Better Decisions using Data & AI with Cassie Kozyrkov, Google's First Chief Decision Scientist
Sign up to RADAR: AI Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

Since the launch of ChatGPT, one of the trending terms outside of ChatGPT itself has been prompt engineering. This act of carefully crafting your instructions is treated as alchemy by some and science by others. So what makes an effective prompt? Alex Banks has been building and scaling AI products since 2021. He writes Sunday Signal, a newsletter offering a blend of AI advancements and broader thought-provoking insights. He is also active on X/Twitter and LinkedIn, where he educates a diverse audience on leveraging AI to enhance productivity and transform daily life. In the episode, Alex and Adel cover Alex's journey into AI and what led him to create Sunday Signal, the potential of AI, prompt engineering at its most basic level, strategies for better prompting, chain-of-thought prompting, prompt engineering as a skill and career path, building your own AI tools rather than using consumer AI products, AI literacy, the future of LLMs and much more.

Links Mentioned in the Show:
[Alex's Free Course on DataCamp] Understanding Prompt Engineering
Sunday Signal
Principles by Ray Dalio: Life and Work
Related Episode: [DataFramed AI Series #1] ChatGPT and the OpenAI Developer Ecosystem
Rewatch sessions from RADAR: The Analytics Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business
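As a small illustration of the chain-of-thought idea mentioned above, here is a hedged sketch contrasting a plain prompt with a prompt that asks the model to reason step by step, using the OpenAI Python client. The model name is an assumption; any chat-completion endpoint that accepts a messages list would work the same way.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = ("A bat and a ball cost $1.10 together. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

# Plain prompt: the model answers directly and often slips on trick questions.
plain = [{"role": "user", "content": question}]

# Chain-of-thought prompt: request intermediate reasoning before the answer.
cot = [{"role": "user", "content": question +
        "\nThink through the problem step by step, then state the final answer."}]

for name, messages in [("plain", plain), ("chain-of-thought", cot)]:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model can be substituted
        messages=messages,
    )
    print(name, "->", reply.choices[0].message.content)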

Generative AI is fantastic but has a major problem: sometimes it "hallucinates", meaning it makes things up. In a business product like a chatbot, this can be disastrous. Vector databases like Pinecone are one of the solutions for mitigating the problem. Vector databases are a key component of many AI applications, as well as of enterprise search and document search. They have become an essential tool for businesses, and with the rise in interest in AI over the last couple of years, the space is moving quickly. In this episode, you'll find out how to make use of vector databases and learn about the latest developments at Pinecone. Elan Dekel is the VP of Product at Pinecone, where he oversees the development of the Pinecone vector database. He was previously Product Lead for Core Data Serving at Google, where he led teams working on the indexing systems that serve data for Google Search, YouTube search, and Google Maps. Before that, he was Founder and CEO of Medico, which was acquired by Everyday Health. In the episode, Richie and Elan explore LLMs, hallucination in generative models, vector databases and the best use cases for them, semantic search, business applications of vector databases and semantic search, the tech stack for AI applications, cost considerations when investing in AI projects, emerging roles within the AI space, the future of vector databases and AI, and much more.

Links Mentioned in the Show:
Pinecone Canopy
Pinecone Serverless
LlamaIndex
LangChain
[Code Along] Semantic Search with Pinecone
Related Episode: Expanding the Scope of Generative AI in the Enterprise with Bal Heroor, CEO and Principal at Mactores
Sign up to RADAR: The Analytics Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business
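For flavor, here is a minimal semantic-search sketch against a Pinecone index. It assumes the v3-style Python client, an API key, and a pre-created index named "docs" with dimension 4; the index name and the 4-dimensional vectors are toy stand-ins for real embeddings, not anything from the episode.

from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")  # assumption: key already provisioned
index = pc.Index("docs")               # hypothetical index, dimension 4

# Upsert a few toy vectors; real values would come from an embedding model.
index.upsert(vectors=[
    {"id": "a", "values": [0.1, 0.9, 0.0, 0.0], "metadata": {"text": "Refund policy"}},
    {"id": "b", "values": [0.8, 0.1, 0.1, 0.0], "metadata": {"text": "Shipping times"}},
])

# Query with the embedding of a user question and read back the best match.
result = index.query(vector=[0.1, 0.8, 0.1, 0.0], top_k=1, include_metadata=True)
for match in result.matches:
    print(match.id, match.score, match.metadata["text"])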

We've heard so much about the value and capabilities of generative AI over the past year, and we've all become accustomed to the chat interfaces of our preferred models. One of the main concerns many of us have had is privacy. Is OpenAI keeping the data and information I give to ChatGPT secure? One touted solution to this problem is running LLMs locally on your own machine, but given the hardware cost that comes with it, running LLMs locally has not been possible for many of us. That might now be starting to change. Nuri Cankaya is VP of AI Marketing at Intel. Prior to Intel, Nuri spent 16 years at Microsoft, starting out as a Technical Evangelist and leaving the organization as the Senior Director of Product Marketing. He ran the GTM team that helped generate adoption of GPT in Microsoft Azure products. La Tiffaney Santucci is Intel's AI Marketing Director, specializing in their Edge and Client products. La Tiffaney has spent over a decade at Intel, focusing on partnerships with Dell, Google, Amazon, and Microsoft. In the episode, Richie, Nuri and La Tiffaney explore AI's impact on marketing analytics, the adoption of AI in the enterprise, how AI is being integrated into existing products, the workflow for implementing AI into business processes and the challenges that come with it, the importance of edge AI for instant decision-making in use cases like self-driving cars, the emergence of AI engineering as a distinct field of work, the democratization of AI, what the state of AGI might look like in the near future and much more.

About the AI and the Modern Data Stack DataFramed Series
This week we're releasing 4 episodes focused on how AI is changing the modern data stack and the analytics profession at large. The modern data stack is often an ambiguous and all-encompassing term, so we intentionally wanted to cover the impact of AI on the modern data stack from different angles. Here's what you can expect:
Why the Future of AI in Data will be Weird with Benn Stancil, CTO at Mode & Field CTO at ThoughtSpot — Covering how AI will change analytics workflows and tools
How Databricks is Transforming Data Warehousing and AI with Ari Kaplan, Head Evangelist & Robin Sutara, Field CTO at Databricks — Covering Databricks, data intelligence and how AI tools are changing data democratization
Adding AI to the Data Warehouse with Sridhar Ramaswamy, CEO at Snowflake — Covering Snowflake and its uses, how generative AI is changing the attitudes of leaders towards data, and how to improve your data management
Accelerating AI Workflows with Nuri Cankaya, VP of AI Marketing & La Tiffaney Santucci, AI Marketing Director at Intel — Covering AI's impact on marketing analytics, how AI is being integrated into existing products, and the democratization of AI

Links Mentioned in the Show:
Intel OpenVINO™ toolkit
Intel Developer Clouds for Accelerated Computing
AWS Re:Invent
[Course] Implementing AI Solutions in Business
Related Episode: Intel CTO Steve Orrin on How Governments Can Navigate the Data & AI Revolution
Sign up to RADAR: The Analytics Edition (https://www.datacamp.com/radar-analytics-edition) ...
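As a taste of what local inference looks like today, here is a hedged sketch using the llama-cpp-python bindings, one common route for running LLMs on your own hardware (not the Intel stack discussed in the episode). The model path is a placeholder for a quantized GGUF file you have already downloaded; nothing leaves the machine.

from llama_cpp import Llama

# Load a quantized model file from local disk; path is a placeholder.
llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048)

output = llm(
    "Q: Why might a company run an LLM locally instead of using a cloud API?\nA:",
    max_tokens=128,
    stop=["Q:"],  # stop before the model invents a follow-up question
)
print(output["choices"][0]["text"])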

Just as many of us have been using generative AI tools to make us more productive at work, so have bad actors. Generative AI makes it much easier to create fake yet convincing text and images that can be used to deceive and harm. We've already seen lots of high-profile attempts to leverage AI in phishing campaigns, and this is putting more pressure on cybersecurity teams to get ahead of the curve and combat these new forms of threats. However, AI is also helping those who work in cybersecurity to be more productive and better equip themselves to create new forms of defense and offense. Brian Murphy is a founder, CEO, entrepreneur and investor. He founded and leads ReliaQuest, the force multiplier of security operations and one of the largest and fastest-growing companies in the global cybersecurity market. ReliaQuest increases visibility, reduces complexity, and manages risk with its cloud-native security operations platform, GreyMatter. Murphy grew ReliaQuest from a bootstrapped startup to a high-growth unicorn with a valuation of over $1 billion, more than 1,000 team members, and more than $350 million in growth equity from firms such as FTV Capital and KKR Growth. In the full episode, Adel and Brian cover the evolution of cybersecurity tools, the challenges faced by cybersecurity teams, types of cyber threats, how generative AI can be used both defensively and offensively in cybersecurity, how generative AI tools are making cybersecurity professionals more productive, the evolving role of cybersecurity professionals, the security implications of deploying AI models, the regulatory landscape for AI in cybersecurity and much more.

Links Mentioned in the Show:
ReliaQuest
ReliaQuest Blog
IBM finds that ChatGPT can generate phishing emails nearly as convincing as a human
Information Sharing and Analysis Centers (ISACs)
[Course] Introduction to Data Security
Related episode: Data Security in the Age of AI with Bart Vandekerckhove, Co-founder at Raito
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

2023 was a huge year for data and AI. Everyone who didn't live under a rock started using generative AI, and much was teased by companies like OpenAI, Microsoft, Google and Meta. We saw the millions of different use cases generative AI could be applied to, as well as the iterations we could expect from the AI space, such as connected multi-modal models, LLMs in mobile devices and formal legislation. But what has this meant for DataCamp? What will we do to facilitate learners and organizations around the world in staying ahead of the curve? In this special episode of DataFramed, we sit down with DataCamp Co-Founders Jo Cornelissen, Chief Executive Officer, and Martijn Theuwissen, Chief Operating Officer, to discuss their expectations for data & AI in 2024. In the episode, Richie, Jo and Martijn discuss generative AI's mainstream impact in 2023, the broad use cases of generative AI and skills required to utilize it effectively, trends in AI and software development, how the programming languages for data are evolving, new roles in data & AI, the job market and skill development in data science and their predictions for 2024.

Links Mentioned in the Show:
Free course - Become an AI Developer
Webinar - Data & AI Trends & Predictions 2024
Courses:
Artificial Intelligence (AI) Strategy
Generative AI for Business
Implementing AI Solutions in Business
AI Ethics

There are a few caveats to using generative AI tools, and those caveats have led to a few tips that have quickly become second nature to those who use LLMs like ChatGPT. The main one: have the domain knowledge to validate the output in order to avoid hallucinations. Hallucinations are one of the weak spots for LLMs because of the way they are built: they are trained to correlate data in order to predict what might come next in an incomplete sequence. Does this mean that we'll always have to be wary of the output of AI products, with the expectation that there is no intelligent decision-making going on under the hood? Far from it. Causal AI is bound by reason: rather than looking at correlation, these exciting systems are able to focus on the underlying causal mechanisms and relationships. As the AI field rapidly evolves, Causal AI is an area of research that is likely to have a huge impact on a wide range of industries and problems. Paul Hünermund is an Assistant Professor of Strategy and Innovation at Copenhagen Business School. In his research, Dr. Hünermund studies how firms can leverage new technologies in the space of machine learning and artificial intelligence, such as Causal AI, for value creation and competitive advantage. His work explores the potential for biases in organizational decision-making and ways for managers to counter them. It thereby sheds light on the origins of effective business strategies in markets characterized by a high degree of technological competition, and the resulting implications for economic growth and environmental sustainability. His work has been published in the Journal of Management Studies, the Econometrics Journal, Research Policy, Journal of Product Innovation Management, International Journal of Industrial Organization, MIT Sloan Management Review, and Harvard Business Review, among others. In the full episode, Richie and Paul explore Causal AI, how it differs from other forms of AI, and use cases of Causal AI in fields like drug development, marketing, manufacturing, and defense. They also discuss how Causal AI contributes to better decision-making, the role of domain experts in getting accurate results, what happens in the early stages of Causal AI adoption, exciting new developments within the Causal AI space and much more.

Links Mentioned in the Show:
Causal Data Science in Business
Causal AI by causaLens
Intro to Causal AI Using the DoWhy Library in Python
Lesson: Inference (causal) models
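Since the links above mention the DoWhy library, here is a hedged minimal sketch of the causal workflow it supports: declare a causal graph, identify the effect, then estimate it. The synthetic data and variable names (ad_spend, season, sales) are invented for illustration, not from the episode.

import numpy as np
import pandas as pd
from dowhy import CausalModel

# Synthetic data: ad_spend causally raises sales; seasonality confounds both.
rng = np.random.default_rng(0)
season = rng.normal(size=1000)
ad_spend = 0.5 * season + rng.normal(size=1000)
sales = 2.0 * ad_spend + 1.5 * season + rng.normal(size=1000)
df = pd.DataFrame({"ad_spend": ad_spend, "season": season, "sales": sales})

# 1. Model: declare assumed causal structure rather than mining correlations.
model = CausalModel(
    data=df,
    treatment="ad_spend",
    outcome="sales",
    common_causes=["season"],
)

# 2. Identify: derive which adjustment makes the effect estimable.
estimand = model.identify_effect()

# 3. Estimate: backdoor adjustment for the confounder via linear regression.
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
print(estimate.value)  # should be close to the true effect of 2.0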

Over the past year, we've seen a full hype cycle of hysteria and discourse surrounding generative AI. It almost seems difficult to think back to a time when no one had used ChatGPT. We are in the midst of the fourth industrial revolution, and technology is moving rapidly. Better performing and more capable models are being released at a stunning rate, and with the growing presence of multimodal AI, can we expect another whirlwind year that vastly changes the state of play within AI again? Who might be able to provide insight into what is to come in 2024? Craig S. Smith is an American journalist, former executive of The New York Times, and host of the podcast Eye on AI. Until January 2000, he wrote for The Wall Street Journal, most notably covering the rise of the religious movement Falun Gong in China. He has reported for the Times from more than 40 countries and has covered several conflicts, including the 2001 invasion of Afghanistan, the 2003 war in Iraq, and the 2006 Israeli-Lebanese war. He retired from the Times in 2018 and now writes about artificial intelligence for the Times and other publications. He was a special government employee for the National Security Commission on Artificial Intelligence until the commission's end in October 2021. In the episode, Richie and Craig explore the 2023 advancements in generative AI, such as GPT-4, and the evolving roles of companies like Anthropic and Meta, practical AI applications for research and image generation, challenges in large language models, the promising future of world models and AI agents, the societal impacts of AI, the issue of misinformation, computational constraints, the importance of AI literacy in the job market, the transformative potential of AI in various sectors and much more.

Links Mentioned in the Show:
Eye on AI
Wayve
Anthropic
Cohere
Midjourney
Yann LeCun

In today's AI landscape, organizations are actively exploring how to seamlessly embed AI into their products, systems, processes, and workflows. The success of ChatGPT stands as a testament to this. Its success is not solely due to the performance of the underlying model; a significant part of its appeal lies in its human-centered user experience, particularly its chat interface. Beyond the foundational skills, infrastructure, and tools, it's clear that great design is a crucial ingredient in building memorable AI experiences. How do you build human-centered AI experiences? What is the role of design in driving successful AI implementations? How can data leaders and practitioners adopt a design lens when building with AI? Here to answer these questions is Haris Butt, Head of Product Design at ClickUp. ClickUp is a project management tool that's been making a big bet on AI, and Haris plays a key role in shaping how AI is embedded within the platform. Throughout the episode, Adel & Haris spoke about the role of design in driving human-centered AI experiences, the iterative process of designing with large language models, how to design AI experiences that promote trust, how designing for AI differs from traditional software, whether good design will ultimately end up killing prompt engineering, and a lot more.

It's been almost a year since ChatGPT was released, mainstreaming AI into the collective consciousness in the process. Since that moment, we've seen a spirited debate emerge within the data & AI communities, and in public discourse at large. The focal point of this debate is whether AI poses, or will lead to, existential risk for the human species. We've seen thinkers such as Eliezer Yudkowsky, Yuval Noah Harari, and others sound the alarm bell on how AI is as dangerous, if not more dangerous, than nuclear weapons. We've also seen AI researchers and business leaders sign petitions and lobby government for strict regulation of AI. On the flip side, we've seen luminaries within the field such as Andrew Ng and Yann LeCun calling for, not against, the proliferation of open-source AI. So how do we maneuver this debate, and where does the risk spectrum actually lie with AI? More importantly, how can we contextualize the risk of AI against other systemic risks humankind faces, such as climate change and the risk of nuclear war? How can we regulate AI without falling into the trap of regulatory capture, where a select and mighty few benefit from regulation, drowning out the competition in the meantime? Trond Arne Undheim is a Research Scholar in Global Systemic Risk, Innovation, and Policy at Stanford University, Venture Partner at Antler, and CEO and co-founder of Yegii, an insight network with experts and knowledge assets on disruption. He is a nonresident Fellow at the Atlantic Council with a portfolio in artificial intelligence, future of work, data ethics, emerging technologies, and entrepreneurship. He is a former director of MIT Startup Exchange and has helped launch over 50 startups. In a previous life, he was an MIT Sloan School of Management Senior Lecturer, WPP Oracle Executive, and EU National Expert. In this episode, Trond and Adel explore the multifaceted risks associated with AI, the cascading-risks lens, and the debate over the likelihood of runaway AI. Trond shares the role of governments and organizations in shaping AI's future, the need for both global and regional regulatory frameworks, and the importance of educating decision-makers on AI's complexities. Trond also shares his opinion on the contrasting philosophies behind open- and closed-source AI technologies, the risk of regulatory capture, and more.

Links mentioned in the show:
Augmented Lean: A Human-Centric Framework for Managing Frontline Operations by Trond Arne Undheim & Natan Linder
Future Tech: How to Capture Value from Disruptive Industry Trends by Trond Arne Undheim
Futurized Podcast
Stanford Cascading Risk Study
Course: AI Ethics

From the dawn of humanity, decisions, both big and small, have shaped our trajectory. Decisions have built civilizations, forged alliances, and even charted the course of our very evolution. And now, as data & AI become more widespread, the potential upside for better decision making is massive. Yet, like any technology, the true value of data & AI is realized by how we wield it. We're often drawn to the allure of the latest tools and techniques, but it's crucial to remember that these tools are only as effective as the decisions we make with them. ChatGPT is only as good as the prompt you decide to feed it and what you decide to do with the output. A dashboard is only as good as the decisions that it influences. Even a data science team is only as effective as the value they deliver to the organization. So in this vast landscape of data and AI, how can we master the art of better decision making? How can we bridge data & AI with better decision intelligence? Cassie Kozyrkov founded the field of Decision Intelligence at Google where, until recently, she served as Chief Decision Scientist, advising leadership on decision process, AI strategy, and building data-driven organizations. Upon leaving Google, Cassie started her own company, Data Scientific, of which she is the CEO. In almost 10 years at Google, Cassie personally trained over 20,000 Googlers in data-driven decision-making and AI and helped over 500 projects implement decision intelligence best practices. Cassie also previously served in Google's Office of the CTO as Chief Data Scientist, and the rest of her 20 years of experience was split between consulting, data science, lecturing, and academia. Cassie is a top keynote speaker and a beloved personality in the data leadership community, followed by over half a million tech professionals. If you've ever gone on a reading spree about AI, statistics, or decision-making, chances are you've encountered her writing, which has reached millions of readers. In the episode, Cassie and Richie explore misconceptions around data science, stereotypes associated with being a data scientist, the reality of working in data science, advice for those starting their career in data science, and the challenges of being a data 'jack-of-all-trades'. Cassie also shares what decision science and decision intelligence are, what questions to ask future employers in any data science interview, the importance of collaboration between decision-makers and domain experts, the differences between data science models and their real-world implementations, the pros and cons of generative AI in data science, and much more.

Links mentioned in the Show:
Data scientist: The sexiest job of the 22nd century
The Netflix Prize
AI Products: Kitchen Analogy
Type One, Two & Three Errors in Statistics
Course: Data-Driven Decision Making for Business
Radar: Data & AI Literacy...

It's been a year since ChatGPT burst onto the scene. It has given many of us a sense of the power and potential that LLMs hold in revolutionizing the global economy. But the power that generative AI brings also comes with inherent risks that need to be mitigated. For those working in AI, the task at hand is monumental: to chart a safe and ethical course for the deployment and use of artificial intelligence. This isn't just a challenge; it's potentially one of the most important collective efforts of this decade. The stakes are high, involving not just technical and business considerations, but ethical and societal ones as well. How do we ensure that AI systems are designed responsibly? How do we mitigate risks such as bias, privacy violations, and the potential for misuse? How do we assemble the right multidisciplinary mindset and expertise for addressing AI safety? Reid Blackman, Ph.D., is the author of "Ethical Machines" (Harvard Business Review Press), creator and host of the podcast "Ethical Machines," and Founder and CEO of Virtue, a digital ethical risk consultancy. He is also an advisor to the Canadian government on their federal AI regulations, was a founding member of EY's AI Advisory Board, and a Senior Advisor to the Deloitte AI Institute. His work, which includes advising and speaking to organizations including AWS, US Bank, the FBI, NASA, and the World Economic Forum, has been profiled by The Wall Street Journal, the BBC, and Forbes. His written work appears in The Harvard Business Review and The New York Times. Prior to founding Virtue, Reid was a professor of philosophy at Colgate University and UNC-Chapel Hill. In the episode, Reid and Richie discuss the dominant concerns in AI ethics, from biased AI and privacy violations to the challenges introduced by generative AI, such as manipulative agents and IP issues. They delve into the existential threats posed by AI, including shifts in the job market and disinformation. Reid also shares examples where unethical AI has led to AI projects being scrapped, the difficulty in mitigating bias, preemptive measures for ethical AI and much more.

Links mentioned in the show:
Ethical Machines by Reid Blackman
Virtue Ethics Consultancy
Amazon's Scrapped AI Recruiting Tool
NIST AI Risk Management Framework
Course: AI Ethics
DataCamp Radar: Data & AI Literacy