talk-data.com

Topic: Large Language Models (LLM)
Tags: nlp, ai, machine_learning
Activities: 1405 tagged · Newest first
Activity trend: 158 peak/qtr (2020-Q1 to 2026-Q1)

Takeaways:
- Feedback is crucial for growth but often lacking.
- Visual storytelling can enhance understanding.
- Long explanations may not be effective in busy environments.
- Tools like Code to Story can simplify complex ideas.
- People prefer quick, visual information over lengthy texts.
- The development process can be messy but rewarding.
- Understanding your audience's time constraints is key.
- Visual aids can significantly improve communication.
- Building something understandable is more important than perfection.
- Let your work speak for itself through visuals.

Blog that shows you how to do this yourself: https://medium.com/towards-artificial-intelligence/how-to-instantly-explain-your-code-with-visuals-powered-by-gpt-4-bc379985f43f Subscribe to my Substack for updates on the course: https://mukundansankar.substack.com/

Artificial Intelligence is transforming the world of analytics, but how much of the job is changing? In this episode, we explore the AI-powered future of analytics: real-time insights, smarter Business Intelligence (BI) tools, and how Excel is turning heads with ChatGPT integration. Will AI replace analysts or supercharge what they already do best? Ravit Jain, Founder of the Ravit Show, will dive into what's staying the same, what's evolving, and what you need to know to keep up. Discover how AI is revolutionizing analytics, from real-time insights to the future of BI tools and Excel's new ChatGPT capabilities. Stay ahead of the curve and uncover how you can thrive in the AI-powered future of analytics!
What You'll Learn:
- How AI-powered Excel is simplifying data wrangling and reporting.
- What parts of an analyst's role are evolving, and what core skills will always matter.
- How you can future-proof your analytics career in an AI-driven world.
Register for free to be part of the next live session: https://bit.ly/3XB3A8b
Interested in learning more from Ravit? Check out The Ravit Show!

Summary: In this episode, Mukund Sankar shares a personal story about how a midnight craving led him to create an app that helps him reflect on his emotions rather than just satisfy his hunger. He discusses the impact of emotional eating and how technology, specifically GPT-4, can be used to foster self-awareness and emotional health. The conversation transitions into the development of a digital therapist that encourages users to engage with their feelings and patterns of behavior, ultimately promoting personal growth and emotional well-being.
Takeaways:
- Mukund reflects on how midnight cravings can lead to deeper emotional insights.
- He emphasizes that cravings often stem from emotional needs rather than physical hunger.
- The app he created uses GPT-4 to help users articulate their feelings.
- Journaling and self-reflection can reduce cravings and improve emotional health.
- Avoiding hard conversations can lead to increased cravings and emotional distress.
- The app serves as a non-judgmental space for users to express their feelings.
- Mukund's experience highlights the importance of communication with oneself.
- He encourages listeners to build their own tools for emotional reflection.
- The conversation illustrates the intersection of technology and mental health.
- Mukund plans to share resources for building similar apps.
Blog: https://medium.com/towards-artificial-intelligence/i-was-about-to-order-taco-bell-again-instead-i-built-an-ai-that-talks-me-down-00021d1310e3
Website: Monitor this for a mini course on how to do it yourself. https://mukundansankar.substack.com/

Why speed is all about memory: an exploration of why optimizing memory access is the most important factor in writing performant code. Szymon Ożóg will share his extensive experience in optimizing GPU-based code, especially for large language models (LLMs). Agenda topics include an overview of the challenges faced, the organizational structure backing multiplatform development, shaping the tech stack to achieve the goal, and what's next.
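The session's core claim, that memory access patterns dominate performance, can be sketched even at a high level: traversing a 2-D array along its contiguous (row-major) dimension touches memory sequentially, while column-wise traversal strides across rows. A minimal illustration (in Python the effect is muted by object indirection; in C or CUDA, the languages the talk targets, the contiguous walk is typically several times faster):

```python
# Two traversal orders over the same matrix. Both compute the same sum,
# but sum_row_major walks memory contiguously while sum_col_major
# strides across rows -- the cache-unfriendly pattern the talk warns about.
def sum_row_major(matrix):
    return sum(x for row in matrix for x in row)

def sum_col_major(matrix):
    rows, cols = len(matrix), len(matrix[0])
    return sum(matrix[i][j] for j in range(cols) for i in range(rows))
```

The point is that the two functions are algorithmically identical; only the order in which memory is touched differs.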

Misconceptions about AI's capabilities and the role of data are everywhere. Many believe AI is a singular, all-knowing entity, when in reality it's a collection of algorithms producing intelligence-like outputs. Understanding the history and evolution of AI, from its origins to today's advanced language models, is crucial. How do these developments, and misconceptions, impact your daily work? Are you leveraging the right tools for your needs, or are you caught up in the allure of cutting-edge technology without considering its practical application? Andriy Burkov is the author of three widely recognized books, The Hundred-Page Machine Learning Book, The Machine Learning Engineering Book, and recently The Hundred-Page Language Models Book. His books have been translated into a dozen languages and are used as textbooks in many universities worldwide. His work has impacted millions of machine learning practitioners and researchers. He holds a Ph.D. in Artificial Intelligence and is a recognized expert in machine learning and natural language processing. As a machine learning expert and leader, Andriy has successfully led dozens of production-grade AI projects in different business domains at Fujitsu and Gartner. Andriy is currently Machine Learning Lead at TalentNeuron. In the episode, Richie and Andriy explore misconceptions about AI, the evolution of AI from the 1950s, the relevance of 20th-century AI research, the role of linear algebra in AI, the resurgence of recurrent neural networks, advancements in large language model architectures, the significance of reinforcement learning, the reality of AI agents, and much more.
Links Mentioned in the Show:
- Andriy's books: The Hundred-Page Machine Learning Book, The Hundred-Page Language Models Book
- TalentNeuron
- Connect with Andriy
- Skill Track: AI Fundamentals
- Related Episode: Unlocking Humanity in the Age of AI with Faisal Hoque, Founder and CEO of SHADOKA
- Rewatch sessions from RADAR: Skills Edition
New to DataCamp?
- Learn on the go using the DataCamp mobile app
- Empower your business with world-class data and AI skills with DataCamp for Business

APIs dominate the web, accounting for the majority of all internet traffic. And more AI means more APIs, because they act as an important mechanism to move data into and out of AI applications, AI agents, and large language models (LLMs). So how can you make sure all of these APIs are secure? In this session, we’ll take you through OWASP’s top 10 API and LLM security risks, and show you how to mitigate these risks using Google Cloud’s security portfolio, including Apigee, Model Armor, Cloud Armor, Google Security Operations, and Security Command Center.

Bring your laptop and join us for an interactive demo on how to apply large language models (LLMs) from the Vertex AI Model Garden to a business use case, and learn about best practices for monitoring these models in production. We’ll go through an exercise using Colab Enterprise notebooks and learn how to use out-of-the-box tools to monitor RED (rate, error, duration) metrics, configure alerts, and monitor the rate of successful predictions in order to ensure successful use of a Vertex AI model in production.
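The RED metrics the lab monitors (rate, error, duration) can be computed from any stream of request records. A minimal sketch of the arithmetic, independent of the Vertex AI or Cloud Monitoring APIs the lab actually uses (the record shape here is hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Request:
    timestamp: float    # seconds since epoch
    duration_ms: float  # request latency
    ok: bool            # True if the prediction succeeded

def red_metrics(requests, window_s):
    """Compute rate, error rate, and mean duration over a time window."""
    n = len(requests)
    rate = n / window_s                                   # requests per second
    errors = sum(1 for r in requests if not r.ok)
    error_rate = errors / n if n else 0.0                 # fraction of failures
    avg_duration = sum(r.duration_ms for r in requests) / n if n else 0.0
    return {"rate_rps": rate, "error_rate": error_rate,
            "avg_duration_ms": avg_duration}
```

In the lab, the equivalent numbers come from out-of-the-box monitoring tools rather than hand-rolled code, but the alert thresholds you configure are thresholds on exactly these three quantities.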

Learn how to manage security controls and licenses for thousands of users, and tie it all together with APIs. We’ll show you ways to manage developer access more efficiently, build custom management integrations, and keep your CISO happy at the same time. We’ll also demo the new Gemini Code Assist integration with Apigee, which lets developers use Gemini Code Assist chat to generate context-aware OpenAPI specifications that reuse components from other APIs in their organization for efficiency and reference organizational security standards.

Join us for an in-depth session on Firebase Genkit, an open source framework that simplifies the development of AI-powered applications. Discover how to use the Node.js and Go SDKs to build intelligent chatbots, multimodal content generators, streamlined automation workflows, and agentive experiences. We'll demonstrate how Genkit's unified interface seamlessly integrates Google's Gemini and Imagen models, self-hosted Ollama options, and a variety of popular models from Vertex AI Model Garden. 

Migrating from AWS or Azure to Google Cloud runtimes can feel like navigating a maze of complex services and dependencies. In this session, we’ll explore key considerations for migrating legacy applications, emphasizing the “why not modernize?” approach with a practice guide. We’ll share real-world examples of successful transformations. And we’ll go beyond theory with a live product demo that showcases migration tools, and a code assessment demo powered by Gemini that demonstrates how you can understand and modernize legacy code.

Unlock the power of code execution with Gemini 2.0 Flash! This hands-on lab demonstrates how to generate and run Python code directly within the Gemini API. Learn to use this capability for tasks like solving equations, processing text, and building code-driven applications.
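With Gemini's code-execution tool, the model writes Python and the API runs it in a server-side sandbox before answering. A local toy stand-in for that pattern, purely to illustrate the idea of executing model-generated code and reading back its results (the `generated` snippet below is a made-up example, not real model output):

```python
def run_generated_code(code: str) -> dict:
    """Execute generated Python in a fresh namespace and return the
    variables it defined. A toy stand-in for the server-side sandbox
    the Gemini API provides; real model output should never be exec'd
    locally without sandboxing."""
    namespace: dict = {}
    exec(code, {}, namespace)
    return namespace

# Pretend the model emitted this snippet for the prompt "solve 3x + 7 = 22":
generated = "x = (22 - 7) / 3"
result = run_generated_code(generated)
```

The appeal of the hosted version is exactly that this execution step happens inside the API, so the model can iterate on its own code and return a verified numeric answer.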

If you register for a Learning Center lab, please ensure that you sign up for a Google Cloud Skills Boost account for both your work domain and personal email address. You will need to authenticate your account as well (be sure to check your spam folder!). This will ensure you can arrive and access your labs quickly onsite. You can follow this link to sign up!

Personalized predictions can be created by analyzing user clickstream data and using vector embeddings to capture the essence of an entity across multiple dimensions. This establishes relationships between users and items, revealing preferences and interests. BigQuery facilitates batch processing of vector embeddings, which are then fed into Spanner for efficient retrieval of these relationships via vector search. This enables real-time personalized recommendations with sub-ms response times. This solution offers accuracy, scalability, and real-time responsiveness.
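The retrieval step described above, matching a user embedding against item embeddings, boils down to a nearest-neighbor search by vector similarity. A minimal in-memory sketch of the idea (toy 2-D vectors; in the architecture above the embeddings are batch-produced in BigQuery and served from Spanner's vector search, not a Python loop):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def recommend(user_vec, item_vecs, k=2):
    """Return the k item ids whose embeddings are closest to the user's."""
    ranked = sorted(item_vecs.items(),
                    key=lambda kv: cosine(user_vec, kv[1]),
                    reverse=True)
    return [item_id for item_id, _ in ranked[:k]]
```

A managed vector index replaces the `sorted` scan with an approximate-nearest-neighbor structure, which is what makes the sub-millisecond response times quoted above possible at scale.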

In this hands-on lab, you'll explore data with BigQuery's intuitive table explorer and data insight features, enabling you to gain valuable insights without writing SQL queries from scratch. Learn how to generate key insights from order item data, query location tables, and interact with your data seamlessly. By the end, you’ll be equipped to navigate complex datasets and uncover actionable insights quickly and efficiently.


Tired of generic code suggestions? Learn how to customize Gemini Code Assist using your source code repositories. This session covers best practices for generating new code, and retrieving and reusing existing code, with Gemini Code Assist code-customization capabilities. Boost productivity, enforce consistency, and reduce cognitive load with a truly personalized AI coding assistant.

Gemini 2.0 was built for the agentic era: from native tool use to function calling to robust support for multimodal understanding, the new frontier of applications is agentic. Join this session to explore the frontier of agents: where the best opportunities for developers to build are, which research areas remain open on the path to billions of agents, and how best to leverage Gemini.
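The function-calling pattern the session highlights follows a simple loop: the app declares tools, the model emits a structured call, the app executes it and feeds the result back. A stripped-down sketch of the dispatch half of that loop, with a stubbed model response instead of the Gemini SDK (tool name and arguments here are invented for illustration):

```python
# Registry of callable tools the "model" is allowed to invoke.
TOOLS = {}

def tool(fn):
    """Decorator that registers a function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub data; a real tool would hit an API

def dispatch(call: dict) -> str:
    """Execute the structured function call a model would emit."""
    return TOOLS[call["name"]](**call["args"])

# Stand-in for the model's structured output; a real agent loop would
# send dispatch()'s return value back to the model as a tool response.
fake_model_call = {"name": "get_weather", "args": {"city": "Paris"}}
```

With a real SDK, the tool declarations (names, parameter schemas) are sent with the prompt and the model decides when and with what arguments to call them; the dispatch step stays essentially this simple.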

Did you know that GitHub Copilot lets you use Google Gemini as an AI programming assistant? Learn tips and tricks of prompting, shaping the context space, injecting third-party knowledge sources, and other ways that GitHub developers maximize their (and their team's) use of Gemini in VS Code and other IDEs.

This Session is hosted by a Google Cloud Next Sponsor.

Unleash the full potential of large language models (LLMs) on your edge devices, even when there’s spotty internet. This session explores a hybrid approach that combines the power of cloud-based LLMs with the efficiency of on-device models. Learn how to intelligently route queries, enabling laptops and mobile phones to perform complex tasks while maintaining snappy performance. View demos of efficient task routing that optimizes for quality and cost to ensure your apps run smoothly, even during network disruptions.
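The intelligent routing described above reduces to a policy decision per query: send it to the cloud model only when the network is up and the task warrants it; otherwise keep it on-device so the app stays responsive. A deliberately simple sketch of such a router (the word-count complexity heuristic and model names are placeholders, not the session's actual method):

```python
def route_query(query: str, online: bool, complexity_threshold: int = 12) -> str:
    """Pick an execution target for a query.

    Heuristic sketch: long prompts go to the cloud LLM when the network
    is up; everything else stays on the local model so the app keeps
    responding during outages.
    """
    complex_query = len(query.split()) > complexity_threshold
    if online and complex_query:
        return "cloud-llm"
    return "on-device-llm"
```

A production router would estimate difficulty from more than length (and might also weigh latency budgets and per-token cost, the quality/cost trade-off the demos optimize), but the offline fallback branch is the part that keeps the app working during network disruptions.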