talk-data.com

Topic: Business Intelligence (BI)

Tags: data_visualization, reporting, analytics

1211 tagged activities

Activity Trend: peak of 111 activities per quarter (2020-Q1 to 2026-Q1)

Activities

1211 activities · Newest first

The Data Product Management In Action podcast, brought to you by Soda and executive producer Scott Hirleman, is a platform for data product management practitioners to share insights and experiences. In Season 01, Episode 005, host Nadiem von Heydebrand (CEO and Co-founder at Mindfuel) sits down with Clemence Chee (VP of Data and Analytics at Babbel). Clemence shares his journey, the unique challenges of data product management, and the critical role of creating tangible business value and return on investment.

About our host Nadiem von Heydebrand: Nadiem is the CEO and Co-Founder of Mindfuel. In 2019, he merged his passion for data science with product management, becoming a thought leader in data product management. Nadiem is dedicated to demonstrating the true value contribution of data. With over a decade of experience in the data industry, Nadiem leverages his expertise to scale data platforms, implement data mesh concepts, and transform AI performance into business performance, delighting consumers at global organizations that include Volkswagen, Munich Re, Allianz, Red Bull, and Vorwerk. Connect with Nadiem on LinkedIn.

About our guest Clemence Chee: With over 10 years as a data and technology enthusiast, Clemence has extensive experience in Venture Development, Operations, and Business Intelligence. Prior to his current role as VP of Data & Analytics at Babbel, he spent 7 years at HelloFresh as Global Senior Director of Data and has been fortunate to contribute to and build companies from ideation through pre-seed, Series A-D, IPO, and DAX40. Connect with Clemence on LinkedIn. All views and opinions expressed are those of the individuals and do not necessarily reflect their employers or anyone else.

Join the conversation on LinkedIn: #dataproductmanagementwednesday

Data Modeling with Microsoft Power BI

Data modeling is the single most overlooked feature in Power BI Desktop, yet it's what sets Power BI apart from other tools on the market. This practical book serves as your fast-forward button for data modeling with Power BI, Analysis Services tabular, and SQL databases. Use it as a starting point for data modeling, or as a handy refresher. Author Markus Ehrenmueller-Jensen, founder of Savory Data, shows you the basic concepts of Power BI's semantic model with hands-on examples in DAX, Power Query, and T-SQL. If you're looking to build a data warehouse layer, chapters with T-SQL examples will get you started. You'll begin with simple steps and gradually solve more complex problems.

This book shows you how to:
- Normalize and denormalize with DAX, Power Query, and T-SQL
- Apply best practices for calculations, flags and indicators, time and date, role-playing dimensions, and slowly changing dimensions
- Solve challenges such as binning, budget, localized models, composite models, and key value with DAX, Power Query, and T-SQL
- Discover and tackle performance issues by applying solutions in DAX, Power Query, and T-SQL
- Work with tables, relations, set operations, normal forms, dimensional modeling, and ETL
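
To make the normalize/denormalize idea concrete, here is a minimal sketch in plain Python (the book itself works in DAX, Power Query, and T-SQL; the table names and rows below are invented for illustration): a product dimension is folded into a fact table, producing the flat reporting view a star schema is queried through.

```python
# Hypothetical star-schema tables (all names and rows invented for illustration).
fact_sales = [
    {"product_id": 1, "quantity": 10},
    {"product_id": 2, "quantity": 5},
    {"product_id": 1, "quantity": 3},
]
dim_product = {
    1: {"product_name": "Widget", "category": "Tools"},
    2: {"product_name": "Gadget", "category": "Tools"},
}

# Denormalize: fold the dimension attributes into each fact row, yielding one
# flat table -- the same reshaping the book performs in DAX, Power Query, and T-SQL.
flat = [{**row, **dim_product[row["product_id"]]} for row in fact_sales]

# Aggregate the denormalized view: total quantity per product name.
totals = {}
for row in flat:
    totals[row["product_name"]] = totals.get(row["product_name"], 0) + row["quantity"]

print(totals)  # {'Widget': 13, 'Gadget': 5}
```

In a real model the join would be a relationship in the semantic layer or a `JOIN` in T-SQL, but the reshaping step is the same.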

The Best Data Warehouse is a Lakehouse

Reynold Xin, Co-founder and Chief Architect at Databricks, presented during Data + AI Summit 2024 on Databricks SQL, its recent advancements, and how to drive performance improvements with the Databricks Data Intelligence Platform.

Speakers:
- Reynold Xin, Co-founder and Chief Architect, Databricks
- Pearl Ubaru, Technical Product Engineer, Databricks

Main Points and Key Takeaways (AI-generated summary)

Introduction of Databricks SQL:
- Databricks SQL was announced four years ago and has become the fastest-growing product in Databricks history.
- Over 7,000 customers, including Shell, AT&T, and Adobe, use Databricks SQL for data warehousing.

Evolution from Data Warehouses to Lakehouses:
- Traditional data architectures involved separate data warehouses (for business intelligence) and data lakes (for machine learning and AI).
- The lakehouse concept combines the best aspects of data warehouses and data lakes into a single package, addressing issues of governance, storage formats, and data silos.

Technological Foundations:
- To support the lakehouse, Databricks developed Delta Lake (storage layer) and Unity Catalog (governance layer).
- Over time, lakehouses have been recognized as the future of data architecture.

Core Data Warehousing Capabilities:
- Databricks SQL has evolved to support essential data warehousing functionality such as full SQL support, materialized views, and role-based access control.
- Integration with major BI tools like Tableau, Power BI, and Looker is available out of the box, reducing migration costs.

Price Performance:
- Databricks SQL offers significant improvements in price performance, which is crucial given the high costs associated with data warehouses.
- Databricks SQL scales more efficiently than traditional data warehouses, which struggle with larger data sets.

Incorporation of AI Systems:
- Databricks has integrated AI systems at every layer of the engine, improving performance significantly.
- AI systems automate data clustering, query optimization, and predictive indexing, enhancing efficiency and speed.

Benchmarks and Performance Improvements:
- Databricks SQL has seen dramatic improvements, with some benchmarks showing a 60% increase in speed compared to 2022.
- Real-world benchmarks indicate that Databricks SQL can handle high-concurrency loads with consistently low latency.

User Experience Enhancements:
- Significant efforts have been made to improve the user experience, making Databricks SQL more accessible to analysts and business users, not just data scientists and engineers.
- New features include visual data lineage, simplified error messages, and AI-driven recommendations for error fixes.

AI and SQL Integration:
- Databricks SQL now supports AI functions and vector search, allowing users to perform advanced analysis and query optimization with ease.
- The platform enables seamless integration with AI models, which can be published and accessed through the Unity Catalog.

Conclusion:
- Databricks SQL has transformed into a comprehensive data warehousing solution that is powerful, cost-effective, and user-friendly.
- The lakehouse approach is presented as a superior alternative to traditional data warehouses, offering better performance and lower costs.

Data + AI Summit Keynote Day 1 - Full video
by Patrick Wendell (Databricks), Fei-Fei Li (Stanford University), Brian Ames (General Motors), Ken Wong (Databricks), Ali Ghodsi (Databricks), Jackie Brosamer (Block), Reynold Xin (Databricks), Jensen Huang (NVIDIA)

Databricks Data + AI Summit 2024 Keynote Day 1

Experts, researchers, and open source contributors from Databricks and across the data and AI community gathered in San Francisco June 10-13, 2024, to discuss the latest technologies in data management, data warehousing, data governance, generative AI for the enterprise, and data in the era of AI.

Hear from Databricks Co-founder and CEO Ali Ghodsi on building generative AI applications, putting your data to work, and how data + AI leads to data intelligence.

Plus a fireside chat between Ali Ghodsi and NVIDIA Co-founder and CEO Jensen Huang on the expanded partnership between NVIDIA and Databricks to accelerate enterprise data for the era of generative AI.

Product announcements in the video include:
- Databricks Data Intelligence Platform
- Native support for NVIDIA GPU acceleration on the Databricks Data Intelligence Platform
- Databricks open source model DBRX available as an NVIDIA NIM microservice
- Shutterstock Image AI powered by Databricks
- Databricks AI/BI
- Databricks LakeFlow
- Databricks Mosaic AI
- Mosaic AI Agent Framework
- Mosaic AI Agent Evaluation
- Mosaic AI Tools Catalog
- Mosaic AI Model Training
- Mosaic AI Gateway

In this keynote hear from:
- Ali Ghodsi, Co-founder and CEO, Databricks (1:45)
- Brian Ames, General Motors (29:55)
- Patrick Wendell, Co-founder and VP of Engineering, Databricks (38:00)
- Jackie Brosamer, Head of AI, Data and Analytics, Block (1:14:42)
- Fei-Fei Li, Professor, Stanford University and Denning Co-Director, Stanford Institute for Human-Centered AI (1:23:15)
- Jensen Huang, Co-founder and CEO of NVIDIA, with Ali Ghodsi, Co-founder and CEO of Databricks (1:42:27)
- Reynold Xin, Co-founder and Chief Architect, Databricks (2:07:43)
- Ken Wong, Senior Director, Product Management, Databricks (2:31:15)
- Ali Ghodsi, Co-founder and CEO, Databricks (2:48:16)

To build a successful career in data, you need to focus on the right skills: skills that will open doors, make you job-ready, help you attract employers, and put you on the right track toward growing your career. But where should you start? In this episode, Matt Mike will give his thoughts on the most important skills you'll need in your career, and will give you a roadmap for adding them to your toolkit and putting them on display to attract the right kind of attention. You'll leave this session with a solid understanding of which skills you should focus on first, how to tackle them, and a plan to take action and make progress today.

What You'll Learn:
- Which technical tools you should focus on to build a great data career
- Why soft skills can be just as important, and which ones matter the most
- How you can build these skills and put them on display to accelerate your career

Register for free to be part of the next live session: https://bit.ly/3XB3A8b

About our guest: Matt Mike is a multi-passionate Data Analyst and BI Developer who fancies Power BI.

He has been a data analyst for 2 years now and also transitioned into the data field from a non-technical background. This has given him a unique understanding of what it takes to be an analyst from the ground up. Check out Matt's YouTube channel for more: https://www.youtube.com/@MattMike

Follow us on Socials: LinkedIn · YouTube · Instagram (Mavens of Data) · Instagram (Maven Analytics) · TikTok · Facebook · Medium · X/Twitter

Having a strong personal brand is one of the best things you can do to stand out from your competition in today's difficult job market. In this episode, you'll learn why brand building should be at the top of your list, and more importantly, hear actionable tips that you can use to make progress right away. We'll be sharing some of the best strategies, actionable advice, and personal anecdotes from two of the best personal brand builders in data, Kate Strachnyi and Kristen Kehrer.

You'll leave with a concrete path to building your brand and accelerating your career, starting today.

What You'll Learn:
- Why personal brands matter more than ever in 2024
- What a strong personal brand looks like
- How to start building your personal brand online

Register for free to be part of the next live session: https://bit.ly/3XB3A8b

About our guests: As the founder of DATAcated, Kate Strachnyi helps companies amplify their brand and expertise in artificial intelligence, machine learning, and data science. Kate is a content creator with over 200k followers across LinkedIn, YouTube, Instagram, and other platforms. She also runs a DATAcated Plus program with 25+ influencers that can be hired to 'make a splash' on social media. As a marketing and branding expert, Kate has been recognized as a LinkedIn Top Voice in Data Science and Analytics for 2018 and 2019, and as a DataIQ USA100 for 2022. Kate is also the author of ColorWise: A Data Storyteller's Guide to the Intentional Use of Color. https://www.datacated.com/brand-builder

Kristen Kehrer has been providing innovative and practical statistical modeling solutions in the utilities, healthcare, and eCommerce sectors since 2010. Alongside her professional accomplishments, she achieved recognition as a LinkedIn Top Voice in Data Science & Analytics in 2018. Kristen is also the founder of Data Moves Me, LLC, and has previously served as a faculty member and subject matter expert at the Emeritus Institute of Management and UC Berkeley Ext.

Kristen lights up on stage and has spoken at conferences including ODSC, DataScienceGO, BI+Analytics Conference, Boye Conference, and Big Data LDN.

She holds a Master of Science degree in Applied Statistics from Worcester Polytechnic Institute and a Bachelor of Science degree in Mathematics.

https://www.datamovesme.com/

Follow us on Socials: LinkedIn · YouTube · Instagram (Mavens of Data) · Instagram (Maven Analytics) · TikTok · Facebook · Medium · X/Twitter

In the fast-paced work environments we are used to, the ability to quickly find and understand data is essential. Data professionals can often spend more time searching for data than analyzing it, which can hinder business progress. Innovations like data catalogs and automated lineage systems are transforming data management, making it easier to ensure data quality, trust, and compliance. By creating a strong metadata foundation and integrating these tools into existing workflows, organizations can enhance decision-making and operational efficiency. But how did this all come to be, and who is driving better access and collaboration through data? Prukalpa Sankar is the Co-founder of Atlan. Atlan is a modern data collaboration workspace (like GitHub for engineering or Figma for design). By acting as a virtual hub for data assets ranging from tables and dashboards to models and code, Atlan enables teams to create a single source of truth for all their data assets and collaborate across the modern data stack through deep integrations with tools like Slack, BI tools, data science tools, and more. A pioneer in the space, Atlan was recognized by Gartner as a Cool Vendor in DataOps, one of the top three companies globally. Prukalpa previously co-founded SocialCops, a world-leading data-for-good company (New York Times Global Visionary, World Economic Forum Tech Pioneer). SocialCops is behind landmark data projects including India’s National Data Platform and global SDG monitoring in collaboration with the United Nations. She was named Economic Times Emerging Entrepreneur of the Year, Forbes 30 Under 30, Fortune 40 Under 40, and Top 10 CNBC Young Business Women 2016, and is a TED speaker.
In the episode, Richie and Prukalpa explore challenges within data discoverability, the inception of Atlan, the importance of a data catalog, personalization in data catalogs, data lineage, building data lineage, implementing data governance, human collaboration in data governance, skills for effective data governance, product design for diverse audiences, regulatory compliance, the future of data management, and much more.

Links Mentioned in the Show:
- Atlan
- Connect with Prukalpa
- [Course] Artificial Intelligence (AI) Strategy
- Related Episode: Adding AI to the Data Warehouse with Sridhar Ramaswamy, CEO at Snowflake
- Sign up to RADAR: AI Edition

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for Business.
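
The automated-lineage idea discussed above can be pictured as a graph walk: each asset records which upstream assets it is derived from, and a catalog answers "what does this dashboard depend on?" by traversing those edges. The sketch below is a toy model with invented asset names, not Atlan's implementation:

```python
# Toy lineage graph: each asset maps to the upstream assets it is derived from.
# Asset names are invented for illustration.
lineage = {
    "revenue_dashboard": ["orders_model"],
    "orders_model": ["raw_orders", "raw_customers"],
    "raw_orders": [],
    "raw_customers": [],
}

def upstream(asset, graph):
    """Return every asset the given asset transitively depends on."""
    seen = set()
    stack = list(graph.get(asset, []))
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

print(sorted(upstream("revenue_dashboard", lineage)))
# ['orders_model', 'raw_customers', 'raw_orders']
```

Real lineage tools build this graph automatically by parsing SQL and pipeline metadata; the traversal that powers impact analysis is essentially the loop above.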

Visual Analytics for Dashboards: A Step-by-Step Guide to Principles and Practical Techniques

This book covers the key principles, best practices, and practical techniques for designing and implementing visually compelling dashboards. It explores the various stages of the dashboard development process, from understanding user needs and defining goals, to selecting appropriate visual encodings, designing effective layouts, and employing interactive elements. It also addresses the critical aspect of data storytelling, examining how narratives and context can be woven into dashboards to deliver impactful insights and engage audiences. Visual Analytics for Dashboards is designed to cater to a wide range of readers, from beginners looking to grasp the fundamentals of visual analytics to seasoned professionals seeking to enhance their dashboard design skills. Whether you are a data analyst, BI professional, data scientist, or simply someone interested in data visualization, this book aims to equip you with the knowledge and tools necessary to create impactful dashboards.

What you'll learn:
- The principles of data visualization
- How to create effective dashboards
- Meet all the requirements for visual analytics/data visualization/dashboard courses
- Deepen understanding of data presentation and analysis
- How to use different kinds of tools for data analysis, such as scorecards and key performance indicators

Who This Book Is For: Business analysts, data analysts, BI professionals, end users, executives, developers, as well as students in dashboards, data visualization, and visual analytics courses.
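
As a toy version of the "selecting appropriate visual encodings" step described above, the helper below maps a data shape and analytic goal to a commonly recommended chart type. The mapping is a simplified rule of thumb for illustration, not a rule taken from the book:

```python
# Simplified chart-type guidance keyed on (data type, analytic goal).
# The pairings are common conventions, chosen here only for illustration.
RECOMMENDATIONS = {
    ("categorical", "comparison"): "bar chart",
    ("temporal", "trend"): "line chart",
    ("numeric", "distribution"): "histogram",
    ("numeric", "relationship"): "scatter plot",
}

def recommend_chart(data_type, goal):
    """Suggest a chart type; fall back to a plain table when no rule matches."""
    return RECOMMENDATIONS.get((data_type, goal), "table")

print(recommend_chart("temporal", "trend"))       # line chart
print(recommend_chart("categorical", "ranking"))  # table (no match -> fallback)
```

A real encoding decision also weighs audience, data volume, and interactivity, which is exactly the ground the book covers.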

Business Intelligence with Looker Cookbook

Discover the power of Looker for Business Intelligence and data visualization in this comprehensive cookbook. This book serves as your guide to mastering Looker's tools and features, enabling you to transform data into actionable insights.

What this book will help me do:
- Understand Looker's key components, including LookML and dashboards.
- Explore advanced Looker capabilities, including data modeling and interactivity.
- Create dynamic dashboards to monitor and present critical metrics effectively.
- Integrate Looker with additional tools and systems to extend its capabilities.
- Leverage Looker's tools for fostering data-driven decision-making within your team.

Author(s): Khrystyna Grynko is a seasoned data professional with extensive experience in Business Intelligence and analytics. She brings practical insights into how to effectively utilize Looker for real-world applications. Khrystyna is known for her clear, instructional writing style that makes complex topics approachable.

Who is it for? This book is an essential resource for business analysts, data analysts, or BI developers looking to expand their expertise in Looker. Suitable for readers with a basic understanding of business intelligence concepts. Ideal for professionals who aim to leverage Looker for creating insightful and interactive data applications to inform business strategy.

Summary

The purpose of business intelligence systems is to allow anyone in the business to access and decode data to help them make informed decisions. Unfortunately this often turns into an exercise in frustration for everyone involved due to complex workflows and hard-to-understand dashboards. The team at Zenlytic have leaned on the promise of large language models to build an AI agent that lets you converse with your data. In this episode they share their journey through the fast-moving landscape of generative AI and unpack the difference between an AI chatbot and an AI agent.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journey of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes and new ones landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to dataengineeringpodcast.com/codecomments today to subscribe. My thanks to the team at Code Comments for their support. Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. Your host is Tobias Macey and today I'm interviewing Ryan Janssen and Paul Blankley about their experiences building AI powered agents for interacting with your data.

Interview

Introduction
- How did you get involved in data? In AI?
- Can you describe what Zenlytic is and the role that AI is playing in your platform?
- What have been the key stages in your AI journey?
- What are some of the dead ends that you ran into along the path to where you are today?
- What are some of the persistent challenges that you are facing?
- So tell us more about data agents. Firstly, what are data agents and why do you think they're important?
- How are data agents different from chatbots?
- Are data agents harder to build? How do you make them work in production?
- What other technical architectures have you had to develop to support the use of AI in Zenlytic?
- How have you approached the work of customer education as you introduce this functionality?
- What are some of the most interesting or erroneous misconceptions that you have heard about what the AI can and can't do?
- How have you balanced accuracy/trustworthiness with user experience and flexibility in the conversational AI, given the potential for these models to create erroneous responses?
- What are the most interesting, innovative, or unexpected ways that you have seen your AI agent used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on building an AI agent for business intelligence?
- When is an AI agent the wrong choice?
- What do you have planned for the future of AI in the Zenlytic product?
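
One way to picture the chatbot-versus-agent distinction raised in the interview: a chatbot maps a question straight to an answer, while an agent runs a loop in which the model can call tools and observe results before answering. The sketch below is a generic illustration with a stubbed model and one invented tool; it is not Zenlytic's architecture:

```python
# Minimal agent loop with a stubbed "model" and one invented tool.
def run_metric_query(metric):
    """Invented tool: pretend to query a metrics layer for a named metric."""
    data = {"monthly_revenue": 42000}
    return data.get(metric, 0)

TOOLS = {"run_metric_query": run_metric_query}

def stub_model(question, observations):
    """Stand-in for an LLM: first request a tool call, then answer from the result."""
    if not observations:
        return {"action": "run_metric_query", "argument": "monthly_revenue"}
    return {"answer": f"Monthly revenue is {observations[-1]}."}

def agent(question, max_steps=5):
    observations = []
    for _ in range(max_steps):
        step = stub_model(question, observations)
        if "answer" in step:          # the model is done: return its answer
            return step["answer"]
        tool = TOOLS[step["action"]]  # otherwise execute the requested tool
        observations.append(tool(step["argument"]))
    return "No answer within step budget."

print(agent("What is monthly revenue?"))  # Monthly revenue is 42000.
```

The loop, tool registry, and observation history are what make it an agent; a chatbot is the degenerate case where the model answers on the first step with no tools.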

Contact Info

Ryan

LinkedIn

Paul

LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Everyday Data Visualization

Radically improve the quality of your data visualizations by employing core principles of color, typography, chart types, data storytelling, and more. Everyday Data Visualization is a field guide to design techniques that will improve the charts, reports, and data dashboards you build every day. Everything you learn is tool-agnostic, with universal principles you can apply to any data stack.

In Everyday Data Visualization you'll learn important design principles for the most common data visualizations:
- Harness the power of perception to guide a user's attention
- Bring data to life with color and typography
- Choose the best chart types for your data story
- Design for interactive visualizations
- Keep the user's needs first throughout your projects

This book gives you the tools you need to bring your data to life with clarity, precision, and flair. You'll learn how human brains perceive and process information, wield modern accessibility standards, get the basics of color theory and typography, and more.

About the Technology: Even mundane presentations like charts, dashboards, and infographics can become engaging and inspiring data stories! This book shows you how to upgrade the visualizations you create every day by improving the layout, typography, color, and accessibility. You'll discover timeless principles of design that help you highlight important features, compensate for missing information, and interact with live data flows.

About the Book: Everyday Data Visualization guides you through basic graphic design for the most common types of data visualization. You'll learn how to enhance charts with color, encourage users to interact with and explore data, and create visualizations accessible to everyone. Along the way, you'll practice each new skill as you take a dashboard project from research to publication.

What's Inside:
- Bring data to life with color and typography
- Choose the best chart types for your data story
- Design interactive visualizations

About the Reader: For readers experienced with data analysis tools.

About the Author: Desireé Abbott has over a decade of experience in product analytics, business intelligence, science, design, and software engineering. The technical editor on this book was Michael Petrey.

Quotes:
"A delightful blend of data viz principles, guidance, and design tips. The treasure trove of insights I wish I had years ago!" - Alli Torban, Author of Chart Spark
"With vibrant enthusiasm and engaging conversational style, this book shines." - RJ Andrews, data storyteller
"Elegantly simplifies complex concepts, making them accessible even to beginners. An enlightening journey." - Renato Sinohara, Westwing Group SE
"Desiree's approachable writing style makes it easy to dive straight into this book, and you're in deep before you even know it. I guarantee you'll learn plenty." - Neil Richards, 5x Tableau Visionary, Author of Questions in Dataviz

IBM Storage DS8900F Architecture and Implementation: Updated for Release 9.3.2

This IBM® Redbooks® publication describes the concepts, architecture, and implementation of the IBM Storage DS8900F family. The book provides reference information to assist readers who need to plan for, install, and configure the DS8900F systems. This edition applies to DS8900F systems with IBM Storage DS8000® Licensed Machine Code (LMC) 7.9.30 (bundle version 89.30.xx.x), referred to as Release 9.3. The DS8900F systems are all-flash exclusively, and they are offered as three classes:
- DS8980F Analytic Class: offers the best performance for organizations that want to expand their workload possibilities to artificial intelligence (AI), Business Intelligence (BI), and machine learning (ML).
- IBM DS8950F Agility Class: consolidates all your mission-critical workloads for IBM Z®, IBM LinuxONE, IBM Power, and distributed environments under a single all-flash storage solution.
- IBM DS8910F Flexibility Class: reduces complexity while addressing various workloads at the lowest DS8900F family entry cost.

The DS8900F architecture relies on powerful IBM POWER9™ processor-based servers that manage the cache to streamline disk input/output (I/O), which maximizes performance and throughput. These capabilities are further enhanced by High-Performance Flash Enclosures (HPFE) Gen2. Like its predecessors, the DS8900F supports advanced disaster recovery (DR) solutions, business continuity solutions, and thin provisioning.

Rapid change seems to be the new norm within the data and AI space, and because the ecosystem is constantly changing, it can be tricky to keep up. Fortunately, any self-respecting venture capitalist looking into data and AI will stay on top of what's changing and where the next big breakthroughs are likely to come from. We all want to know which important trends are emerging and how we can take advantage of them, so why not learn from a leading VC. Tomasz Tunguz is a General Partner at Theory Ventures, a $235m early-stage venture capital firm. He blogs at tomtunguz.com and co-authored Winning with Data. He has worked or works with Looker, Kustomer, Monte Carlo, Dremio, Omni, Hex, Spot, Arbitrum, Sui, and many others. He was previously the product manager for Google's social media monetization team, including the Google-MySpace partnership, and managed the launches of AdSense into six new markets in Europe and Asia. Before Google, Tunguz developed systems for the Department of Homeland Security at Appian Corporation.

In the episode, Richie and Tom explore trends in generative AI, the impact of AI on professional fields, cloud + local hybrid workflows, data security, changes in data warehousing through the use of integrated AI tools, the future of business intelligence and data analytics, and the challenges and opportunities surrounding AI in the corporate sector. You'll also get to discover Tom's picks for the hottest new data startups.

Links Mentioned in the Show:
- Tom's Blog
- Theory Ventures
- Article: What Air Canada Lost In 'Remarkable' Lying AI Chatbot Case
- [Course] Implementing AI Solutions in Business
- Related Episode: Making Better Decisions using Data & AI with Cassie Kozyrkov, Google's First Chief Decision Scientist
- Sign up to RADAR: AI Edition

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for Business.

IBM Storage FlashSystem 9500 Product Guide for IBM Storage Virtualize 8.6

This IBM® Redpaper® Product Guide describes the IBM Storage FlashSystem® 9500 solution, which is a next-generation IBM Storage FlashSystem control enclosure. It combines the performance of flash and a Non-Volatile Memory Express (NVMe)-optimized architecture with the reliability and innovation of IBM FlashCore® technology and the rich feature set and high availability (HA) of IBM Storage Virtualize. Often, applications exist that are foundational to the operations and success of an enterprise. These applications might function as prime revenue generators, guide or control important tasks, or provide crucial business intelligence, among many other jobs. Whatever their purpose, they are mission critical to the organization. They demand the highest levels of performance, functionality, security, and availability. They also must be protected against the newer threat of cyberattacks. To support such mission-critical applications, enterprises of all types and sizes turn to the IBM Storage FlashSystem 9500. IBM Storage FlashSystem 9500 provides a rich set of software-defined storage (SDS) features that are delivered by IBM Storage Virtualize, including the following examples:
- Data reduction and deduplication
- Dynamic tiering
- Thin provisioning
- Snapshots
- Cloning
- Replication and data copy services
- Cyber resilience
- Transparent Cloud Tiering
- IBM HyperSwap®, including 3-site replication for HA
- Scale-out and scale-up configurations that further enhance capacity and throughput for better availability

This Redpaper applies to IBM Storage Virtualize V8.6.

Yearning for insights from your spreadsheets? The combination of Looker and Google Workspace is your data dream team. In this session you will learn how to automatically generate Google Slides from Looker, how to generate reports from Google Sheets, and bring business intelligence into the flow of work. Leverage AI to find trends and generate summaries in seconds, and build interactive dashboards that respond to your clicks.

Click the blue “Learn more” button above to tap into special offers designed to help you implement what you are learning at Google Cloud Next 25.

For many modern organizations, managing across hybrid, multicloud, and traditional on-premise applications efficiently has been challenging. Red Hat OpenShift and Google Cloud simplify your cloud journey with flexible and proven containerized compute capabilities.

In this session, we'll cover:
- How you can leverage recent OpenShift on Google Cloud innovations
- How you can improve business intelligence and insight with Arm and mixed-cluster support
- How customers are leveraging OpenShift on Google Cloud to achieve success


In this game, you will learn to build a BI dashboard with Looker Studio as the front end, powered by BigQuery on the back end; use BigQuery to find data; build a time series model to forecast demand for multiple products using BigQuery ML; and create a basic report in Google Data Studio.
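
As a local stand-in for the BigQuery ML forecasting step, the sketch below fits the simplest possible demand forecast, a trailing moving average per product. All numbers are invented, and this only illustrates the shape of the task; BigQuery ML's time series models (e.g. ARIMA_PLUS) are far more capable:

```python
# Demand history per product (invented numbers), ordered oldest to newest.
history = {
    "product_a": [100, 120, 110, 130],
    "product_b": [40, 44, 45, 46],
}

def forecast_next(series, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    tail = series[-window:]
    return sum(tail) / len(tail)

# One forecast per product -- the multi-product setup mirrors forecasting
# demand for many products at once, as the lab does with BigQuery ML.
forecasts = {name: forecast_next(series) for name, series in history.items()}
print(forecasts)  # {'product_a': 120.0, 'product_b': 45.0}
```

In BigQuery ML the equivalent step would be a `CREATE MODEL` statement over the history table followed by a forecast query; the per-product structure of the output is the same.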


Join our panel of experts in exploring the cutting-edge realm of Decision Intelligence and discover how it diverges from traditional Data Analytics and Business Intelligence, reshaping the landscape of strategic decision-making. Our panel explores real-world instances where Decision Intelligence catalyzed profound organizational shifts, yielding tangible results. This is a session not to be missed, examining the future of decision-making and how to strike the right balance between forecasting and human oversight for maximum efficiency!