talk-data.com

Topic: Dashboard

Tags: data_visualization · reporting · bi

306 tagged activities

Activity Trend: peak of 23 activities per quarter, 2020-Q1 to 2026-Q1

Activities

306 activities · Newest first

Beyond One Model: Scaling, Orchestrating & Monitoring

Training one model is fun. Running thousands without everything catching fire? That’s the real challenge. In this talk, we’ll show how we — two data scientists turned accidental ML engineers — scaled anomaly detection at Vanderlande. Expect a peek into our orchestration setup, a quick code snippet, a look at our monitoring dashboard and how we scale to a thousand models.
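
The talk promises a quick code snippet; for flavor, here is a minimal sketch of per-model orchestration using Apache Airflow's dynamic task mapping. The abstract does not name an orchestrator, so the DAG structure, task names, and the hard-coded config list are illustrative assumptions rather than Vanderlande's actual setup.

```python
# Illustrative sketch only: assumes Apache Airflow 2.4+ with dynamic task mapping.
# The orchestrator, task names, and config source are assumptions, not the speakers' setup.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def anomaly_detection_fleet():
    @task
    def load_model_configs() -> list[dict]:
        # In practice this would come from a registry or config store;
        # hard-coded here to keep the sketch self-contained.
        return [{"site": "A", "sensor": "belt_1"}, {"site": "B", "sensor": "belt_2"}]

    @task
    def train_and_score(config: dict) -> str:
        # Placeholder for the real training/scoring logic for one model.
        return f"trained model for {config['site']}/{config['sensor']}"

    # One mapped task instance per model config, so the same DAG scales to thousands of models.
    train_and_score.expand(config=load_model_configs())


anomaly_detection_fleet()
```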

AWS re:Invent 2025 - Build, govern, and share Amazon Quick Suite dashboards with Amazon SageMaker

Learn how to go from finding data in Amazon SageMaker Catalog to building shareable dashboards in Amazon Quick Suite in a single, secure workflow. This session shows how the integration between the next generation of Amazon SageMaker and Amazon Quick Suite brings business intelligence dashboard governance into the same environment where you discover and prepare data. See how to manage access, ensure compliance, and share trusted insights across teams without moving data between systems.


Attendees will learn to enable Defender for Cloud, navigate its dashboard, and interpret regulatory compliance and workload protection insights. The labs also cover secure score analysis, security recommendations, inventory management, pricing models, and governance rule assignment. By the end of the series, participants will be equipped to proactively identify and mitigate threats, strengthen cloud security posture, and align with organizational governance standards.

Please RSVP and arrive at least 5 minutes before the start time, at which point remaining spaces are open to standby attendees.

Discover how AVEVA and Microsoft are shaping the future of Industrial AI. Learn how AVEVA CONNECT, Azure, and the Industrial AI Assistant unlock new value through agentic AI, natural-language dashboard creation, and purpose-built industrial intelligence. See real examples of joint innovation and how AI is transforming engineering, operations, and data across the industrial lifecycle.


A Women-Led Case Study in Applied Data Analytics with Mariah Marr & Michelle Sullivan

While data analytics is often viewed as a highly technical field, one of its most challenging aspects lies in identifying the right questions to ask. Beyond the expected skills of summarizing data, building visualizations, and generating insights, analysts must also bridge the gap between complex data and non-technical stakeholders.

This presentation features a case study led by two women from the Research and Data Analytics team at the Minnesota Department of Labor and Industry. It illustrates the end-to-end process of transforming raw data to create a fully developed dashboard that delivers actionable insights for the department’s Apprenticeship unit.

We will share key challenges encountered along the way, from handling issues of data quality and accessibility to adapting the tool for the differing needs and expectations of new stakeholders. Attendees will leave with actionable strategies for transforming messy datasets into clear, impactful dashboards that drive smarter decision making.

Building B2B analytics and AI tools that people will actually pay for and use is hard. The reality is, your product won’t deliver ROI if no one’s using it. That’s why first principles thinking says you have to solve the usage problem first.

In this episode, I’ll explain why the key to user adoption is designing with the flow of work—building your solution around the natural workflows of your users to minimize the behavior changes you’re asking them to make. When users clearly see the value in your product, it becomes easier to sell and removes many product-related blockers along the way.

We’ll explore how product design impacts sales, the difference between buyers and users in enterprise contexts, and why challenging the “data/AI-first” mindset is essential. I’ll also share practical ways to align features with user needs, reduce friction, and drive long-term adoption and impact.

If you’re ready to move beyond the dashboard and start building products that truly fit the way people work, this episode is for you.

Highlights/Skip to: 

The core argument: why solving for user adoption first helps demonstrate ROI and facilitate sales in B2B analytics and AI products (1:34)
How showing the value to actual end users—not just buyers—makes it easier to sell your product (2:33)
Why designing for outcomes instead of outputs (dashboards, etc) leads to better adoption and long-term product value (8:16)
How to “see” beyond users’ surface-level feature requests and solutions so you can solve for the actual, unspoken need—leading to an indispensable product (10:23)
Reframing feature requests as design-actionable problems (12:07)
Solving for unspoken needs vs. customer-requested features and functions (15:51)
Why “disruption” is the wrong approach for product development (21:19)

Quotes: 

“Customers’ tolerance for poorly designed B2B software has decreased significantly over the last decade. People now expect enterprise tools to function as smoothly and intuitively as the consumer apps they use every day. 

Clunky software that slows down workflows is no longer acceptable, regardless of the data it provides. If your product frustrates users or requires extra effort to achieve results, adoption will suffer.

Even the most powerful AI or analytics engine cannot compensate for a confusing or poorly structured interface. Enterprises now demand experiences that are seamless, efficient, and aligned with real workflows. 

This shift means that product design is no longer a secondary consideration; it is critical to commercial success.  Founders and product leaders must prioritize usability, clarity, and delight in every interaction. Software that is difficult to use increases the risk of churn, lengthens sales cycles, and diminishes perceived value. Products must anticipate user needs and deliver solutions that integrate naturally into existing workflows. 

The companies that succeed are the ones that treat user experience as a strategic differentiator. Ignoring this trend creates friction, frustration, and missed opportunities for adoption and revenue growth. Design quality is now inseparable from product value and market competitiveness.  The message is clear: if you want your product to be adopted, retain customers, and win in the market, UX must be central to your strategy.”

“No user really wants to ‘check a dashboard’ or use a feature for its own sake. Dashboards, charts, and tables are outputs, not solutions. What users care about is completing their tasks, solving their problems, and achieving meaningful results. 

Designing around workflows rather than features ensures your product is indispensable. A workflow-first approach maps your solution to the actual tasks users perform in the real world. 

When we understand the jobs users need to accomplish, we can build products that deliver real value and remove friction. Focusing solely on features or data can create bloated products that users ignore or struggle to use. 

Outputs are meaningless if they do not fit into the context of a user’s work. The key is to translate user needs into actionable workflows and design every element to support those flows. 

This approach reduces cognitive load, improves adoption, and ensures the product's ROI is realized. It also allows you to anticipate challenges and design solutions that make workflows smoother, faster, and more efficient. 

By centering design on actual tasks rather than arbitrary metrics, your product becomes a tool users can’t imagine living without. Workflow-focused design directly ties to measurable outcomes for both end users and buyers. It shifts the conversation from features to value, making adoption, satisfaction, and revenue more predictable.”

“Just because a product is built with AI or powerful data capabilities doesn’t mean anyone will adopt it. Long-term value comes from designing solutions that users cannot live without. It’s about creating experiences that take people from frustration to satisfaction to delight. 

Products must fit into users’ natural workflows and improve their performance, efficiency, and outcomes. Buyers' perceived ROI is closely tied to meaningful adoption by end users. If users struggle, churn rises, and financial impact is diminished, regardless of technical sophistication. 

Designing for delight ensures that the product becomes a positive force in the user’s daily work. It strengthens engagement, reduces friction, and builds customer loyalty. 

High-quality UX allows the product to demonstrate value automatically, without constant explanations or hand-holding. Delightful experiences encourage advocacy, referrals, and easier future sales. 

The real power of design lies in aligning technical capabilities with human behavior and workflow. 

When done correctly, this approach transforms a tool into an indispensable part of the user’s job and a demonstrable asset for the business. 

Focusing on usability, satisfaction, and delight creates long-term adoption and retention, which is the ultimate measure of product success.”

“Your product should enter the user’s work stream like a raft on a river, moving in the same direction as their workflow. Users should not have to fight the current or stop their flow to use your tool. 

Introducing friction or requiring users to change their behavior increases risk, even if the product delivers ROI. The more naturally your product aligns with existing workflows, the easier it is to adopt and the more likely it is to be retained. 

Products that feel intuitive and effortless become indispensable, reducing conversations about usability during demos. By matching the flow of work, your solution improves satisfaction, accelerates adoption, and enhances perceived value. 

Disrupting workflows without careful observation can create new problems, frustrate users, and slow down sales. The goal is to move users from frustration to satisfaction to delight, all while achieving the intended outcomes. 

Designing with the flow of work ensures that every feature, interface element, and interaction fits seamlessly into the tasks users already perform. It allows users to focus on value instead of figuring out how to use the product. 

This alignment is key to unlocking adoption, retaining customers, and building long-term loyalty. 

Products that resist the natural workflow may demonstrate ROI on paper but fail in practice due to friction and low engagement. 

Success requires designing a product that supports the user’s journey downstream without interruption or extra effort. 

When you achieve this, adoption becomes easier, sales conversations smoother, and long-term retention higher.”

LLMs have a lot of hype around them these days. Let’s demystify how they work and see how we can put them in context for data science use. As data scientists, we want to make sure our results are inspectable, reliable, reproducible, and replicable. We already have many tools to help us on this front. However, LLMs pose a new challenge: we may not always get the same results back from a query. This means working out the areas where LLMs excel and using those behaviors in our data science artifacts. This talk will introduce you to LLMs, the Chatlas package, and how they can be integrated into a Shiny app to create an AI-powered dashboard (using querychat). We’ll see how we can leverage the tasks LLMs are good at to better our data science products.
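
The talk itself uses the Chatlas package with Shiny and querychat; as a stand-in sketch of the reproducibility concern described above, the snippet below uses the plain openai Python client to pin sampling settings and log every prompt and response so results stay inspectable. The model name, log format, and helper function are assumptions for illustration only.

```python
# Stand-in sketch: the talk uses Chatlas/querychat with Shiny; here the plain
# openai client illustrates the reproducibility concern instead. Model name,
# logging format, and the log path are illustrative assumptions.
import json
from datetime import datetime, timezone

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_and_log(prompt: str, log_path: str = "llm_calls.jsonl") -> str:
    """Query the LLM with settings chosen to reduce run-to-run drift, and log
    the exchange so results stay inspectable and (mostly) reproducible."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduces sampling variation, though not a hard guarantee
        seed=42,        # best-effort reproducibility hint supported by the API
    )
    answer = response.choices[0].message.content
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "answer": answer,
        }) + "\n")
    return answer


print(ask_and_log("Summarize the anomalies in last week's sales data in one sentence."))
```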

Learn how to build a Data Agent for Snowflake Intelligence using Snowflake Cortex AI that can intelligently respond to questions by reasoning over both structured and unstructured data.

We'll use a custom dataset focused on bikes and skis. This dataset is intentionally artificial, ensuring that no external LLM has prior knowledge of it. This gives us a clean and controlled environment to test and evaluate our data agent. By the end of the session, you'll have a working AI-powered agent capable of understanding and retrieving insights across diverse data types — all securely within Snowflake.
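
For a taste of the building blocks involved, here is a minimal sketch that calls Snowflake's Cortex COMPLETE function from Snowpark Python to reason over unstructured text. The connection placeholders, model name, and REVIEWS table are illustrative assumptions; the full Snowflake Intelligence agent setup covered in the workshop is not shown.

```python
# Minimal sketch, not the workshop's full agent setup. Connection parameters,
# the model name, and the REVIEWS table are illustrative assumptions.
from snowflake.snowpark import Session

connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}

session = Session.builder.configs(connection_parameters).create()

# Ask an LLM (via Cortex) to reason over unstructured review text stored in a table.
rows = session.sql(
    """
    SELECT SNOWFLAKE.CORTEX.COMPLETE(
        'llama3.1-8b',
        'Summarize the main complaint in this review: ' || REVIEW_TEXT
    ) AS SUMMARY
    FROM REVIEWS
    LIMIT 5
    """
).collect()

for row in rows:
    print(row["SUMMARY"])
```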

This is a Hands-On workshop, all attendees must bring their own laptop to participate.

On today's Promoted Episode of Experiencing Data, I’m talking with Lucas Thelosen, CEO of Gravity and creator of Orion, an AI analyst transforming how data teams work. Lucas was head of professional services for Looker, and eventually became Head of Product for Google’s Data and AI Cloud prior to starting his own data product company. We dig into how his team built Orion, the challenge of keeping AI accurate and trustworthy when doing analytical work, and how they’re thinking about the balance of human control with automation when their product acts as a force multiplier for human analysts.

In addition to talking about the product, we also talk about how Gravity arrived at specific enough use cases for this technology that a market would be willing to pay for, and how they’re thinking about pricing in today’s more “outcomes-based” environment. 

Incidentally, one thing I didn’t know when I first agreed to consider having Gravity and Lucas on my show was that Lucas has been a long-time proponent of data product management and operating with a product mindset. In this episode, he shares the “ah-hah” moment where things clicked for him around building data products in this manner. Lucas shares how pivotal this moment was for him, and how it helped accelerate his career from Looker to Google and now Gravity.

If you’re leading a data team, you’re a forward-thinking CDO, or you’re interested in commercializing your own analytics/AI product, my chat with Lucas should inspire you!  

Highlights/ Skip to:

Lucas’s breakthrough came when he embraced a data product management mindset (02:43)
How Lucas thinks about Gravity as being the instrumentalists in an orchestra, conducted by the user (4:31)
Finding product-market fit by solving for a common analytics pain point (8:11)
Analytics product and dashboard adoption challenges: why dashboards die and thinking of analytics as changing the business gradually (22:25)
What outcome-based pricing means for AI and analytics (32:08)
The challenge of defining guardrails and ethics for AI-based analytics products [just in case somebody wants to “fudge the numbers”] (46:03)
Lucas’ closing thoughts about what AI is unlocking for analysts and how to position your career for the future (48:35)

Special Bonus for DPLC Community Members

Are you a member of the Data Product Leadership Community? After our chat, I invited Lucas to come give a talk about his journey of moving from “data” to “product” and adopting a producty mindset for analytics and AI work. He was more than happy to oblige. Watch for this in late 2025/early 2026 on our monthly webinar and group discussion calendar.

Note: today’s episode is one of my rare Promoted Episodes. Please help support the show by visiting Gravity’s links below:

Quotes from Today’s Episode

“The whole point of data and analytics is to help the business evolve. When your reports make people ask new questions, that’s a win. If the conversations today sound different than they did three months ago, it means you’ve done your job, you’ve helped move the business forward.” — Lucas

“Accuracy is everything. The moment you lose trust, the business, the use case, it's all over. Earning that trust back takes a long time, so we made accuracy our number one design pillar from day one.” — Lucas 

“Language models have changed the game in terms of scale. Suddenly, we’re facing all these new kinds of problems, not just in AI, but in the old-school software sense too. Things like privacy, scalability, and figuring out who’s responsible.” — Brian

“Most people building analytics products have never been analysts, and that’s a huge disadvantage. If data doesn’t drive action, you’ve missed the mark. That’s why so many dashboards die quickly.” — Lucas

“Re: collecting feedback so you know if your UX is good: I generally agree that qualitative feedback is the best place to start, not analytics [on your analytics!]. Especially in UX, analytics measure usage aspects of the product, not the subjective human experience. Experience is a collection of feelings and perceptions about how something went.” — Brian

Links

Gravity: https://www.bygravity.com
LinkedIn: https://www.linkedin.com/in/thelosen/
Email Lucas and team: [email protected]

Scaling data transformation: Siemens DI approach with dbt

Siemens Data Cloud runs over 1500 dbt Platform projects across teams and domains. But more projects can mean more silos and less visibility. Because dbt is designed to be project-scoped, getting a bird’s-eye view isn’t easy. That’s where the dbt Platform Admin API comes in. We’ll show how we used it to extract metadata and build a unified monitoring dashboard. You’ll learn how to track deployments, spot anomalies, and compare project health across your dbt landscape.
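
As a rough illustration of the approach, here is a minimal sketch that pulls recent run metadata from the dbt Administrative API and computes a crude per-project failure rate. The endpoint shape and field names reflect the publicly documented dbt Cloud v2 Admin API and may differ from the exact calls used in this talk; the environment variables and the health metric are assumptions.

```python
# Minimal sketch of pulling run metadata from the dbt Administrative API, in the
# spirit of the monitoring dashboard described above. The base URL, endpoint
# shape, and field names follow the documented dbt Cloud v2 Admin API and may
# differ on your dbt Platform instance; the env vars are placeholders.
import os

import requests

API_TOKEN = os.environ["DBT_CLOUD_API_TOKEN"]
ACCOUNT_ID = os.environ["DBT_CLOUD_ACCOUNT_ID"]
BASE_URL = f"https://cloud.getdbt.com/api/v2/accounts/{ACCOUNT_ID}"


def latest_runs(limit: int = 100) -> list[dict]:
    """Fetch the most recent job runs across all projects in the account."""
    response = requests.get(
        f"{BASE_URL}/runs/",
        headers={"Authorization": f"Token {API_TOKEN}"},
        params={"limit": limit, "order_by": "-finished_at"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["data"]


# Crude project-health signal: failure rate of recent runs per project.
runs = latest_runs()
failures: dict[int, list[bool]] = {}
for run in runs:
    failures.setdefault(run["project_id"], []).append(bool(run.get("is_error")))

for project_id, flags in failures.items():
    print(f"project {project_id}: {sum(flags)}/{len(flags)} recent runs failed")
```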

In this course, learn how to manage and monitor data platform costs using dbt's built-in tools. We’ll cover how to surface warehouse usage data, set up basic monitoring, and apply rule-based recommendations to optimize performance. You’ll also explore how cost insights fit naturally into the developer workflow—equipping you to make smarter decisions without leaving dbt. This course is for analytics engineers, data analysts, and data platform owners who have a foundational understanding of dbt and want to build more cost-effective data pipelines.

Using these cost management and orchestration strategies, the internal dbt Labs Analytics team achieved significant savings: Our cloud compute bill was reduced by 9% by simply implementing dbt Fusion and state-aware orchestration. By understanding the impact of models on platform costs, the team reduced the number of models built in scheduled jobs by 35% and shaved 20% off of job execution times.

After this course, you will be able to:

- Articulate how dbt development patterns impact data platform costs.

- Configure dbt Cloud to monitor warehouse compute spend.

- Use the dbt Cost Management dashboard to identify high-cost models and jobs.

- Apply specific optimization techniques, from materializations to advanced data modeling patterns, to reduce warehouse costs.

- Implement proactive strategies like dbt Fusion and state-aware orchestration to prevent future cost overruns.

Prerequisites for this course include: dbt fundamentals.

What to bring: You will need to bring your own laptop to complete the hands-on exercises. We will provide all the other sandbox environments for dbt and the data platform.

Duration: 2 hours

Fee: $200

Trainings and certifications are not offered separately and must be purchased with a Coalesce pass. Trainings and certifications are not available for Coalesce Online passes.

In this episode, I’m exploring the mindset shift data professionals need to make when moving into analytics and AI data product management. From how to ask the right questions to designing for meaningful adoption, I share four key ways to think more like a product manager, and less like a deliverables machine, so your data products earn applause instead of a shoulder shrug.

Highlights/ Skip to:

Why shift to analytics and AI data product management (00:34)
From accuracy to impact and redefining success with AI and analytical data products (01:59)
Key Idea 1: Moving from question asker (analyst) to problem seeker (product) (04:31)
Key Idea 2: Designing change management into solutions; planning for adoption starts in the design phase (12:52)
Key Idea 3: Creating tools so useful people can’t imagine working without them (26:23)
Key Idea 4: Solving for unarticulated needs vs. active needs (34:24)

Quotes from Today’s Episode

“Too many analytics teams are rewarded for accuracy instead of impact. Analysts give answers, and product people ask questions. The shift from analytics to product thinking isn’t about tools or frameworks, it’s about curiosity. It’s moving from ‘here’s what the data says’ to ‘what problem are we actually trying to solve, and for whom?’ That’s where the real leverage is, in asking better questions, not just delivering faster answers.”

“We often mistake usage for success. Adoption only matters if it’s meaningful adoption. A dashboard getting opened a hundred times doesn’t mean it’s valuable... it might just mean people can’t find what they need. Real success is when your users say, ‘I can’t imagine doing my job without this.’ That’s the level of usefulness we should be designing for.”

“The most valuable insights aren’t always the ones people ask for. Solving active problems is good, it’s necessary. But the big unlock happens when you start surfacing and solving latent problems, the ones people don’t think to ask for. Those are the moments when users say, ‘Oh wow, that changes everything.’ That’s how data teams evolve from service providers to strategic partners.”

“Here’s a simple but powerful shift for data teams: know who your real customer is. Most data teams think their customer is the stakeholder who requested the work… But the real customer is the end user whose life or decision should get better because of it. When you start designing for that person, not just the requester, everything changes: your priorities, your design, even what you choose to measure.”

Links

Need 1:1 help to navigate these questions and align your data product work to your career? Explore my new Cross-Company Group Coaching at designingforanalytics.com/groupcoaching

For peer support: the Data Product Leadership Community where peers are experimenting with these approaches. designingforanalytics.com/community

This session will focus on how organizations are extracting significant business value by democratizing their data and optimizing resources through the Snowflake AI Data Cloud. The first part of the presentation will showcase how Snowflake helps customers craft compelling value stories for diverse AI use cases and strategic migrations, alongside best practices for optimizing cloud spend. The second part will feature a conversation highlighting how a leading enterprise overcame the common challenges of data silos and dashboard sprawl by simplifying processes with Snowflake AI capabilities. Attendees will learn actionable strategies for accelerating their AI journey and achieving measurable impact.

Migrating your BI platform sounds daunting — especially when you’re staring down hundreds of dashboards, years of legacy content, and a hard deadline. At Game Lounge, we made the leap from Looker to Omni, migrating over 800 dashboards in under three months — without disrupting the business.

In this session, we’ll walk through the practical playbook behind our successful migration: how we scoped the project, prioritised what mattered most, and moved quickly without compromising quality. We’ll share how we phased the migration, reduced dashboard sprawl by over 80%, and leaned on Omni’s AI-assisted features to accelerate setup and streamline cleanup.

We’ll also touch on how we kept quality high post-migration — introducing initiatives like dashboard verification to ensure lasting data trust. And we’ll share what happened next, with over 140 employees now using data to inform decisions every day.

Whether you’re planning a migration or trying to make sense of legacy BI sprawl, this session offers honest lessons, practical frameworks, and time-saving tips to help your team move fast and build smarter.

Analytical Data Product success is traditionally measured with classic reliability metrics. If we were ambitious, we might track user engagement by dashboard views or self-serve activity; they are blunt, woolly indicators at best. The real goal was always to enable better decisions, but we often struggle to measure whether our data products actually help. Conversational BI changes this equation. Now we can see the exact questions users are asking, what follow-ups they need, and where the data model delights or frustrates them. This creates a richer feedback loop than ever before, but it also puts our data model front and centre, exposed directly to business users in a way that makes design quality impossible to hide.

This session will recap the foundations of good data product design, then dive into what conversational BI means for analytics teams. How do we design models that give the best foundation? How can we capture and interpret this new stream of usage feedback? What does success look like? We'll answer all of these questions and more.
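
To make that feedback loop concrete, here is a minimal sketch of the kind of logging and analysis a team might bolt onto a conversational BI interface. The event schema, field names, and follow-up heuristic are assumptions for illustration, not any particular product's telemetry.

```python
# Illustrative sketch: logging conversational-BI questions and measuring where
# the data model frustrates users. Event schema and metrics are assumptions.
from collections import Counter
from dataclasses import dataclass


@dataclass
class QuestionEvent:
    session_id: str
    question: str
    answered: bool   # did the model/semantic layer produce an answer?
    follow_up: bool  # was this a rephrasing of the previous question?


def summarize_feedback(events: list[QuestionEvent]) -> dict[str, float]:
    """Turn raw question events into a few crude data-model health signals."""
    total = len(events)
    unanswered = sum(1 for e in events if not e.answered)
    follow_ups = sum(1 for e in events if e.follow_up)
    return {
        "unanswered_rate": unanswered / total if total else 0.0,
        "follow_up_rate": follow_ups / total if total else 0.0,
    }


events = [
    QuestionEvent("s1", "What was revenue last quarter?", answered=True, follow_up=False),
    QuestionEvent("s1", "No, net revenue excluding refunds", answered=False, follow_up=True),
    QuestionEvent("s2", "Top 5 products by margin?", answered=True, follow_up=False),
]

print(summarize_feedback(events))
print(Counter(e.question.split()[0].lower() for e in events))  # what do users ask about first?
```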

Analytics engineers are at a crossroads. Back in 2018, dbt paved the way for this new kind of data professional: people who had technical ability and could understand business context. But here's the thing: AI is automating traditional tasks like pipeline building and dashboard creation. So then what happens to analytics engineers? They don't disappear - they evolve.

The same skills that made analytics engineers valuable also make them perfect for a new role I'm calling 'Analytics Intelligence Engineers.' Instead of writing SQL, they're writing the context that makes AI actually useful for business users.

In this talk, I'll show you what this evolution looks like day-to-day. We'll explore building semantic layers, crafting AI context, and measuring AI performance - all through real examples using Lightdash. You'll see how the work shifts from data plumbing to data intelligence, and walk away with practical tips for making AI tools more effective in your organization. Whether you're an analytics engineer wondering about your future or a leader planning your data strategy, this session will help you understand where the field is heading and how to get there.
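
As a toy illustration of "writing the context that makes AI useful," here is a minimal sketch that renders curated metric metadata into prompt context for an AI assistant. The dictionary fields and helper function are assumptions for the example and are not Lightdash's actual configuration format.

```python
# Illustrative sketch only: a toy "semantic layer" entry as a plain dict,
# rendered into context for an LLM prompt. Field names are assumptions and
# do not reflect Lightdash's real YAML/config schema.
REVENUE_METRIC = {
    "name": "net_revenue",
    "description": "Gross revenue minus refunds and discounts, in EUR.",
    "sql": "SUM(order_amount) - SUM(refund_amount) - SUM(discount_amount)",
    "caveats": "Excludes intercompany orders; refreshed daily at 06:00 UTC.",
}


def build_ai_context(metric: dict) -> str:
    """Turn curated metric metadata into context an AI assistant can rely on,
    so business users get consistent definitions instead of improvised SQL."""
    return (
        f"Metric: {metric['name']}\n"
        f"Definition: {metric['description']}\n"
        f"SQL: {metric['sql']}\n"
        f"Caveats: {metric['caveats']}\n"
    )


print(build_ai_context(REVENUE_METRIC))
```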

Penguin Random House, the world’s largest trade book publisher, relies on data to power every part of its global business, from supply chain operations to editorial workflows and royalty reconciliation. As the complexity of PRH’s dbt pipelines grew, manual checks and brittle tests could no longer keep pace. The Data Governance team knew they needed a smarter, scalable approach to ensure trusted data.

In this session, Kerry Philips, Head of Data Governance at Penguin Random House, will reveal how the team transformed data quality using Sifflet’s observability platform. Learn how PRH integrated column-level lineage, business-rule-aware logic, and real-time alerts into a single workspace, turning fragmented testing into a cohesive strategy for trust, transparency, and agility.

Attendees will gain actionable insights on:

- Rapidly deploying observability without disrupting existing dbt workflows

- Encoding business logic into automated data tests

- Reducing incident resolution times and freeing engineers to innovate

- Empowering analysts to act on data with confidence

If you’ve ever wondered how a company managing millions of ISBNs ensures every dashboard tells the truth, this session offers a behind-the-scenes look at how data observability became PRH’s newest bestseller.

Following on from the Building consumable data products keynote, we will dive deeper into the interactions around the data product catalog, to show how the network effect of explicit data sharing relationships starts to pay dividends to the participants. Such as:

For the product consumer:

• Searching for products, understanding content, costs, terms and conditions, licenses, quality certifications etc

• Inspecting sample data, choosing preferred data format, setting up a secure subscription, and seeing data provisioned into a database from the product catalog.

• Providing feedback and requesting help

• Reviewing own active subscriptions

• Understanding the lineage behind each product along with outstanding exceptions and future plans

For the product manager/owner:

• Setting up a new product, creating a new release of an existing product and issuing a data correction/restatement

• Reviewing a product’s active subscriptions and feedback/requests from consumers

• Interacting with the technical teams on pipeline implementations along with issues and proposed enhancements

For the data governance team:

• Viewing the network of dependencies between data products (the data mesh) to understand the data value chains and risk concentrations

• Reviewing a dashboard of metrics around the data products including popularity, errors/exceptions, subscriptions, interaction

• Showing traceability from a governance policy relating to, say, data sovereignty or data privacy, to the product implementations.

• Building trust profiles for producers and consumers

The aim of the demonstrations and discussions is to explore the principles and patterns relating to data products, rather than push a particular implementation approach.

Having said that, all of the software used in the demonstrations is open source. Principally this is Egeria, OpenLineage and Unity Catalog from the Linux Foundation, plus Apache Airflow, Apache Kafka and Apache Superset from the Apache Software Foundation.

Videos of the demonstrations will be available on YouTube after the conference and the complete demo software can be downloaded and run on a laptop so you can share your experiences with your teams after the event.