talk-data.com

Topic: Dashboard
Tags: data_visualization · reporting · bi
Activity trend: peak of 23 per quarter, 2020-Q1 to 2026-Q1

Activities

306 activities · Newest first

Summary The technological and social ecosystem of data engineering and data management has been reaching a stage of maturity recently. As part of this stage in our collective journey the focus has been shifting toward operation and automation of the infrastructure and workflows that power our analytical workloads. It is an encouraging sign for the industry, but it is still a complex and challenging undertaking. In order to make this world of DataOps more accessible and manageable the team at Nexla has built a platform that decouples the logical unit of data from the underlying mechanisms so that you can focus on the problems that really matter to your business. In this episode Saket Saurabh (CEO) and Avinash Shahdadpuri (CTO) share the story behind the Nexla platform, discuss the technical underpinnings, and describe how their concept of a Nexset simplifies the work of building data products for sharing within and between organizations.
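The idea of decoupling a logical dataset from its underlying physical source can be sketched in a few lines of Python. This is a purely hypothetical illustration of the concept, not Nexla's actual Nexset API; the class and field names here are invented.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class LogicalDataset:
    """A logical unit of data, decoupled from its physical source."""
    name: str
    schema: dict[str, type]
    reader: Callable[[], list[dict[str, Any]]] = field(repr=False, default=lambda: [])

    def records(self) -> list[dict[str, Any]]:
        """Read records and validate them against the declared schema."""
        rows = self.reader()
        for row in rows:
            for col, typ in self.schema.items():
                if not isinstance(row.get(col), typ):
                    raise TypeError(f"{self.name}.{col}: expected {typ.__name__}")
        return rows

# Consumers work against the logical dataset; the physical source
# (a list literal here, but equally an API or a warehouse table)
# can be swapped without touching downstream code.
orders = LogicalDataset(
    name="orders",
    schema={"id": int, "amount": float},
    reader=lambda: [{"id": 1, "amount": 9.99}, {"id": 2, "amount": 4.50}],
)
```

The point of the sketch is the seam between `schema` and `reader`: the consumer sees a validated, named dataset regardless of where the bytes live.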

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

Schema changes, missing data, and volume anomalies caused by your data sources can happen without any advance notice if you lack visibility into your data-in-motion. That leaves DataOps reactive to data quality issues and can make your consumers lose confidence in your data. By connecting to your pipeline orchestrator like Apache Airflow and centralizing your end-to-end metadata, Databand.ai lets you identify data quality issues and their root causes from a single dashboard. With Databand.ai, you’ll know whether the data moving from your sources to your warehouse will be available, accurate, and usable when it arrives. Go to dataengineeringpodcast.com/databand to sign up for a free 30-day trial of Databand.ai and take control of your data quality today.

We’ve all been asked to help with an ad-hoc request for data by the sales and marketing team. Then it becomes a critical report that they need updated every week or every day. Then what do you do? Send a CSV via email? Write some Python scripts to automate it? But what about incremental sync, API quotas, error handling, and all of the other details that eat up your time? Today, there is a better way.
With Census, just write SQL or plug in your dbt models and start syncing your cloud warehouse to SaaS applications like Salesforce, Marketo, Hubspot, and many more. Go to dataengineeringpodcast.com/census today to get a free 14-day trial. Your host is Tobias Macey and today I’m interviewing Saket Saurabh and Avinash Shahdadpuri about Nexla, a platform for powering data operations and sharing within and across businesses

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what Nexla is and the story behind it?
What are the major problems that Nexla is aiming to solve?

What are the components of a data platform that Nexla might replace?

What are the use cases and benefits of being able to publish data sets for use outside and across organizations?
What are the different elements involved in implementing DataOps?
How is the Nexla platform implemented?

What have been the most complex engineering challenges?
How has the architecture changed or evolved since you first began working on it?
What are some of the assumpt

Summary A major concern that comes up when selecting a vendor or technology for storing and managing your data is vendor lock-in. What happens if the vendor fails? What if the technology can’t do what I need it to? Compilerworks set out to reduce the pain and complexity of migrating between platforms, and in the process added an advanced lineage tracking capability. In this episode Shevek, CTO of Compilerworks, takes us on an interesting journey through the many technical and social complexities that are involved in evolving your data platform and the system that they have built to make it a manageable task.

Announcements

Your host is Tobias Macey and today I’m interviewing Shevek about Compilerworks and his work on writing compilers to automate data lineage tracking from your SQL code

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what Compilerworks is and the story behind it?
What is a compiler?

How are you applying compilers to the challenges of data processing systems?

What are some use cases that Compilerworks is uniquely well suited to?
There are a number of other methods and systems available for tracking and/or computing data lineage. What are the benefits of the approach that you are taking with Compilerworks?
Can you describe the design and implementation of the Compilerworks platform?

How has the system changed or evolved since you first began working on it?

What programming languages and SQL dialects do you currently support?

Which have been the most challenging to work with?
How do you handle verification/validation of the algebraic representation of SQL code given the variability of implementations and the flexibility of the specification?

Can you talk through the process of getting Compilerworks
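The lineage questions above hint at the core trick: compile SQL into a structure you can traverse. As a drastically simplified illustration (a regex toy, nothing like Compilerworks' actual compiler, and it assumes target columns keep their source names), column-level lineage for a single INSERT ... SELECT could be extracted like this:

```python
import re

def lineage_edges(sql: str) -> list[tuple[str, str]]:
    """Extract coarse column lineage from a simple
    INSERT INTO <target> SELECT <cols> FROM <source> statement."""
    m = re.match(
        r"INSERT\s+INTO\s+(\w+)\s+SELECT\s+(.+?)\s+FROM\s+(\w+)",
        sql.strip(),
        re.IGNORECASE | re.DOTALL,
    )
    if not m:
        raise ValueError("unsupported statement")
    target, cols, source = m.groups()
    # One edge per column: source.col feeds target.col.
    return [(f"{source}.{c.strip()}", f"{target}.{c.strip()}")
            for c in cols.split(",")]

edges = lineage_edges("INSERT INTO daily_rev SELECT day, revenue FROM sales")
```

A real compiler handles expressions, aliases, joins, CTEs, and dialect quirks, which is exactly why regex-based approaches break down and an algebraic representation becomes necessary.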

Summary Every organization needs to be able to use data to answer questions about their business. The trouble is that the data is usually spread across a wide and shifting array of systems, from databases to dashboards. The other challenge is that even if you do find the information you are seeking, there might not be enough context available to determine how to use it or what it means. Castor is building a data discovery platform aimed at solving this problem, allowing you to search for and document details about everything from a database column to a business intelligence dashboard. In this episode CTO Amaury Dumoulin shares his perspective on the complexity of letting everyone in the company find answers to their questions and how Castor is designed to help.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

You listen to this show to learn about all of the latest tools, patterns, and practices that power data engineering projects across every domain. Now there’s a book that captures the foundational lessons and principles that underlie everything that you hear about here. I’m happy to announce I collected wisdom from the community to help you in your journey as a data engineer and worked with O’Reilly to publish it as 97 Things Every Data Engineer Should Know. Go to dataengineeringpodcast.com/97things today to get your copy!

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

Are you bored with writing scripts to move data into SaaS tools like Salesforce, Marketo, or Facebook Ads? Hightouch is the easiest way to sync data into the platforms that your business teams rely on. The data you’re looking for is already in your data warehouse and BI tools. Connect your warehouse to Hightouch, paste a SQL query, and use their visual mapper to specify how data should appear in your SaaS systems. No more scripts, just SQL. Supercharge your business teams with customer data using Hightouch for Reverse ETL today. Get started for free at dataengineeringpodcast.com/hightouch.

Have you ever had to develop ad-hoc solutions for security, privacy, and compliance requirements?
Are you spending too much of your engineering resources on creating database views, configuring database permissions, and manually granting and revoking access to sensitive data? Satori has built the first DataSecOps Platform that streamlines data access and security. Satori’s DataSecOps automates data access controls, permissions, and masking for all major data platforms such as Snowflake, Redshift and SQL Server and even delegates data access management to business users, helping you move your organization from default data access to need-to-know access. Go to dataengineeringpodcast.com/satori today and get a $5K credit for your next Satori subscription. Your host is Tobias Macey and today I’m interviewing Amaury Dumoulin about Castor, a managed platform for easy data cataloging and discovery

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what Castor is and the story behind it?
The market for data catalogues is nascent but growing fast. What are the broad categories for the different products and projects in the space?
What do you see as the core features that are required to be competitive?

In what ways has that changed in

Mastering Tableau 2021 - Third Edition

Tableau 2021 brings a wide range of tools and techniques for mastering data visualization and business intelligence. In this book, you will delve into the advanced methodologies to fully utilize Tableau's capabilities. Whether you're dealing with geo-spatial, time-series analytics, or complex dashboards, this resource provides expertise through real-world data challenges. What this Book will help me do Draw connections between multiple databases and create insightful Tableau dashboards. Master advanced data visualization techniques that lead to impactful storytelling. Understand Tableau's integration with programming languages such as Python and R. Analyze datasets with time-series and geo-spatial methods to gain predictive insights. Leverage Tableau Prep Builder for efficient data cleaning and transformation processes. Author(s) Marleen Meier and David Baldwin are seasoned professionals in business intelligence and data analytics. They bring years of practical experience and have helped numerous organizations worldwide transform their data visualization strategies using Tableau. Their collaborative approach ensures a comprehensive, beginner to advanced learning experience. Who is it for? This book is perfect for business intelligence analysts, data analysts, and industry professionals who are already familiar with Tableau's basics and wish to expand their knowledge. It provides advanced techniques and implementations of Tableau for improving data storytelling and dashboard performance. Readers seeking to connect Tableau with external programming tools will also greatly benefit from this guide.

Pro Power BI Theme Creation: JSON Stylesheets for Automated Dashboard Formatting

Use JSON theme files to standardize the look of Power BI dashboards and reports. This book shows how you can create theme files using the Power BI Desktop application to define high-level formatting attributes for dashboards as well as how to tailor detailed formatting specifications for individual dashboard elements in JSON files. Standardize the look of your dashboards and apply formatting consistently over all your reports. The techniques in this book provide you with tight control over the presentation of all aspects of the Power BI dashboards and reports that you create. Power BI theme files use JSON (JavaScript Object Notation) as their structure, so the book includes a brief introduction to JSON as well as how it applies to Power BI themes. The book further includes a complete reference to all the current formatting definitions and JSON structures that are at your disposal for creating JSON theme files. Finally, the book includes dozens of theme files, from the simple to the most complex, that you can adopt and adapt to suit your own requirements. What You Will Learn Produce designer output without manually formatting every individual visual in a Power BI dashboard Standardize presentation for families of dashboard types Switch presentation styles in a couple of clicks Save dozens, or hundreds, of hours laboriously formatting dashboards Define enterprise-wide presentation standards Retroactively apply standard styles to existing dashboards Who This Book Is For Power BI users who want to save time by defining standardized formatting for their dashboards and reports, IT professionals who want to create corporate standards of dashboard presentation, and marketing and communication specialists who want to set organizational standards for dashboard delivery
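Since a theme file is just JSON, the skeleton is easy to sketch. The top-level keys below (name, dataColors, background, foreground, tableAccent) are standard Power BI theme attributes; the palette values are arbitrary examples.

```python
import json

# Minimal Power BI theme definition. "name" and "dataColors" set the
# theme title and the chart color palette; "background", "foreground",
# and "tableAccent" control page and table styling.
theme = {
    "name": "Corporate Standard",
    "dataColors": ["#1F77B4", "#FF7F0E", "#2CA02C", "#D62728"],
    "background": "#FFFFFF",
    "foreground": "#252423",
    "tableAccent": "#1F77B4",
}

theme_json = json.dumps(theme, indent=2)
# Save the string to a .json file and import it in Power BI Desktop
# via the report theme options to apply it to a dashboard.
```

Detailed per-visual formatting goes into further nested structures, which is what the reference portion of the book catalogs.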

Interactive Dashboards and Data Apps with Plotly and Dash

This book, "Interactive Dashboards and Data Apps with Plotly and Dash", is a practical guide to building dynamic dashboards and applications using the Dash Python framework. It covers creating visualizations, integrating interactive controls, and deploying the apps, all without requiring JavaScript expertise. What this Book will help me do Master creating interactive data dashboards using Dash and Plotly. Understand how to integrate controls such as sliders and dropdowns into apps. Learn to use Plotly Express for visually representing data with ease. Develop capabilities to deploy a fully functional web app for data interaction. Understand how to use multi-page configurations and URLs for advanced apps. Author(s) Elias Dabbas is a seasoned Python developer with extensive expertise in data visualization and full-stack development. Drawing from real-world experience, he brings a practical approach to teaching, ensuring that learners understand not only how to build applications but why the approach works. Who is it for? This book is ideal for data analysts, engineers, and developers looking to enhance their visualization capabilities. If you are familiar with Python and have basic HTML skills, you will find this book accessible and rewarding. Beginners looking to explore advanced dashboard creation without JavaScript will also appreciate the clear approach.

podcast_episode
by Dan Becker (decision.ai), Adel (DataFramed)

In this episode of DataFramed, Adel speaks with Dan Becker, CEO of decision.ai and founder of Kaggle Learn, on the intersection of decision sciences and AI, and best practices when aligning machine learning to business value.

Throughout the episode, Dan deep-dives into his background, how he reached the top of a Kaggle competition, the difference between machine learning in a Kaggle competition and the real world, the role of empathy when aligning machine learning to business value, the importance of decision sciences when maximizing the value of machine learning in production, and more.

Links:

Follow Dan on Twitter
Follow Dan on LinkedIn
What 70% of data science learners do wrong
Check out Dan’s course on DataCamp
decision.ai
Dan’s climate dashboard

Summary The reason for collecting, cleaning, and organizing data is to make it usable by the organization. One of the most common and widely used methods of access is through a business intelligence dashboard. Superset is an open source option that has been gaining popularity due to its flexibility and extensible feature set. In this episode Maxime Beauchemin discusses how data engineers can use Superset to provide self service access to data and deliver analytics. He digs into how it integrates with your data stack, how you can extend it to fit your use case, and why open source systems are a good choice for your business intelligence. If you haven’t already tried out Superset then this conversation is well worth your time. Give it a listen and then take it for a test drive today.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.

RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more.
Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today. Your host is Tobias Macey and today I’m interviewing Max Beauchemin about Superset, an open source platform for data exploration, dashboards, and business intelligence

Interview

Introduction
How did you get involved in the area of data management?
Can you start by describing what Superset is?
Superset is becoming part of the reference architecture for a modern data stack. What are the factors that have contributed to its popularity over other tools such as Redash, Metabase, Looker, etc.?
Where do dashboarding and exploration tools like Superset fit in the responsibilities and workflow of a data engineer?
What are some of the challenges that Superset faces in being performant when working with large data sources?

Which data sources have you found to be the most challenging to work with?

What are some anti-patterns that users of Superset mig

Welcome to another exciting masterclass! Today I'm taking you behind the scenes of how two women leaders on our BI Brainz team created not one, but two of our most downloaded dashboard templates ever! To share these juicy details, we are joined by Raquel Seville, CEO of BI Brainz Caribbean, and Anna Ria, our very own Head of Design and well-known BI Data Storytelling Accelerator trainer.

In the first part, you'll hear from Raquel, as she explains how she used our BI Data Storytelling Framework to transform a 271-page annual report into a simple, insightful Power BI Stock Dashboard template that got a thumbs-up from the company's CEO on social media. Then, we jump into part two with Anna, as she shares the unique inspiration behind our now-famous Analytics Design Guide template and why she believes it became one of the most downloaded templates in BI Brainz history, with over 600 downloads in the first 48 hours!

In this episode, you'll learn:
[0:05:33] What Raquel has learned as a woman in a male-dominated industry.
[0:06:10] How she built a single view dashboard from a 271-page annual report by focusing on a clear goal.
[0:10:16] Raquel's idea to create the Stock Data Viz Template, a free Power BI Dashboard.
[0:28:27] What the Analytics Design Guide (or ADG) actually is: a set of guidelines that anybody can use.
[0:30:46] Who will benefit from using the ADG, from novices to dashboard wizards!
[0:34:08] How ADG can help users that are "over creative" by honing in on specific areas.
[0:35:10] The story-driven analytics that drive the ADG and why Anna believes that storytelling is crucial for success.

For full show notes, and the links mentioned visit: https://bibrainz.com/podcast/79

Enjoyed the Show? Please leave us a review on iTunes.

Beginning Power Apps: The Non-Developer's Guide to Building Business Applications

Transform the way your business works with easy-to-build apps. With this updated and expanded second edition, you can build business apps that work with your company's systems and databases, without having to enlist the expertise of costly, professionally trained software developers. In this new edition, business applications expert Tim Leung offers step-by-step guidance on how you can improve all areas of your business. He shows how you can replace manual or paper processes with modern apps that run on phone or tablet devices. For administrative and back-office operations, he covers how to build apps with workflow and dashboard capabilities. To facilitate collaboration with customers and clients, you’ll learn how to build secure web portals with data entry capabilities, including how to customize those portals with code. This hands-on new edition has 10 new chapters—including coverage on model-driven and portal apps, artificial intelligence, building components using the Power Apps Component Framework, using PowerShell for administration, and more—complete with context, explanatory screenshots, and non-technical terminology. What You Will Learn Create offline capable mobile apps and responsive web apps Carry out logic, data access, and data entry through formulas Embellish apps with charting, file handling, photo, barcode, and location features Set up Common Data Service, SharePoint, and SQL data sources Use AI to predict outcomes, recognize images, and analyze sentiment Integrate apps with external web services and automate tasks with Power Automate Build reusable code and canvas components, make customizations with JavaScript Transfer apps and data, and secure, administer, and monitor Power Apps environments Who This Book Is For Beginners and non-developers, and assumes no prior knowledge of Power Apps

Summary Data governance is a term that encompasses a wide range of responsibilities, both technical and process oriented. One of the more complex aspects is that of access control to the data assets that an organization is responsible for managing. The team at Immuta has built a platform that aims to tackle that problem in a flexible and maintainable fashion so that data teams can easily integrate authorization, data masking, and privacy enhancing technologies into their data infrastructure. In this episode Steve Touw and Stephen Bailey share what they have built at Immuta, how it is implemented, and how it streamlines the workflow for everyone involved in working with sensitive data. If you are starting down the path of implementing a data governance strategy then this episode will provide a great overview of what is involved.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

Feature flagging is a simple concept that enables you to ship faster, test in production, and do easy rollbacks without redeploying code. Teams using feature flags release new software with less risk, and release more often. ConfigCat is a feature flag service that lets you easily add flags to your Python code, and 9 other platforms. By adopting ConfigCat you and your manager can track and toggle your feature flags from their visual dashboard without redeploying any code or configuration, including granular targeting rules. You can roll out new features to a subset of your users for beta testing or canary deployments. With their simple API, clear documentation, and pricing that is independent of your team size you can get your first feature flags added in minutes without breaking the bank.
Go to dataengineeringpodcast.com/configcat today to get 35% off any paid plan with code DATAENGINEERING or try out their free forever plan. You invest so much in your data infrastructure – you simply can’t afford to settle for unreliable data. Fortunately, there’s hope: in the same way that New Relic, DataDog, and other Application Performance Management solutions ensure reliable software and keep application downtime at bay, Monte Carlo solves the costly problem of broken data pipelines. Monte Carlo’s end-to-end Data Observability Platform monitors and alerts for data issues across your data warehouses, data lakes, ETL, and business intelligence. The platform uses machine learning to infer and learn your data, proactively identify data issues, assess its impact through lineage, and notify those who need to know before it impacts the business. By empowering data teams with end-to-end data reliability, Monte Carlo helps organizations save time, increase revenue, and restore trust in their data. Visit dataengineeringpodcast.com/montecarlo today to request a demo and see how Monte Carlo delivers data observability across your data inf
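The feature-flag mechanics described above (toggles, percentage rollouts, canary releases) boil down to deterministic user bucketing. Here is a minimal stdlib sketch of the idea (not ConfigCat's SDK; real clients also handle targeting rules and dashboard sync):

```python
import hashlib

def flag_enabled(flag: str, user_id: str, rollout_pct: int) -> bool:
    """Deterministically bucket a user into a percentage rollout.
    The same user always gets the same answer for a given flag."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0..99
    return bucket < rollout_pct

# Roll a hypothetical "new-dashboard" feature out to 25% of users.
enabled_users = [u for u in ("alice", "bob", "carol", "dave")
                 if flag_enabled("new-dashboard", u, 25)]
```

Because the bucket is derived from a hash rather than stored state, raising the percentage only ever adds users, which is what makes gradual rollouts and easy rollbacks safe.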

Google Data Studio for Beginners: Start Making Your Data Actionable

Google Data Studio is becoming a go-to tool in the analytics community. All business roles across the industry benefit from foundational knowledge of this now-essential technology, and Google Data Studio for Beginners is here to provide it. Release your locked-up data and turn it into beautiful, actionable, and shareable reports that can be consumed by experts and novices alike. Authors Grant Kemp and Gerry White begin by walking you through the basics, such as how to create simple dashboards and interactive visualizations. As you progress through Google Data Studio for Beginners, you will build up the knowledge necessary to blend multiple data sources and create comprehensive marketing dashboards. Some intermediate features such as calculated fields, cleaning up data, and data blending to build powerhouse reports are featured as well. Presenting your data in client-ready, digestible forms is a key factor that many find to be a roadblock, and this book will help strengthen this essential skill in your organization. Centralizing the power from sources such as Google Analytics, online surveys, and a multitude of other popular data management tools puts you as a business leader and analyzer ahead of the rest. Your team as a whole will benefit from Google Data Studio for Beginners, because by using these tools, teams can collaboratively work on data to build their understanding and turn their data into action. Data Studio is quickly solidifying itself as the industry standard, and you don’t want to miss this essential guide for excelling in it.
What You Will Learn Combine various data sources to create great looking and actionable visualizations Reuse and modify other dashboards that have been created by industry pros Use intermediate features such as calculated fields and data blending to build powerhouse reports Who This Book Is For Users looking to learn Google Analytics, SEO professionals, digital marketers, and other business professionals who want to mine their data into an actionable dashboard.
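Data blending and calculated fields, mentioned above, reduce to a keyed join plus a derived column. A small Python sketch of the concept (the page/sessions/satisfaction data is invented; Data Studio does this through its UI rather than code):

```python
# Two "sources" sharing a join key, as in Data Studio's data blending.
analytics = [
    {"page": "/home", "sessions": 120},
    {"page": "/pricing", "sessions": 45},
]
survey = [
    {"page": "/home", "satisfaction": 4.2},
    {"page": "/pricing", "satisfaction": 3.8},
]

# Blend: merge rows from both sources on the shared "page" key.
by_page = {row["page"]: dict(row) for row in analytics}
for row in survey:
    by_page.setdefault(row["page"], {"page": row["page"]}).update(row)

# Calculated field: a derived column computed from blended columns.
for row in by_page.values():
    row["weighted"] = row.get("sessions", 0) * row.get("satisfaction", 0)

blended = list(by_page.values())
```

Each blended row now carries columns from both sources plus the calculated field, which is exactly the shape a blended chart in a report consumes.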

In Ep 50 with Jurgen Faiist, I discussed why we need a new data visualization language. In today's episode, we return to debate that topic with my good Suisse buddy Raphael Branger, who is a Principal Data & Analytics Consultant at IT-Logix in Switzerland. Raphael is an IBCS Certified Consultant (he introduced me to it) as well as a Certified Disciplined Agile Practitioner with more than 18 years of experience in business intelligence and data warehousing. I met Raphael almost a decade ago when he invited me to keynote their epic BI event in Zurich. As one of the most passionate people I've met around requirements gathering ('engineering', as he calls it), his feedback was instrumental to the ongoing enhancement of our BI Dashboard Formula methodology!

In today's episode, Raphael gives examples of why a new data viz language is needed and explains the International Business Communication Standards (IBCS) SUCCESS poster and how those standards can help. So many knowledge bombs in this one! Be sure to tune in!

[03:01] The pros and cons of whether a new data visualization language is needed [03:15] User expectations: real-world experiences using the IBCS standards [24:45] How to ease communication between consumer and creator For full show notes, and the links mentioned, visit: https://bibrainz.com/podcast/63

Enjoyed the Show?  Please leave us a review on iTunes. Free Data Storytelling Training Register before it sells out again! New dates are finally available for our BI Data Storytelling Mastery Accelerator 3-Day Live Workshop. Many BI teams are still struggling to deliver the consistent, highly engaging analytics their users love. At the end of the workshop, you'll leave with a clear BI delivery action plan. Register today!

Advanced R 4 Data Programming and the Cloud: Using PostgreSQL, AWS, and Shiny

Program for data analysis using R and learn practical skills to make your work more efficient. This revised book explores how to automate running code and the creation of reports to share your results, as well as writing functions and packages. It includes key R 4 features such as a new color palette for charts, an enhanced reference counting system, and normalization of matrix and array types where matrix objects now formally inherit from the array class, eliminating inconsistencies. Advanced R 4 Data Programming and the Cloud is not designed to teach advanced R programming nor to teach the theory behind statistical procedures. Rather, it is designed to be a practical guide moving beyond merely using R; it shows you how to program in R to automate tasks. This book will teach you how to manipulate data in modern R structures and includes connecting R to databases such as PostgreSQL, cloud services such as Amazon Web Services (AWS), and digital dashboards such as Shiny. Each chapter also includes a detailed bibliography with references to research articles and other resources that cover relevant conceptual and theoretical topics. What You Will Learn Write and document R functions using R 4 Make an R package and share it via GitHub or privately Add tests to R code to ensure it works as intended Use R to talk directly to databases and do complex data management Run R in the Amazon cloud Deploy a Shiny digital dashboard Generate presentation-ready tables and reports using R Who This Book Is For Working professionals, researchers, and students who are familiar with R and basic statistical techniques such as linear regression and who want to learn how to take their R coding and programming to the next level.

Learn Grafana 7.0

"Learn Grafana 7.0" is the ultimate beginner's guide to leveraging Grafana's capabilities for analytics and interactive dashboards. You'll master real-time data monitoring and visualization, and learn how to query and explore metrics with a hands-on approach to Grafana 7.0's new features. What this Book will help me do Learn to install and configure Grafana from scratch, preparing you for real-world data analysis tasks. Navigate and utilize the Graph panel in Grafana effectively, ensuring clear and actionable visual insights. Incorporate advanced dashboard features such as annotations, templates, and links to enhance data monitoring. Integrate Grafana with major cloud providers like AWS and Azure for robust monitoring solutions. Implement secure user authentication and fine-tuned permissions for managing teams and sharing insights safely. Author(s) Salituro, the author of "Learn Grafana 7.0," is a data visualization expert with years of experience in software development and analytics. Salituro focuses on creating understandable and accessible resources for developers and analysts of all skill levels, bringing a hands-on, practical approach to technical learning. Who is it for? This book is perfect for data analysts, business intelligence developers, and administrators looking to build skills in data visualization and monitoring with Grafana 7.0. If you're eager to create interactive dashboards and learn practical applications of Grafana's features, this book is for you. Beginners to Grafana are fully accommodated, though familiarity with data visualization principles is beneficial. For those seeking to monitor cloud services like AWS with Grafana, this book is indispensable.
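The dashboards the book covers are stored by Grafana as JSON documents, which makes them easy to export, share, and version-control. A minimal sketch of that dashboard model is shown below, assuming a Prometheus data source and the classic Graph panel from Grafana 7.0; the dashboard title, panel title, and metric query here are illustrative placeholders, not taken from the book:

```json
{
  "title": "Example monitoring dashboard",
  "schemaVersion": 26,
  "panels": [
    {
      "id": 1,
      "type": "graph",
      "title": "CPU usage",
      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 0 },
      "targets": [
        { "refId": "A", "expr": "rate(node_cpu_seconds_total[5m])" }
      ]
    }
  ]
}
```

A document like this can be exported from or imported into Grafana's UI, which is the usual way teams reuse and share dashboards.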

This audio blog is about how CHOP’s data and analytics (DnA) team uses near real-time data and information to decide how to marshal its resources to contain the pandemic. The culmination of all of this work has been an enterprise COVID-19 dashboard that is distributed to enterprise leadership daily. Originally published at: https://www.eckerson.com/articles/chop-harnesses-the-power-of-data-analytics-to-address-the-covid-19-pandemic

Send us a text Want to be featured as a guest on Making Data Simple? Reach out to us at [[email protected]] and tell us why you should be next.  Abstract Currently, COVID-19 is disrupting the world. In an effort to better provide updated information on the status of the pandemic, IBM and The Weather Channel have created a COVID-19 dashboard. Bill Higgins, IBM Distinguished Engineer, and Daniel Benoit, Program Director of Information Governance, have come on the podcast this week to discuss this new initiative. Together with host Al Martin, they discuss the purpose of this project, their current findings, and how they personally have been impacted.  Check out the dashboard here.  Connect with Bill LinkedIn Connect with Daniel LinkedIn Show Notes 10:16 - Get up to speed on Natural Language Processing here. 14:47 - Not sure what a data lake is? Find out here. 21:49 - Learn more on why extensibility is important to your APIs here.  Connect with the Team Producer Liam Seston - LinkedIn. Producer Lana Cosic - LinkedIn. Producer Meighann Helene - LinkedIn.  Producer Kate Brown - LinkedIn. Producer Allison Proctor - LinkedIn. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

JMP Essentials, 3rd Edition

Grasp the essentials of JMP to generate rapid results. JMP Essentials: An Illustrated Guide for New Users, Third Edition, is designed for new or novice JMP users who need to generate meaningful analysis quickly. The book focuses on the most commonly used platforms and typical workflow of the user, from data importing, exploring, and visualizing to modeling and sharing results with others. Throughout the book, the authors emphasize results over theory, providing just the essential steps with corresponding screenshots. In most cases, each section completes a JMP task, which maximizes the book’s utility as a reference. This edition has new instructions and screenshots reflecting the features added to the latest release of JMP software, including updated sections on JMP Dashboard Builder, Query Builder, the Fit Model platform, JMP Public and JMP Live, and a more detailed look at the JMP website. Each chapter contains a family of features that are carefully crafted to first introduce you to basic features and then move on to more advanced topics. JMP Essentials: An Illustrated Guide for New Users, Third Edition, is the quickest and most accessible reference book available.

Practical Highcharts with Angular: Your Essential Guide to Creating Real-time Dashboards

Learn to create stunning animated and interactive charts using Highcharts and Angular. Use and build on your existing knowledge of HTML, CSS, and JavaScript to develop impressive dashboards that will work in all modern browsers. You will learn how to use Highcharts, call backend services for data, and easily construct real-time data dashboards. You'll also learn how to combine your code with jQuery and Angular. This book provides the best solutions for real-time challenges and covers a wide range of charts, including line, area, map, plot, different types of pie charts, gauge, heat map, histogram, stacked bar, scatter plot, and 3D charts. After reading this book, you'll be able to export your charts in different formats for project-based learning. Highcharts is one of the most useful products worldwide for developing charts on the web, and Angular is well known for speed. Using Highcharts with Angular, developers can build fast, interactive dashboards. Get up to speed using this book today. What You’ll Learn How to develop interactive, animated dashboards How to implement Highcharts using Angular How to develop a real-time application with the use of WebAPI, Angular, and Highcharts How to create interactive styling themes and colors for a dashboard Who This Book Is For This book is aimed at developers, dev leads, software architects, students, or enthusiasts who are already familiar with HTML, CSS, and JavaScript.
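The core workflow the book describes — fetching data from a backend service and handing a chart configuration to Highcharts from an Angular app — can be sketched as a plain TypeScript options object. The interface below is a simplified stand-in for the `Options` type exported by the real `highcharts` package, though the field names (`title.text`, `xAxis.categories`, `series`) follow the actual Highcharts options structure; the function and chart names are illustrative:

```typescript
// Simplified stand-in for Highcharts' Options type (a real app would
// import Options from the 'highcharts' package instead).
interface SeriesOptions {
  type: "line" | "area" | "bar" | "pie";
  name: string;
  data: number[];
}

interface ChartOptions {
  title: { text: string };
  xAxis: { categories: string[] };
  series: SeriesOptions[];
}

// Build a dashboard panel's configuration from raw backend data,
// e.g. the JSON payload of a REST call made with Angular's HttpClient.
function buildSalesChart(months: string[], sales: number[]): ChartOptions {
  return {
    title: { text: "Monthly Sales" },
    xAxis: { categories: months },
    series: [{ type: "line", name: "Sales", data: sales }],
  };
}

const options = buildSalesChart(["Jan", "Feb", "Mar"], [120, 95, 143]);
console.log(options.series[0].data.length); // one data point per month
```

In a browser, the resulting object would be passed to `Highcharts.chart('container', options)` (or bound to a chart component) to render the panel; re-running the builder with fresh backend data is what makes the dashboard "real-time."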

podcast_episode
by Mico Yuk (Data Storytelling Academy) , Mustafa Mustafa (Ferrara Candy Company)

BI tools change by the minute, so have you ever considered outsourcing your data visualization needs in the future? Maybe you should, especially if you don't have proper in-house skill sets. Don't risk your reputation because users can't unsee a bad data visualization.

Today's guest is long-term BI Brainz customer Mustafa Mustafa, senior director of IT at Ferrara Candy Company. Mustafa transformed Ferrara Candy into a forward-thinking and innovative company. He discusses the pros and cons of outsourcing data visualization and how to choose the right partners. In this episode, you'll learn: [04:13] Key Quote: Users cannot unsee a bad data visualization. - Mico Yuk [11:20] Mustafa's background in learning Mico's BI Framework and dashboard strategies. [20:45] Should data visualization be outsourced? Consider customer cases and challenges, such as communication, common sense, and a strategy for sharing information. For full show notes, and the links mentioned, visit: https://bibrainz.com/podcast/40 Sponsor This exciting season of AOF is sponsored by our BI Data Storytelling Mastery Accelerator 3-Day Live workshop. Our second one is coming up on Jan 28-30 and registration is open! Join us and consider upgrading to be a VIP (we have tons of bonuses planned). Many BI teams are still struggling to deliver the consistent, highly engaging analytics their users love. At the end of three days, you'll leave with the tools, techniques, and resources you need to engage your users. Register today!   Enjoyed the Show?  Please leave us a review on iTunes.