talk-data.com

Topic

BI

Business Intelligence (BI)

data_visualization reporting analytics

1211 tagged

Activity Trend

Peak of 111 activities per quarter; chart spans 2020-Q1 to 2026-Q1.

Activities

1211 activities · Newest first

Teach Yourself VISUALLY Power BI

A comprehensive and fully visual guide to Microsoft Power BI. Teach Yourself VISUALLY Power BI collects all the resources you need to master the everyday use of Microsoft's powerful data visualization software and delivers them in a single, easy-to-use volume. Fully illustrated, step-by-step instructions are combined with crystal-clear screenshots that walk you through the basic and advanced functions of Microsoft Power BI. Teach Yourself VISUALLY Power BI offers the best visual learning techniques with complete source material about the interface and substance of Power BI, as well as:

  • Stepwise guidance on working with, transforming, and processing data sources
  • Instructions for customizing data visualizations to create informative and presentation-ready charts and graphs
  • Full-color, two-page tutorials on the more advanced features of Power BI, including app integrations and data access with DAX

The fastest, easiest way for visual learners to get a handle on Microsoft Power BI, Teach Yourself VISUALLY Power BI is a can't-miss resource, loaded with useful tips for newbies and experts alike.

Transitioning to Microsoft Power Platform: An Excel User Guide to Building Integrated Cloud Applications in Power BI, Power Apps, and Power Automate

Welcome to this step-by-step guide for Excel users, data analysts, and finance specialists. It is designed to take you through practical report and development scenarios, including both the approach and the technical challenges. This book will equip you with an understanding of the overall Power Platform use case for addressing common business challenges. While Power BI continues to be an excellent tool of choice in the BI space, Power Platform is the real game changer. Using an integrated architecture, a small team of citizen developers can build solutions for all kinds of business problems. For small businesses, Power Platform can be used to build bespoke CRM, Finance, and Warehouse management tools. For large businesses, it can be used to build an integration point for existing systems to simplify reporting, operation, and approval processes. The author has drawn on his 15 years of hands-on analytics experience to help you pivot from the traditional Excel-based reporting environment. By using different business scenarios, this book provides clear reasons why a skill is important before you dive into the scenarios. You will use a fast prototyping approach to build exciting reporting, automation, and application solutions and improve them while you acquire new skill sets. The book helps you get started quickly with Power BI. It covers data visualization, collaboration, and governance practices. You will learn about the most practical SQL challenges, and you will learn how to build applications in Power Apps and Power Automate. The book ends with an integrated solution framework that can be adapted to solve a wide range of complex business problems.

What You Will Learn

  • Develop reporting solutions and business applications
  • Understand the Power Platform licensing and development environment
  • Apply data ETL and modeling in Power BI
  • Use data storytelling and dashboard design to better visualize data
  • Carry out data operations with SQL and SharePoint lists
  • Develop useful applications using Power Apps
  • Develop automated workflows using Power Automate
  • Integrate solutions with Power BI, Power Apps, and Power Automate to build enterprise solutions

Who This Book Is For

Next-generation data specialists, including Excel-based users who want to learn Power BI and build internal apps; finance specialists who want to take a different approach to traditional accounting reports; and anyone who wants to enhance their skill set for the future job market.

Exam Ref PL-900 Microsoft Power Platform Fundamentals, 2nd Edition

Prepare for Microsoft Exam PL-900. Demonstrate your real-world knowledge of the fundamentals of Microsoft Power Platform, including its business value, core components, and the capabilities and advantages of Power BI, Power Apps, Power Automate, and Power Virtual Agents. Designed for business users, functional consultants, and other professionals, this Exam Ref focuses on the critical thinking and decision-making acumen needed for success at the Microsoft Certified: Power Platform Fundamentals level.

Focus on the expertise measured by these objectives:

  • Describe the business value of Power Platform
  • Identify the Core Components of Power Platform
  • Demonstrate the capabilities of Power BI
  • Demonstrate the capabilities of Power Apps
  • Demonstrate the capabilities of Power Automate
  • Demonstrate the capabilities of Power Virtual Agents

This Microsoft Exam Ref:

  • Organizes its coverage by exam objectives
  • Features strategic, what-if scenarios to challenge you
  • Assumes you are a business user, functional consultant, or other professional who wants to improve productivity by automating business processes, analyzing data, creating simple app experiences, or developing business enhancements to Microsoft cloud solutions

About the Exam

Exam PL-900 focuses on the knowledge needed to describe the value of Power Platform services and of extending solutions; describe Power Platform administration and security; describe Common Data Service, Connectors, and AI Builder; identify common Power BI components; connect to and consume data; build basic dashboards with Power BI; identify common Power Apps components; build basic canvas and model-driven apps; describe Power Apps portals; identify common Power Automate components; build basic flows; describe Power Virtual Agents capabilities; and build and publish basic chatbots.

About Microsoft Certification

Passing this exam fulfills your requirements for the Microsoft Certified: Power Platform Fundamentals certification, demonstrating your understanding of Power Platform's core capabilities, from business value and core product capabilities to building simple apps, connecting data sources, automating basic business processes, creating dashboards, and creating chatbots. With this certification, you can move on to earn specialist certifications covering more advanced aspects of Power Apps and Power BI, including Microsoft Certified: Power Platform App Maker Associate and Power Platform Data Analyst Associate. See full details at: microsoft.com/learn

Expert Data Modeling with Power BI - Second Edition

Expert Data Modeling with Power BI, Second Edition, serves as your comprehensive guide to mastering data modeling using Power BI. With clear explanations, actionable examples, and a focus on hands-on learning, this book takes you through the concepts and advanced techniques that will enable you to build high-performing data models tailored to real-world requirements.

What this book will help me do

  • Master time intelligence and virtual tables in DAX to enhance your data models.
  • Understand best practices for creating efficient Star Schemas and preparing data in Power Query.
  • Deploy advanced modeling techniques such as calculation groups, aggregations, and incremental refresh.
  • Manage complex data models and streamline them to improve performance.
  • Leverage data marts and data flows within Power BI for modularity and scalability.

Author(s)

Soheil Bakhshi is a seasoned expert in data visualization and analytics with extensive experience in leveraging Power BI for business intelligence solutions. Passionate about educating others, he combines practical insights and technical knowledge to make learning accessible and effective. His approachable writing style reflects his commitment to helping readers succeed.

Who is it for?

This book is ideal for business intelligence professionals, data analysts, or report developers with basic knowledge of Power BI and experience with Star Schema concepts. Whether you're looking to refine your data modeling skills or expand your expertise in advanced features, this guide aims to help you achieve your goals efficiently.

The name WALD-stack stems from the four technologies it is composed of: a cloud data Warehouse such as Snowflake or Google BigQuery, the open-source data integration engine Airbyte, the open-source full-stack BI platform Lightdash, and the open-source data transformation tool dbt.

Using a Formula 1 Grand Prix dataset, I will give an overview of how these four tools complement each other perfectly for analytics tasks in an ELT approach. You will learn the specific uses of each tool as well as their particular features. My talk is based on a full tutorial, which you can find at waldstack.org.
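
To make the ELT split concrete, here is a minimal sketch of the pattern the stack follows: load raw data into the warehouse first, then express transformations as SQL on top of it (the part dbt manages). It uses DuckDB as a stand-in for the warehouse and hypothetical Formula 1 file, table, and column names, so it illustrates the approach rather than reproducing the tutorial's actual code.

```python
import duckdb

# Stand-in for the warehouse (Snowflake or BigQuery in the real WALD-stack).
con = duckdb.connect("warehouse.duckdb")

# "EL": load the raw source data as-is (Airbyte's job in the real stack).
# Hypothetical file and column names for a Formula 1 results feed.
con.execute("""
    CREATE OR REPLACE TABLE raw_race_results AS
    SELECT * FROM read_csv_auto('race_results.csv')
""")

# "T": transform inside the warehouse with SQL -- the kind of model dbt
# would version-control, test, and materialize.
con.execute("""
    CREATE OR REPLACE VIEW driver_season_points AS
    SELECT driver, season, SUM(points) AS total_points
    FROM raw_race_results
    GROUP BY driver, season
    ORDER BY season, total_points DESC
""")

print(con.execute("SELECT * FROM driver_season_points LIMIT 5").fetchall())
```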

Summary

Business intelligence has been chasing the promise of self-serve data for decades. As the capabilities of these systems have improved and become more accessible, the target of what self-serve means has shifted. With the availability of AI powered by large language models, combined with the evolution of semantic layers, the team at Zenlytic has taken aim at this problem again. In this episode Paul Blankley and Ryan Janssen explore the power of natural language driven data exploration combined with semantic modeling that enables an intuitive way for everyone in the business to access the data they need to succeed in their work.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their extensive library of integrations enables you to automatically send data to hundreds of downstream tools. Sign up free at dataengineeringpodcast.com/rudderstack. Your host is Tobias Macey and today I'm interviewing Paul Blankley and Ryan Janssen about Zenlytic, a no-code business intelligence tool focused on emerging commerce brands.

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what Zenlytic is and the story behind it?
Business intelligence is a crowded market. What was your process for defining the problem you are focused on solving and the method to achieve that outcome?
Self-serve data exploration has been attempted in myriad ways over successive generations of BI and data platforms. What are the barriers that have been the most challenging to overcome in that effort?

What are the elements that are coming together now that give you confidence in being able to deliver on that?

Can you describe how Zenlytic is implemented?

What are the evolutions in the understanding and implementation of semantic layers that provide a sufficient substrate for operating on?
How have the recent breakthroughs in large language models (LLMs) improved your ability to build features in Zenlytic?
What is your process for adding domain semantics to the operational aspect of your LLM?

For someone using Zenlytic, what is the process for getting it set up and integrated with their data?
Once it is operational, can you describe some typical workflows for using Zenlytic in a business context?

Who are the target users?
What are the collaboration options available?

What are the most complex engineering/data challenges that you have had to address in building Zenlytic?
What are the most interesting, innovative, or unexpected ways that you have seen Zenlytic used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Zenlytic?
When is Zenlytic the wrong choice?
What do you have planned for the future of Zenlytic?

Contact Info

Paul Blankley (LinkedIn)

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

Zenlytic OLAP Cube Large Language Model Starburst Pr

Ryan Dolley and I chat about why BI needs to evolve, moving beyond dashboards, the impact of generative AI on analytics, SuperDataBros, and more.

#data #analytics #businessintelligence #datascience


If you like this show, give it a 5-star rating on your favorite podcast platform.

Purchase Fundamentals of Data Engineering at your favorite bookseller.

Check out my substack: https://joereis.substack.com/

podcast_episode
by Vijay Yadav (Center for Mathematical Sciences at Merck), Vanessa Gonzalez (Transamerica)

In 2023, businesses are relying more heavily on data science and analytics teams than ever before. However, simply having a team of talented individuals is not enough to guarantee success. In the last of our RADAR 2023 sessions, Vijay Yadav and Vanessa Gonzalez will outline the keys to building high-impact data teams in 2023. They will discuss the hallmarks of a high-performing data team, the importance of the diversity of background and skill set needed to build impactful data teams, setting up career pathways for data scientists, and more. Vijay Yadav is a highly respected data and analytics thought leader with over 20 years of experience in data product development, data engineering, and advanced analytics. As Director of Quantitative Sciences - Digital, Data, and Analytics at Merck, he leads data & analytics teams in creating AI/ML-driven data products to drive digital transformation. Vijay has held numerous leadership positions at various companies and is known for his ability to lead global teams to achieve high-impact results. Vanessa Gonzalez is the Sr. Director of Data Science and Innovation at Businessolver, where she leads the Computational Linguistics, Machine Learning Engineering, Data Science, BI Analytics, and BI Engineering teams. She is experienced in leading data transformations and performing analytical and management functions that contribute to the goals and growth objectives of organizations and divisions. Listen in as Vanessa and Vijay share how to enable data teams to flourish in an ever-evolving data landscape.

An effective data strategy is one that combines a variety of levers such as infrastructure, tools, organization, processes, and more. Arguably, however, the most important aspect of a vibrant data strategy is culture and people. In the third of our four RADAR 2023 sessions, Cindi Howson and Valerie Logan discuss how data leaders can create a data strategy that puts their people at the center. Learn key insights into how to drive effective change management for data culture, how to drive adoption of data within the organization, common pitfalls when executing on a data strategy, and more. Cindi Howson is the Chief Data Strategy Officer at ThoughtSpot and host of The Data Chief podcast. Cindi is an analytics and BI thought leader and expert with a flair for bridging business needs with technology. As Chief Data Strategy Officer at ThoughtSpot, she advises top clients on data strategy and best practices to become data-driven, speaks internationally on top trends such as AI ethics, and influences ThoughtSpot’s product strategy. Valerie Logan is the Founder and CEO of The Data Lodge. Valerie is committed to data literacy; she believes that in today's digital society, data literacy is a life skill. With advisory services, bootcamps, a resource library, and community services at The Data Lodge, Valerie is certifying the world’s first Data Literacy Program Leads and pioneering the path forward in cracking the data culture code. In 2018, she was awarded Gartner’s Top Thought Leadership Award for her leadership in the area of Data Literacy. Listen in as Cindi and Valerie share how to build a data strategy that puts people first in an enterprise organization.

IBM Storage DS8900F Product Guide Release 9.3.2

This IBM® Redbooks Product Guide provides an overview of the features and functions that are available with the IBM Storage DS8900F models that run microcode Release 9.3.2 (Bundle 89.32/Licensed Machine Code 7.9.32). As of February 2023, the DS8900F with DS8000 Release 9.3.2 is the latest addition. The DS8900F is exclusively an all-flash system, and it offers three classes:

  • IBM DS8980F, Analytic Class: The DS8980F Analytic Class offers the best performance for organizations that want to expand their workload possibilities to artificial intelligence (AI), business intelligence, and machine learning.
  • IBM DS8950F, Agility Class: The Agility Class is efficiently designed to consolidate all your mission-critical workloads for IBM zSystems, IBM LinuxONE, IBM Power Systems, and distributed environments under a single all-flash storage solution.
  • IBM DS8910F, Flexibility Class: The Flexibility Class delivers significant performance for midrange organizations that are looking to meet storage challenges with advanced functionality delivered as a single-rack solution.

On today’s episode, we’re joined by Atif Ghauri, Senior Vice President at Cyderes, a global cybersecurity powerhouse offering comprehensive solutions around managed security, identity and access management, and professional services.

We talk about:

  • How Cyderes works and the problems they solve.
  • The evolution of cloud security.
  • The impact of AI on cybersecurity.
  • The biggest risk factors in cloud security today.
  • How new SaaS founders today should think about cybersecurity and common mistakes to avoid.
  • The turning point where SaaS companies have to start taking security more seriously.
  • Some of the things Atif has found surprising in his security career.

Atif Ghauri - https://www.linkedin.com/in/aghauri Cyderes - https://www.linkedin.com/company/the-herjavec-group/

This episode is brought to you by Qrvey

The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.

Qrvey, the modern no-code analytics solution for SaaS companies on AWS.

#saas #analytics #AWS #BI

Welcome to today's Data Warehouse vs. Lakehouse podcast for Data leaders and executives. In this episode, we will discuss the critical differences between these two approaches to data management and which one might be best suited for your organization. First, let's define what we mean by Data Warehouse and Lakehouse. A Data Warehouse is a centralized data repository optimized for querying and analysis. It is typically built using a structured, relational database. It supports business intelligence (BI) and analytics use cases. A Lakehouse, on the other hand, is a newer concept that combines the scalability and flexibility of a data lake with the structure and governance of a data warehouse. It supports BI and advanced analytics use cases like machine learning and AI.
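
As a rough illustration of that distinction, the sketch below contrasts the two query styles using DuckDB and local Parquet files as stand-ins; the paths and column names are hypothetical, and a real deployment would involve a managed warehouse and an object store with a table format such as Delta or Iceberg.

```python
import duckdb

con = duckdb.connect()

# Warehouse-style: data is first loaded into a governed, structured table,
# and analysts query that table.
con.execute("""
    CREATE TABLE sales AS
    SELECT * FROM read_parquet('landing/sales/2023/*.parquet')
""")
warehouse_result = con.execute(
    "SELECT region, SUM(amount) AS revenue FROM sales GROUP BY region"
).fetchall()

# Lakehouse-style: query the open files in the lake directly, schema-on-read,
# without a separate load step into a relational store.
lake_result = con.execute(
    """
    SELECT region, SUM(amount) AS revenue
    FROM read_parquet('landing/sales/2023/*.parquet')
    GROUP BY region
    """
).fetchall()

print(warehouse_result, lake_result)
```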

Microsoft Power BI Data Analyst Certification Companion: Preparation for Exam PL-300

Use this book to study for the PL-300 Microsoft Power BI Data Analyst exam. The book follows the “Skills Measured” outline provided by Microsoft to help focus your study. Each topic area from the outline corresponds to an area covered by the exam, and the book helps you build a good base of knowledge in each area. Each topic is presented with a blend of practical explanations, theory, and best practices. Power BI is more than just the Power BI Desktop or the Power BI Service. It is two distinct applications and an online service that, together, enable business users to gather, shape, and analyze data to generate and present insights. This book clearly delineates the purpose of each component and explains the key concepts necessary to use each component effectively. Each chapter provides best practices and tips to help an inexperienced Power BI practitioner develop good habits that will support larger or more complex analyses. Many business analysts come to Power BI with a wealth of experience in Excel and particularly with pivot tables. Some of this experience translates readily into Power BI concepts. This book leverages that overlap in skill sets to help seasoned Excel users overcome the initial learning curve in Power BI, but no prior knowledge of any kind is assumed, terminology is defined in non-technical language, and key concepts are explained using analogies and ideas from experiences common to any reader. After reading this book, you will have the background and capability to learn the skills and concepts necessary both to pass the PL-300 exam and become a confident Power BI practitioner.

What You Will Learn

  • Create user-friendly, responsive reports with drill-throughs, bookmarks, and tooltips
  • Construct a star schema with relationships, ensuring that your analysis will be both accurate and responsive
  • Publish reports and datasets to the Power BI Service, enabling the report (and the dataset) to be viewed and used by your colleagues
  • Extract data from a variety of sources, enabling you to leverage the data that your organization has collected and stored in a variety of sources
  • Schedule data refreshes for published datasets so your reports and dashboards stay up to date
  • Develop dashboards with visuals from different reports and streaming content

Who This Book Is For

Power BI users who are planning to take the PL-300 exam, Power BI users who want help studying the topic areas listed in Microsoft’s outline for the PL-300 exam, and those who are not planning to take the exam but want to close any knowledge gaps they might have.

In today’s episode, we’re joined by Reha Jhunjhunwala, Product Manager of AI ML Initiatives at eClinical Solutions, a company that helps life sciences organizations around the world accelerate clinical development initiatives with expert data services.

We talk about:

  • Reha’s background as a dentist and how she got into tech.
  • How machine learning and AI impact the software development process.
  • How AI will affect the traditional strengths of software in general.
  • What the considerations are around AI in healthcare where regulations are strict.
  • Some of the things slowing AI down.

Reha Jhunjhunwala - https://www.linkedin.com/in/rehajhunjhunwala/ eClinical Solutions - https://www.linkedin.com/company/eclinical-solutions/

This episode is brought to you by Qrvey

The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.

Qrvey, the modern no-code analytics solution for SaaS companies on AWS.

#saas #analytics #AWS #BI

On today’s episode, we’re joined by Vlad Eidelman. Vlad is CTO and Chief Scientist at FiscalNote — a leading technology provider of global policy and market intelligence uniquely combining AI technology, actionable data, and expert and peer insights to give customers mission-critical insights.

We talk about:

  • Vlad’s story and what FiscalNote does.
  • How AI changes software.
  • The importance of adding extra value to software.
  • What to do with user data?
  • How Vlad makes internal decisions at FiscalNote.
  • The impact of remote work.
  • The importance of building the right data analytics stack to acquire data.

Vlad Eidelman - https://www.linkedin.com/in/veidelman/ FiscalNote - https://www.linkedin.com/company/fiscalnote/

This episode is brought to you by Qrvey

The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.

Qrvey, the modern no-code analytics solution for SaaS companies on AWS.

#saas #analytics #AWS #BI

We invited Grupo Boticário to talk about how they are applying Data Science models in practice in retail. The Data Science team shared with us some cases of recommendation models for e-commerce and how they work with salespeople, bringing them tools so they can sell more.

The projects they brought to show a bit of how they work with data were: BotiEssence, which suggests products to the end consumer; BotiColorista, which classifies color trends; and the Lyra project, an initiative to make the product performance team's processes more data-driven. It consists of delivering Data Science and BI solutions for areas such as Formulation Stability and Safety, Ecotoxicity, Consumer Science, Materials Technology, and Cosmetovigilance.

The results of the State of Data Brasil survey are now available! It is the largest mapping of the data market in Brazil. To download the report, click here.

Meet our guests: Carlos Fonseca (LinkedIn), Giuliana de Jong (LinkedIn), Raphael Corrêa (LinkedIn)

In today’s episode, we’re joined by Gleb Polyakov. Gleb is the CEO and Co-Founder of Nylas, a platform that allows developers to automate manual, repetitive everyday tasks with little to no code.

We talk about:

  • How Nylas works, the benefits it provides and who it targets.
  • The definition of first-party data and why it’s important.
  • The growth of the API economy.
  • The new roles of sales and marketing when selling to developers.
  • The trend of using education as a sales technique.

Gleb Polyakov - https://www.linkedin.com/in/gpolyakov Nylas - https://www.linkedin.com/company/nylas/

This episode is brought to you by Qrvey

The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.

Qrvey, the modern no-code analytics solution for SaaS companies on AWS.

#saas #analytics #AWS #BI

Summary

This podcast started almost exactly six years ago, and the technology landscape was much different than it is now. In that time there have been a number of generational shifts in how data engineering is done. In this episode I reflect on some of the major themes and take a brief look forward at some of the upcoming changes.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. Your host is Tobias Macey and today I'm reflecting on the major trends in data engineering over the past 6 years.

Interview

Introduction
6 years of running the Data Engineering Podcast
Around the first time that data engineering was discussed as a role

Followed on from hype about "data science"

Hadoop era
Streaming
Lambda and Kappa architectures

Not really referenced anymore

"Big Data" era of capture everything has shifted to focusing on data that presents value

Regulatory environment increases risk, better tools introduce more capability to understand what data is useful

Data catalogs

Amundsen and Alation

Orchestration engine

Oozie, etc. -> Airflow and Luigi -> Dagster, Prefect, Lyft, etc.
Orchestration is now a part of most vertical tools
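
For readers who have not used one of these engines, the sketch below shows roughly what a minimal DAG looks like in a recent Airflow release (the DAG name, task names, and callables are hypothetical placeholders); Dagster and Prefect express the same extract-then-transform dependency with their own APIs.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Pull data from a source system (placeholder).
    print("extracting raw data")


def transform():
    # Build downstream models from the loaded data (placeholder).
    print("transforming loaded data")


# "schedule" is the Airflow 2.4+ keyword; older 2.x releases use schedule_interval.
with DAG(
    dag_id="daily_bi_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # transform runs only after extract succeeds
```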

Cloud data warehouses
Data lakes
DataOps and MLOps
Data quality to data observability
Metadata for everything

Data catalog -> data discovery -> active metadata

Business intelligence

Read-only reports to metric/semantic layers
Embedded analytics and data APIs

Rise of ELT

dbt
Corresponding introduction of reverse ETL
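
In its simplest form, a reverse ETL job just reads a modeled table back out of the warehouse and pushes it into an operational tool. The sketch below illustrates the idea with DuckDB standing in for the warehouse and a hypothetical CRM endpoint; dedicated reverse ETL tools add batching, retries, and change detection on top.

```python
import duckdb
import requests

# Read a modeled table out of the warehouse (DuckDB as a stand-in;
# table and column names are hypothetical).
con = duckdb.connect("warehouse.duckdb")
rows = con.execute(
    "SELECT customer_id, lifetime_value, churn_risk FROM customer_scores"
).fetchall()

# Push each record into an operational tool via its API
# (the CRM endpoint below is hypothetical).
for customer_id, lifetime_value, churn_risk in rows:
    requests.post(
        "https://crm.example.com/api/contacts",
        json={
            "id": customer_id,
            "lifetime_value": lifetime_value,
            "churn_risk": churn_risk,
        },
        timeout=10,
    )
```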

What are the most interesting, unexpected, or challenging lessons that you have learned while running the podcast?
What do you have planned for the future of the podcast?

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Sponsored By: Materialize

Looking for the simplest way to get the freshest data possible to your teams? Because let's face it: if real-time were easy, everyone would be using it. Look no further than Materialize, the streaming database you already know how to use.

Materialize’s PostgreSQL-compatible interface lets users leverage the tools they already use, with unsurpassed simplicity enabled by full ANSI SQL support. Delivered as a single platform with the separation of storage and compute, strict-serializability, active replication, horizontal scalability and workload isolation — Materialize is now the fastest way to build products with streaming data, drastically reducing the time, expertise, cost and maintenance traditionally associated with implementation of real-time features.

Sign up now for early access to Materialize and get started with the power of streaming data with the same simplicity and low implementation cost as batch cloud data warehouses.

Go to materialize.com
Support Data Engineering Podcast

On today’s episode, we’re joined by Ellie Fields. Ellie is the Chief Product and Engineering Officer at Salesloft, which helps sales teams drive more revenue with the only complete sales engagement platform available in the market. We talk about:

  • Ellie’s background and what Salesloft does.
  • The changing trends in how companies use data.
  • Drawing valuable insights from unstructured data.
  • Putting workflow at the center of what you do, and the challenges involved.
  • Ellie’s experiences managing both product and engineering.
  • Are more autonomous teams more scalable?
  • Applying a metric- and data-oriented culture internally.
  • The impact of remote work on how companies operate.

Ellie Fields - https://www.linkedin.com/in/elliefields/ Salesloft - https://www.linkedin.com/company/salesloft/

This episode is brought to you by Qrvey

The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.

Qrvey, the modern no-code analytics solution for SaaS companies on AWS.

#saas #analytics #AWS #BI

Summary

Business intelligence has gone through many generational shifts, but each generation has largely maintained the same workflow. Data analysts create reports that the business uses to understand and direct its operations, but the process is very labor- and time-intensive. The team at Omni has taken a new approach by automatically building models based on the queries that are executed. In this episode Chris Merrick shares how they manage integration and automation around the modeling layer and how it improves the organizational experience of business intelligence.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. Truly leveraging and benefiting from streaming data is hard - the data stack is costly, difficult to use, and still has limitations. Materialize breaks down those barriers with a true cloud-native streaming database - not simply a database that connects to streaming systems. With a PostgreSQL-compatible interface, you can now work with real-time data using ANSI SQL, including the ability to perform multi-way complex joins, which support stream-to-stream, stream-to-table, table-to-table, and more, all in standard SQL. Go to dataengineeringpodcast.com/materialize today and sign up for early access to get started. If you like what you see and want to help make it better, they're hiring across all functions! Your host is Tobias Macey and today I'm interviewing Chris Merrick about the Omni Analytics platform and how they are adding automatic data modeling to your business intelligence.

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what Omni Analytics is and the story behind it?

What are the core goals that you are trying to achieve with building Omni?

Business intelligence has gone through many evolutions. What are the unique capabilities that Omni Analytics offers over other players in the market?

What are the technical and organizational anti-patterns that typically grow up around BI systems?

What are the elements that contribute to BI being such a difficult product to use effectively in an organization?

Can you describe how you have implemented the Omni platform?

How have the design/scope/goals of the product changed since you first started working on it?

What does the workflow for a team using Omni look like?

What are some of the developments in the broader ecosystem that have made your work possible?

What are some of the positive and negative inspirations that you have drawn from the experience that you and your team-mates have gained in previous businesses?

What are the most interesting, innovative, or unexpected ways that you have seen Omni used?

What are the most interesting, unexpected, or challenging lessons that you have learned while working on Omni?

When is Omni the wrong choice?

What do you have planned for the future of Omni?

Contact Info

LinkedIn @cmerrick on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

Omni Analytics Stitch RJ Metrics Looker

Podcast Episode

Singer dbt

Podcast Episode

Teradata Fivetran Apache Arrow

Podcast Episode

DuckDB

Podcast Episode

BigQuery Snowflake

Podcast Episode

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Sponsored By: Materialize

Looking for the simplest way to get the freshest data possible to your teams? Because let's face it: if real-time were easy, everyone would be using it. Look no further than Materialize, the streaming database you already know how to use.

Materialize’s PostgreSQL-compatible interface lets users leverage the tools they already use, with unsurpassed simplicity enabled by full ANSI SQL support. Delivered as a single platform with the separation of storage and compute, strict-serializability, active replication, horizontal scalability and workload isolation — Materialize is now the fastest way to build products with streaming data, drastically reducing the time, expertise, cost and maintenance traditionally associated with implementation of real-time features.

Sign up now for early access to Materialize and get started with the power of streaming data with the same simplicity and low implementation cost as batch cloud data warehouses.

Go to materialize.com
Support Data Engineering Podcast