talk-data.com

Topic: Analytics
Tags: data_analysis, insights, metrics
4552 activities tagged

Activity Trend: peak of 398 activities per quarter, 2020-Q1 to 2026-Q1

Activities

4552 activities · Newest first

Summary A lot of the work that goes into data engineering is trying to make sense of the "data exhaust" from other applications and services. There is an undeniable amount of value and utility in that information, but it also introduces significant cost and time requirements. In this episode Nick King discusses how you can be intentional about data creation in your applications and services to reduce the friction and errors involved in building data products and ML applications. He also describes the considerations involved in bringing behavioral data into your systems, and the ways that he and the rest of the Snowplow team are working to make that an easy addition to your platforms.
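The "intentional data creation" idea is easiest to see in code. Below is a minimal, hypothetical sketch — the event name, schema, and iglu-style URI are illustrative inventions, not Snowplow's actual SDK: instead of logging free-form dictionaries and untangling them downstream, the application validates each event against a versioned schema before emitting it.

```python
import json
from datetime import datetime, timezone

from jsonschema import validate  # pip install jsonschema

# Hypothetical contract for one event type, written down *before* any
# event is emitted rather than inferred from "data exhaust" downstream.
CHECKOUT_COMPLETED = {
    "$id": "iglu:com.example/checkout_completed/jsonschema/1-0-0",  # illustrative URI
    "type": "object",
    "properties": {
        "order_id": {"type": "string"},
        "total_cents": {"type": "integer", "minimum": 0},
        "currency": {"type": "string", "enum": ["USD", "EUR", "GBP"]},
    },
    "required": ["order_id", "total_cents", "currency"],
    "additionalProperties": False,
}

def emit(payload: dict) -> str:
    """Validate the payload against its schema, then wrap it in a
    self-describing envelope so consumers know exactly what they got."""
    validate(instance=payload, schema=CHECKOUT_COMPLETED)  # raises on bad data
    envelope = {
        "schema": CHECKOUT_COMPLETED["$id"],
        "emitted_at": datetime.now(timezone.utc).isoformat(),
        "data": payload,
    }
    return json.dumps(envelope)

print(emit({"order_id": "o-123", "total_cents": 4999, "currency": "USD"}))
```

Bad events fail loudly at the source instead of silently corrupting downstream models, which is exactly the friction reduction the episode is about.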

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

  • When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production-ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high-throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show!
  • Atlan is the metadata hub for your data ecosystem. Instead of locking your metadata into a new silo, unleash its transformative potential with Atlan’s active metadata capabilities. Push information about data freshness and quality to your business intelligence, automatically scale up and down your warehouse based on usage patterns, and let the bots answer those questions in Slack so that the humans can focus on delivering real value. Go to dataengineeringpodcast.com/atlan today to learn more about how Atlan’s active metadata platform is helping pioneering data teams like Postman, Plaid, WeWork & Unilever achieve extraordinary things with metadata and escape the chaos.
  • Prefect is the modern Dataflow Automation platform for the modern data stack, empowering data practitioners to build, run, and monitor robust pipelines at scale. Guided by the principle that the orchestrator shouldn’t get in your way, Prefect is the only tool of its kind to offer the flexibility to write code as workflows. Prefect specializes in gluing together the disparate pieces of a pipeline and integrating with modern distributed compute libraries to bring power where you need it, when you need it. Trusted by thousands of organizations and supported by over 20,000 community members, Prefect powers over 100MM business-critical tasks a month. For more information on Prefect, visit dataengineeringpodcast.com/prefect.
  • Data engineers don’t enjoy writing, maintaining, and modifying ETL pipelines all day, every day, especially once they realize that 90% of all major data sources like Google Analytics, Salesforce, AdWords, Facebook, and spreadsheets are already available as plug-and-play connectors with reliable, intuitive SaaS solutions. Hevo Data is a highly reliable and intuitive data pipeline platform used by data engineers from 40+ countries to set up and run low-latency ELT pipelines with zero maintenance. Boasting more than 150 out-of-the-box connectors that can be set up in minutes, Hevo also allows you to monitor and control your pipelines. You get real-time data flow visibility, fail-safe mechanisms, and alerts if anything breaks; preload transformations and auto-schema mapping that precisely control how data lands in your destination; models and workflows to transform data for analytics; and reverse-ETL capability to move the transformed data back to your business software to inspire timely action. All of this, plus its transparent pricing and 24×7 live support, makes it consistently voted by users as the Leader in the Data Pipeline category on review platforms like G2. Go to dataengineeringpodcast.com/hevodata and sign up for a free 14-day trial that also comes with 24×7 support.

Summary Despite the best efforts of data engineers, data is as messy as the real world. Entity resolution and fuzzy matching are powerful utilities for cleaning up data from disconnected sources, but they have typically required custom development and the training of machine learning models. Sonal Goyal created and open-sourced Zingg as a generalized tool for data mastering and entity resolution to reduce the effort involved in adopting those practices. In this episode she shares the story behind the project, the details of how it is implemented, and how you can use it for your own data projects.
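For intuition only, here is a toy sketch of the pairwise fuzzy scoring at the heart of entity resolution, using nothing but the Python standard library. This is not Zingg's API (Zingg runs as a configured Spark job and learns its matching model from labeled pairs); the records, field weights, and threshold below are invented for illustration.

```python
from difflib import SequenceMatcher
from itertools import combinations

records = [
    {"id": 1, "name": "Acme Corp.", "city": "New York"},
    {"id": 2, "name": "ACME Corporation", "city": "New York"},
    {"id": 3, "name": "Globex LLC", "city": "Boston"},
]

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1], case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def candidate_matches(rows, threshold=0.7):
    """Score every pair on a weighted blend of fields. Real tools add
    'blocking' so they never have to compare all O(n^2) pairs."""
    for left, right in combinations(rows, 2):
        score = (0.7 * similarity(left["name"], right["name"])
                 + 0.3 * similarity(left["city"], right["city"]))
        if score >= threshold:
            yield (left["id"], right["id"], round(score, 3))

print(list(candidate_matches(records)))  # records 1 and 2 surface as a likely match
```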

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

  • When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production-ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high-throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show!
  • RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their extensive library of integrations enables you to automatically send data to hundreds of downstream tools. Sign up free at dataengineeringpodcast.com/rudder.
  • Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days or even weeks. By the time errors have made their way into production, it’s often too late and the damage is done. Datafold built automated regression testing to help data and analytics engineers deal with data quality in their pull requests. Datafold shows how a change in SQL code affects your data, both on a statistical level and down to individual rows and values, before it gets merged to production. No more shipping and praying: you can now know exactly what will change in your database! Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt, and seamlessly plugs into CI workflows. Visit dataengineeringpodcast.com/datafold today to book a demo with Datafold.
  • Data teams are increasingly under pressure to deliver. According to a recent survey by Ascend.io, 95% in fact reported being at or over capacity. With 72% of data experts reporting demands on their team going up faster than they can hire, it’s no surprise they are increasingly turning to automation. In fact, while only 3.5% report having current investments in automation, 85% of data teams plan on investing in automation in the next 12 months. 85%!!! That’s where our friends at Ascend.io come in. The Ascend Data Automation Cloud provides a unified platform for data ingestion, transformation, orchestration, and observability. Ascend users love its declarative pipelines, powerful SDK, elegant UI, and extensible plug-in architecture, as well as its support for Python, SQL, Scala, and Java. Ascend automates workloads on Snowflake, Databricks, BigQuery, and open source Spark, and can be deployed in AWS, Azure, or GCP. Go to dataengineeringpodcast.com/ascend and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $5,000 when you become a customer.

Your host is Tobias Macey and today I’m interviewing Sonal Goyal about Zingg, an open source entity resolution framework.

podcast_episode
by Cris deRitis, Mark Zandi (Moody's Analytics), Marisa DiNatale (Moody's Analytics)

Mark and the team dissect October's employment report, the Fed's most recent rate hike, and what it all means for the prospects for a recession in the coming year. A full episode transcript is available. Follow Mark Zandi @MarkZandi, Cris deRitis @MiddleWayEcon, and Marisa DiNatale on LinkedIn for additional insight.

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

Abhi is a growth and data leader, and an excellent Twitter follow. Most recently, he was Head of Growth and Analytics at Flexport, where he helped the company to grow 10x over the past 3 years. Previously, Abhi led growth and data teams at Keap, Hustle, and Honeybook. In this conversation with Tristan and Julia, Abhi explains his methodology for setting up a new growth data organization, and how you might be falling victim to the dreaded "arbitrary uniqueness" bug. For full show notes and to read 6+ years of back issues of the podcast's companion newsletter, head to https://roundup.getdbt.com.  The Analytics Engineering Podcast is sponsored by dbt Labs

In today’s episode, we’re joined by Daniëlle Keeven, VP of Finance at Paddle — the only complete payments infrastructure provider for SaaS companies.

We dive into all kinds of topics, including:

  • Daniëlle’s background and how she came to join Paddle.
  • Why finance is often an afterthought for founders.
  • Important steps founders need to take when they start making money.
  • How the subscription model makes things more complicated for software companies.
  • The impact of regulations on the SaaS space.
  • The evolution of software and operating systems, and what the future holds.
  • The future of self-sustaining software.

This episode is brought to you by Qrvey

The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.

Qrvey, the modern no-code analytics solution for SaaS companies on AWS.

#saas #analytics #AWS #BI

SQL Server 2022 Revealed: A Hybrid Data Platform Powered by Security, Performance, and Availability

Know how to use the new capabilities and cloud integrations in SQL Server 2022. This book covers the many innovative integrations with the Azure cloud that make SQL Server 2022 the most cloud-connected edition ever. The book covers cutting-edge features such as the blockchain-based Ledger for creating a tamper-evident record of changes to data over time that you can rely on to be correct and reliable. You'll learn about built-in Query Intelligence capabilities that help you upgrade with confidence that your applications will perform at least as fast after the upgrade as before. In fact, you'll probably see an increase in performance from the upgrade, with no code changes needed. Also covered are innovations such as contained availability groups and data virtualization with S3 object storage. New cloud integrations covered in this book include Microsoft Azure Purview and the use of Azure SQL for high availability and disaster recovery. The book covers Azure Synapse Link with its built-in capabilities to take changes and put them into Synapse automatically. Anyone building their career around SQL Server will want this book for the valuable information it provides on building SQL skills from edge to the cloud.

What You Will Learn

  • Know how to use all of the new capabilities and cloud integrations in SQL Server 2022
  • Connect to Azure for disaster recovery, near real-time analytics, and security
  • Leverage the Ledger to create a tamper-evident record of data changes over time
  • Upgrade from prior releases and achieve faster and more consistent performance with no code changes
  • Access data and storage in different and new formats, such as Parquet and S3, without moving the data, using your existing T-SQL skills
  • Explore new application scenarios using innovations with T-SQL in areas such as JSON and time series

Who This Book Is For

SQL Server professionals who want to upgrade their skills to the latest edition of SQL Server; those wishing to take advantage of new integrations with Microsoft Azure Purview (governance), Azure Synapse (analytics), and Azure SQL (HA and DR); and those in need of the increased performance and security offered by Query Intelligence and the new Ledger.
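As a taste of the Ledger feature described above, here is a minimal sketch from Python. The server, database, and table names are placeholders, and it assumes the pyodbc package plus a SQL Server 2022 instance; the append-only ledger option shown is the simplest form of the feature.

```python
import pyodbc  # pip install pyodbc; also requires a SQL Server ODBC driver

# Placeholder connection details: substitute your own server and database.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=localhost;DATABASE=LedgerDemo;Trusted_Connection=yes;"
)
cur = conn.cursor()

# An append-only ledger table: rows can be inserted but never updated or
# deleted, and SQL Server maintains a cryptographic digest of every change,
# giving the tamper-evident history the book describes.
cur.execute("""
    CREATE TABLE dbo.AuditEvents (
        event_id   INT            NOT NULL,
        actor      NVARCHAR(100)  NOT NULL,
        event_time DATETIME2      NOT NULL
    ) WITH (LEDGER = ON (APPEND_ONLY = ON));
""")
cur.execute(
    "INSERT INTO dbo.AuditEvents VALUES (?, ?, SYSUTCDATETIME());",
    1, "alice",
)
conn.commit()
# A subsequent UPDATE or DELETE against dbo.AuditEvents would be rejected.
```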

dbt Project Evaluator

Since the dawn of time (or at least the last few years), the proserv team has been acting as human "dbt_project_evaluator"s. They've written articles, given talks, created training courses, and personally delivered just a truly obscene amount of audits. Up until now, evaluating your own dbt project, even with every aforementioned resource, would be incredibly time consuming. To quote dbt Labs' SQL style guide, "brain time is expensive." Enter: dbt_project_evaluator. In this talk, Grace Goheen (dbt Labs) will share how this package enables analytics engineers to follow dbt Labs' own best practices by automatically curating a list of improvements, expressed in the dbt language of "models" and "tests" that they already know and love. By decreasing the "discovery" period, analytics engineers can use their brain time to work on actually implementing the recommended changes.

Check the slides here: https://docs.google.com/presentation/d/1U7CaoSceXumbzlPGqqAQukaz1YdOgT46sb-M6x_LNCw/edit?usp=sharing

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

podcast_episode
by Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Moe Kiss (Canva), Michael Helbling (Search Discovery), Jay Feng (Interview Query)

So, you finally took that recruiter's call, and then you made it through the initial phone screen. You weren't really expecting that to happen, but now you're facing an actual interview! It sounds intense and, yet, you're not sure what to expect or how to prepare for it. Flash cards with statistical concepts? A crash course in Python? LinkedIn stalking of current employees of the company? Maybe. We asked Jay Feng from Interview Query to join us to discuss strategies and tactics for data scientists and analyst interviews, and we definitely wanted to hire him by the time we were done! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

Python has dominated data science programming for the last few years, but there’s another rising star programming language seeing increased adoption and popularity—Julia.

As the fourth most popular programming language, Julia is attracting the attention of data teams and practitioners who want to understand how it could benefit individual careers, improve business operations, and drive increased value across organizations.

Zacharias Voulgaris, PhD joins the show to talk about his experience with the Julia programming language and his perspective on the future of Julia’s widespread adoption. Zacharias is the author of Julia for Data Science. As a Data Science consultant and mentor with 10 years of international experience that includes the role of Chief Science Officer at three startups, Zacharias is an expert in data science, analytics, artificial intelligence, and information systems.

In this episode, we discuss the strengths of Julia, how data scientists can get started using Julia, how team members and leaders alike can transition to Julia, why companies are secretive about adopting Julia, the interoperability of Julia with Python and other popular programming languages, and much more.
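For listeners curious about the interoperability discussion, calling Julia from Python can be as small as the sketch below. It assumes the juliacall package (which downloads Julia on first use); treat it as a quick illustration rather than a tour of the bridge's full API.

```python
from juliacall import Main as jl  # pip install juliacall

# Evaluate Julia code from Python and get the result back as a Python value.
jl.seval("using Statistics")
print(jl.seval("mean([1.0, 2.5, 4.0])"))  # 2.5

# Define a Julia function once, then call it like any Python callable.
clamp01 = jl.seval("x -> clamp(x, 0.0, 1.0)")
print(clamp01(1.7))  # 1.0
```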

Check out this month’s events: https://www.datacamp.com/data-driven-organizations-2022

Take the Introduction to Julia course for free!

https://www.datacamp.com/courses/introduction-to-julia

Summary Business intelligence has grown beyond its initial manifestation as dashboards and reports. In its current incarnation it has become a ubiquitous need for analytics and opportunities to answer questions with data. In this episode Amir Orad discusses the Sisense platform and how it facilitates the embedding of analytics and data insights in every aspect of organizational and end-user experiences.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

  • When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production-ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high-throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show!
  • Atlan is the metadata hub for your data ecosystem. Instead of locking your metadata into a new silo, unleash its transformative potential with Atlan’s active metadata capabilities. Push information about data freshness and quality to your business intelligence, automatically scale up and down your warehouse based on usage patterns, and let the bots answer those questions in Slack so that the humans can focus on delivering real value. Go to dataengineeringpodcast.com/atlan today to learn more about how Atlan’s active metadata platform is helping pioneering data teams like Postman, Plaid, WeWork & Unilever achieve extraordinary things with metadata and escape the chaos.
  • Prefect is the modern Dataflow Automation platform for the modern data stack, empowering data practitioners to build, run, and monitor robust pipelines at scale. Guided by the principle that the orchestrator shouldn’t get in your way, Prefect is the only tool of its kind to offer the flexibility to write code as workflows. Prefect specializes in gluing together the disparate pieces of a pipeline and integrating with modern distributed compute libraries to bring power where you need it, when you need it. Trusted by thousands of organizations and supported by over 20,000 community members, Prefect powers over 100MM business-critical tasks a month. For more information on Prefect, visit dataengineeringpodcast.com/prefect.
  • Data engineers don’t enjoy writing, maintaining, and modifying ETL pipelines all day, every day, especially once they realize that 90% of all major data sources like Google Analytics, Salesforce, AdWords, Facebook, and spreadsheets are already available as plug-and-play connectors with reliable, intuitive SaaS solutions. Hevo Data is a highly reliable and intuitive data pipeline platform used by data engineers from 40+ countries to set up and run low-latency ELT pipelines with zero maintenance. Boasting more than 150 out-of-the-box connectors that can be set up in minutes, Hevo also allows you to monitor and control your pipelines. You get real-time data flow visibility, fail-safe mechanisms, and alerts if anything breaks; preload transformations and auto-schema mapping that precisely control how data lands in your destination; models and workflows to transform data for analytics; and reverse-ETL capability to move the transformed data back to your business software to inspire timely action. All of this, plus its transparent pricing and 24×7 live support, makes it consistently voted by users as the Leader in the Data Pipeline category on review platforms like G2. Go to dataengineeringpodcast.com/hevodata and sign up for a free 14-day trial that also comes with 24×7 support.
Your host is Tobias Macey and today I’m interviewing Amir Orad about Sisense, a platform focused on providing intelligent analytics

Summary One of the most impactful technologies for data analytics in recent years has been dbt. It’s hard to have a conversation about data engineering or analysis without mentioning it. Despite its widespread adoption there are still rough edges in its workflow that cause friction for data analysts. To help simplify the adoption and management of dbt projects, Nandam Karthik helped create Optimus. In this episode he shares his experiences working with organizations to adopt analytics engineering patterns and the ways that Optimus and dbt were combined to let data analysts deliver insights without the roadblocks of complex pipeline management.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

  • When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production-ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high-throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show!
  • Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days or even weeks. By the time errors have made their way into production, it’s often too late and the damage is done. Datafold built automated regression testing to help data and analytics engineers deal with data quality in their pull requests. Datafold shows how a change in SQL code affects your data, both on a statistical level and down to individual rows and values, before it gets merged to production. No more shipping and praying: you can now know exactly what will change in your database! Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt, and seamlessly plugs into CI workflows. Visit dataengineeringpodcast.com/datafold today to book a demo with Datafold.
  • RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their extensive library of integrations enables you to automatically send data to hundreds of downstream tools. Sign up free at dataengineeringpodcast.com/rudder.
  • Data teams are increasingly under pressure to deliver. According to a recent survey by Ascend.io, 95% in fact reported being at or over capacity. With 72% of data experts reporting demands on their team going up faster than they can hire, it’s no surprise they are increasingly turning to automation. In fact, while only 3.5% report having current investments in automation, 85% of data teams plan on investing in automation in the next 12 months. 85%!!! That’s where our friends at Ascend.io come in. The Ascend Data Automation Cloud provides a unified platform for data ingestion, transformation, orchestration, and observability. Ascend users love its declarative pipelines, powerful SDK, elegant UI, and extensible plug-in architecture, as well as its support for Python, SQL, Scala, and Java. Ascend automates workloads on Snowflake, Databricks, BigQuery, and open source Spark, and can be deployed in AWS, Azure, or GCP. Go to dataengineeringpodcast.com/ascend and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $5,000 when you become a customer.

Your host is Tobias Macey and today I’m interviewing Nandam Karthik about Optimus and how it pairs with dbt to help data analysts deliver insights.

podcast_episode
by Cris deRitis, Mark Obrinsky (National Multifamily Housing Council), Mark Zandi (Moody's Analytics), Marisa DiNatale (Moody's Analytics)

Mark Obrinsky, Chief Economist for the National Multifamily Housing Council, joins the podcast and gives a detailed housing outlook. Topics include rent growth, the housing shortage, and the impact of inflation on the housing market. Mark and Cris also welcome Marisa DiNatale as the new co-host of Inside Economics. A full episode transcript is available. Follow Mark Zandi @MarkZandi, Cris deRitis @MiddleWayEcon, and Marisa DiNatale on LinkedIn for additional insight.

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

Datatopics is a podcast presented by Kevin Missoorten that talks through the fuzzy and misunderstood concepts in the world of data, analytics, and AI to get to the bottom of things. In this first episode of a mini-series on AI Ethics, we try to pin down why ethics is such a HOT topic in AI and what is required to make an AI solution "Ethical".

These and many more questions are tackled in this episode with our very own Virginie & Tim and super guests Rob Heyman and Jan de Bruyne!

Datatopics is brought to you by Dataroots. Music: The Gentlemen - DivKid. The thumbnail is generated by Midjourney.

R 4 Data Science Quick Reference: A Pocket Guide to APIs, Libraries, and Packages

In this handy quick reference book you'll be introduced to several R data science packages, with examples of how to use each of them. All concepts are covered concisely, with many illustrative examples using the following packages: readr, tibble, forcats, lubridate, stringr, tidyr, magrittr, dplyr, purrr, ggplot2, modelr, and more. With R 4 Data Science Quick Reference, you'll have the code, APIs, and insights to write data science applications in the R programming language. You'll also be able to carry out data analysis. All source code used in the book is freely available on GitHub.

What You'll Learn

  • Implement applicable R 4 programming language specification features
  • Import data with readr
  • Work with categories using forcats, time and dates with lubridate, and strings with stringr
  • Format data using tidyr and then transform that data using magrittr and dplyr
  • Write functions with R for data science, data mining, and analytics-based applications
  • Visualize data with ggplot2 and fit data to models using modelr

Who This Book Is For

Programmers new to R's data science, data mining, and analytics packages. Some prior coding experience with R in general is recommended.

Data Storytelling with Google Looker Studio

Data Storytelling with Google Looker Studio is your definitive guide to creating compelling dashboards using Looker Studio. In this book, you'll journey through the principles of effective data visualization and learn how to harness Looker Studio to convey impactful data stories. Step by step, you'll acquire the skills to design, build, and refine dashboards using real-world data. What this Book will help me do Understand and apply data visualization principles to enhance data analysis and storytelling. Master the features and capabilities of Google Looker Studio for dashboard building. Learn to use a structured 3D approach - determine, design, and develop - for creating dashboards. Explore practical examples to apply your knowledge effectively in real projects. Gain insights into monitoring and measuring the impact of Looker Studio dashboards. Author(s) Sireesha Pulipati is an accomplished data analytics professional with extensive experience in business intelligence tools and data visualization. Leveraging her years of expertise, she has crafted this book to empower readers to effectively use Looker Studio. Sireesha's approachable teaching style and practical insights make complex concepts accessible to learners. Who is it for? This book is perfect for aspiring data analysts eager to master data visualization and dashboard design. It caters to beginners and requires no prior experience, making it a great starting point. Intermediate and seasoned professionals in analytics and business intelligence who are keen on using Looker Studio effectively will find immense value as well. If you aim to create insightful dashboards and refine your data storytelling skills, this book is for you.

On today’s episode, we’re talking to Sunthar Premakumar. Sunthar is the SVP of Product at Rex, a technology, investment and real estate company. We dive into a wide range of fascinating topics, including:

  • How Rex got started and the problems it solves today.
  • The importance of getting your business in front of customers early.
  • How SaaS sales differs from traditional sales.
  • Is it best to develop a product first or build a product around the right person?
  • Could the “superSaaS” model eventually take over and push out individual SaaS companies?
  • Lessons Sunthar has learned and things he would do differently.

This episode is brought to you by Qrvey

The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.

Qrvey, the modern no-code analytics solution for SaaS companies on AWS.

#saas #analytics #AWS #BI

What does analytics engineering have to do with product experimentation?

As analytics engineers, we make impact by building analytics things (models, pipelines, visualizations) that help stakeholders make decisions about what to do next. What if we could also make impact by driving a culture of experimentation—which will help those same stakeholders make decisions too?

Join Adam Stone (Netlify) as he draws on his vast experimentation experience and explains how analytics engineers can use a combination of a program-building mindset, organizational mentoring (and cheerleading), and off-the-shelf tools to partner with product and engineering teams to quickly spin up meaningful experimentation.
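As a concrete (and entirely hypothetical) example of the off-the-shelf tooling that makes this partnership cheap to start, a two-proportion z-test is often all it takes to read out a simple conversion experiment. The numbers below are invented, and the statsmodels call is one common choice, not something prescribed by the talk.

```python
from statsmodels.stats.proportion import proportions_ztest  # pip install statsmodels

# Invented results: conversions and exposures for control vs. treatment.
conversions = [342, 391]
exposures = [10_000, 10_050]

stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
# If p falls below the alpha you pre-registered (say 0.05), the observed
# difference in conversion rate is unlikely to be chance alone.
```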

Check the slides here: https://docs.google.com/presentation/d/1vWfhfTnC9-NV-qrQLTkGk4qgdi-19JA8E3p6fpniQe0/edit?usp=sharing

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

Data teams: kill your service desk

I have spoken with many data people about their most hated tool: the “service desk” used to manage the requests of others.

One team has a Slack channel called “data-sucks.” The name tells you much about people’s feelings. Others use Google forms, JIRA Service Desk, or some horrifying combination.

Teams debate the ways that they can prevent, or reduce, these requests in the first place. Should we make requests more difficult? Are we stuck in the service trap? Should we enable self-service analytics?

Initiatives are undertaken, and afterwards comes a terrible truth: the questions and our loathing remain. And the others? They still hate making the requests, too.

Why do we all feel this way, and why are we still unable to do anything about it?

Check the slides here: https://docs.google.com/presentation/d/1-JmXX1RZHLf3VKRZJoPHw-QFUodODzOmu13GJSVdkM4/edit?usp=sharing

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

Finding common ground, to build common ground: dbt at the enterprise

A Senior Business Analyst, an Analytics Engineer, and a Data Quality Engineer walk into the office. All three of them ask for dbt. Why? Join Coco Haury (WEX, Inc.) to hear about the universal appeal of dbt to folks outside traditional data engineer and analyst roles, and how dbt has helped WEX modernize their entire enterprise data platform.

Check the slides here: https://docs.google.com/presentation/d/1XNphR_il_3SGy9dU7G7PxmVoCXCCWBuFMvZYESN6KgE/edit?usp=sharing

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

From Excel to IDE and beyond: The origins and future of the data developer

Data teams aren't only working in Excel and Tableau anymore. We're working in GitHub, VSCode, BigQuery, and our command line. We have the modern data pipeline and we're developing a lot more like engineers...but our development workflows are anything but modern. Why don't we get developer previews? Or the ability to test our changes to anything downstream of dbt? Why do our tools not talk to each other? We believe that analytics engineers deserve better and we want to show you what "better data development" could look like in the modern data pipeline (we promise, it's really nice).

Check the slides here: https://docs.google.com/presentation/d/1NtAOknFDmJiIQD6cSTbhmfZ-6ZFhKKH3GKcEZ3AL5eg/edit?usp=sharing

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.