talk-data.com

Topic

Analytics

data_analysis insights metrics

4552

tagged

Activity Trend

398 peak/qtr
2020-Q1 2026-Q1

Activities

4552 activities · Newest first

The Care and Feeding of Data Scientists

As a discipline, data science is relatively young, but the job of managing data scientists is younger still. Many people undertake this management position without the tools, mentorship, or role models they need to do it well. This report examines the steps necessary to build, manage, sustain, and retain a growing data science team. You’ll learn how data science management is similar to but distinct from other management types. Michelangelo D’Agostino, VP of Data Science and Engineering at ShopRunner, and Katie Malone, Director of Data Science at Civis Analytics, provide concrete tips for balancing and structuring a data science team, recruiting and interviewing the best candidates, and keeping them productive and happy once they're in place. In this report, you'll:

Explore data scientist archetypes, such as operations and research, that fit your organization
Devise a plan to recruit, interview, and hire members for your data science team
Retain your hires by providing challenging work and learning opportunities
Explore Agile and OKR methodology to determine how your team will work together
Provide your team with a career ladder through guidance and mentorship

In this episode of Analytics on Fire, I talk with one of my mentors and best friends, Ismail Maiyegun. Ismail, the CTO and co-founder of Hingeto, a Y Combinator-funded and fast-growing Silicon Valley startup, shares how they use business intelligence to scale their high seven-figure annual recurring revenue! He also demystifies the hype around artificial intelligence in Silicon Valley and shares how they focus on what's important.

Sponsor

This exciting season of AOF is sponsored by our BI Data Storytelling Mastery Accelerator 3-Day Live workshops. Many BI teams are still struggling to deliver consistent, highly engaging analytics their users love. At the end of three days, you'll leave with a clear BI delivery action plan for your BI team. Join us!

Enjoyed the Show? Please leave us a review on iTunes.   For all links and resources mentioned visit: https://bibrainz.com/podcast/26

podcast_episode
by Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Maryam Jahanshahi (TapRecruit), Moe Kiss (Canva), Michael Helbling (Search Discovery)

What's in a job title? That which we call a senior data scientist by any other job title would model as predictively...  This, dear listener, is why the hosts of this podcast crunch data rather than dabble in iambic pentameter. With sincere apologies to William Shakespeare, we sat down with Maryam Jahanshahi to discuss job titles, job descriptions, and the research, experiments, and analysis that she has conducted as a research scientist at TapRecruit, specifically relating to data science and analytics roles. The discussion was intriguing and enlightening! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

Summary Building and maintaining a data lake is a choose your own adventure of tools, services, and evolving best practices. The flexibility and freedom that data lakes provide allows for generating significant value, but it can also lead to anti-patterns and inconsistent quality in your analytics. Delta Lake is an open source, opinionated framework built on top of Spark for interacting with and maintaining data lake platforms that incorporates the lessons learned at DataBricks from countless customer use cases. In this episode Michael Armbrust, the lead architect of Delta Lake, explains how the project is designed, how you can use it for building a maintainable data lake, and some useful patterns for progressively refining the data in your lake. This conversation was useful for getting a better idea of the challenges that exist in large scale data analytics, and the current state of the tradeoffs between data lakes and data warehouses in the cloud.
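The core idea discussed in the episode is that Delta Lake layers an ACID transaction log over plain data files: readers reconstruct a consistent snapshot of the table by replaying committed log entries. Here is a minimal pure-Python sketch of that log-replay idea (the function name, entry structure, and file names are illustrative, not Delta Lake's actual on-disk format):

```python
def replay(log_entries):
    """Return the set of live data files after replaying commit entries in order."""
    live = set()
    for entry in log_entries:
        for action in entry["actions"]:
            if action["op"] == "add":
                live.add(action["file"])
            elif action["op"] == "remove":
                live.discard(action["file"])
    return live

log = [
    {"version": 0, "actions": [{"op": "add", "file": "part-0.parquet"}]},
    {"version": 1, "actions": [{"op": "add", "file": "part-1.parquet"}]},
    # A compaction commit atomically swaps small files for one larger file.
    {"version": 2, "actions": [{"op": "remove", "file": "part-0.parquet"},
                               {"op": "remove", "file": "part-1.parquet"},
                               {"op": "add", "file": "part-2.parquet"}]},
]

print(replay(log))       # the current snapshot
print(replay(log[:2]))   # "time travel": the table as of version 1
```

Because each commit is atomic, a reader either sees all of version 2's swaps or none of them, which is how this design avoids the partial-write anti-patterns the episode describes in hand-rolled data lakes.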

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters, including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
And to keep track of how your team is progressing on building new pipelines and tuning their workflows, you need a project management system designed by engineers, for engineers. Clubhouse lets you craft a workflow that fits your style, including per-team tasks, cross-project epics, a large suite of pre-built integrations, and a simple API for crafting your own. With such an intuitive tool it’s easy to make sure that everyone in the business is on the same page. Data Engineering Podcast listeners get 2 months free on any plan by going to dataengineeringpodcast.com/clubhouse today and signing up for a free trial. Support the show and get your data projects in order!
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference.
Coming up this fall are the combined events of Graphorum and the Data Architecture Summit. The agendas have been announced and super early bird registration for up to $300 off is available until July 26th, with early bird pricing for up to $200 off through August 30th. Use the code BNLLC to get an additional 10% off any pass when you register. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch. To help other people find the show, please leave a review on iTunes and tell your friends and co-workers.
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat.
Your host is Tobias Macey and today I’m interviewing Michael Armbrust about Delta Lake, an open source storage layer that brings ACID transactions to Apache Spark and big data workloads.

Interview

Introduction
How did you get involved in the area of data management?

Getting Started with Tableau 2019.2 - Second Edition

"Getting Started with Tableau 2019.2" is your primer to mastering the latest version of Tableau, a leading tool for data visualization and analysis. Whether you're new to Tableau or looking to upgrade your skills, this book will guide you through both foundational and advanced features, enabling you to create impactful dashboards and visual analytics.

What this Book will help me do

Understand and utilize the latest features introduced in Tableau 2019.2, including natural language queries in Ask Data.
Learn how to connect to diverse data sources, transform data by pivoting fields, and split columns effectively.
Gain skills to design intuitive data visualizations and dashboards using various Tableau mark types and properties.
Develop interactive and storytelling-based dashboards to communicate insights visually and effectively.
Discover methods to securely share your analyses through Tableau Server, enhancing collaboration.

Author(s)
Tristan Guillevin is an experienced data visualization consultant and an expert in Tableau. Having helped several organizations adopt Tableau for business intelligence, he brings a practical and results-oriented approach to teaching. Tristan's philosophy is to make data accessible and actionable for everyone, no matter their technical background.

Who is it for?
This book is ideal for Tableau users and data professionals looking to enhance their skills on Tableau 2019.2. If you're passionate about uncovering insights from data but need the right tools to communicate and collaborate effectively, this book is for you. It's suited for those with some prior experience in Tableau but also offers introductory content for newcomers. Whether you're a business analyst, data enthusiast, or BI professional, this guide will build solid foundations and sharpen your Tableau expertise.

Ever wonder about the masterminds that create the famous Gartner Analytics and BI Magic Quadrant? Meet Cindi Howson, former VP at Gartner and current Chief Data Strategy Officer at ThoughtSpot. Cindi's unorthodox start in the BI world and quick rise to well-known author, BI Scorecard creator, and go-to for the media is nothing short of spectacular. Find out why Cindi is one of the most inspirational and well-known women in BI!

Sponsor

This exciting season of AOF is sponsored by our BI Data Storytelling Mastery Accelerator 3-Day Live workshops. Many BI teams are still struggling to deliver consistent, highly engaging analytics their users love. At the end of three days, you'll leave with a clear BI delivery action plan for your BI team. Join us!

Enjoyed the Show? Please leave us a review on iTunes.   For all links and resources mentioned visit: https://bibrainz.com/podcast/25

Managing Your Data Science Projects: Learn Salesmanship, Presentation, and Maintenance of Completed Models

At first glance, the skills required to work in the data science field appear to be self-explanatory. Do not be fooled. Impactful data science demands an interdisciplinary knowledge of business philosophy, project management, salesmanship, presentation, and more. In Managing Your Data Science Projects, author Robert de Graaf explores important concepts that are frequently overlooked in much of the instructional literature that is available to data scientists new to the field. If your completed models are to be used and maintained most effectively, you must be able to present and sell them within your organization in a compelling way. The value of data science within an organization cannot be overstated. Thus, it is vital that strategies and communication between teams are dexterously managed. The three main ways that data science strategy is used in a company are to research its customers, assess risk analytics, and log operational measurements. These all require different managerial instincts, backgrounds, and experiences, and de Graaf cogently breaks down the unique reasons behind each. They must align seamlessly to eventually be adopted as dynamic models. Data science is a relatively new discipline, and as such, its internal processes are not as well developed within an operational business as others. With Managing Your Data Science Projects, you will learn how to create products that solve important problems for your customers and ensure that the initial success is sustained throughout the product’s intended life. Your users will trust you and your models, and most importantly, you will be a more well-rounded and effectual data scientist throughout your career.

Who This Book Is For
Early-career data scientists, managers of data scientists, and those interested in entering the field of data science

Are you a data professional struggling to influence others with your data savvy? Wondering what may be missing? If you're one of those data professionals who is seeking a proven way to influence your users and win their trust, this BI masterclass is for you! Ryan Deeds, a classified BI 'rockstar', is yet another example of an 'action taker' who took our BI Data Storytelling Mastery workshop and completely transformed his career. Today, Ryan is the VP of Data Technology at Assurex Global, but he didn't start out there! Listen to how Ryan went from a normal data professional to a chief data storyteller in a few months!

Sponsor

This exciting season of AOF is sponsored by our BI Data Storytelling Mastery Accelerator 2-Day Live workshops. Many BI teams are still struggling to deliver consistent, highly engaging analytics their users love. At the end of two days, you'll leave with a clear BI delivery action plan for your BI team. Join us!

Enjoyed the Show? Please leave us a review on iTunes.   For all links and resources mentioned visit: https://bibrainz.com/podcast/24

Stream Processing with Apache Spark

Before you can build analytics tools to gain quick insights, you first need to know how to process data in real time. With this practical guide, developers familiar with Apache Spark will learn how to put this in-memory framework to use for streaming data. You’ll discover how Spark enables you to write streaming jobs in almost the same way you write batch jobs. Authors Gerard Maas and François Garillot help you explore the theoretical underpinnings of Apache Spark. This comprehensive guide features two sections that compare and contrast the streaming APIs Spark now supports: the original Spark Streaming library and the newer Structured Streaming API.

Learn fundamental stream processing concepts and examine different streaming architectures
Explore Structured Streaming through practical examples; learn different aspects of stream processing in detail
Create and operate streaming jobs and applications with Spark Streaming; integrate Spark Streaming with other Spark APIs
Learn advanced Spark Streaming techniques, including approximation algorithms and machine learning algorithms
Compare Apache Spark to other stream processing projects, including Apache Storm, Apache Flink, and Apache Kafka Streams
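The book's central point — that streaming code can read almost like batch code — can be illustrated without Spark at all. In this pure-Python sketch (a tumbling-window word count; function names, the window size, and the sample data are all illustrative), the streaming version reuses the batch function unchanged and only changes how input arrives:

```python
from collections import Counter
from itertools import islice

def batch_count(lines):
    """Batch job: word counts over a complete, bounded dataset."""
    return Counter(word for line in lines for word in line.split())

def streaming_count(lines, window=2):
    """Streaming job: the same core logic, applied per tumbling window of lines."""
    it = iter(lines)
    while chunk := list(islice(it, window)):
        yield batch_count(chunk)  # identical logic, incremental input

data = ["spark spark flink", "kafka spark", "storm flink"]
print(batch_count(data))
for i, counts in enumerate(streaming_count(data)):
    print("window", i, dict(counts))
```

This mirrors the design choice the book attributes to Spark: keep the transformation logic identical and let the engine decide whether the input is a bounded table or an unbounded stream of micro-batches.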

Remember that time you ran a lunch-and-learn at your company to show a handful of co-workers some Excel tips? What would have happened if you actually needed to fully train them on Excel, and there were approximately a gazillion users*? Or, have you ever watched a Google Analytics or Google Tag Manager training video? Or perused their documentation? How does Google actually think about educating a massive and diverse set of users on their platform? And, what can we learn from that when it comes to educating our in-house users on tool, processes, and concepts? In this episode, Justin Cutroni from Google joined the gang to discuss this very topic! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

Applied Supervised Learning with R

Applied Supervised Learning with R equips you with the essential knowledge and practical skills to leverage machine learning techniques for solving business problems using R. With this book, you'll gain hands-on experience in implementing various supervised learning models, assessing their performance, and selecting the best-suited method for your objectives.

What this Book will help me do

Gain expertise in identifying and framing business problems suitable for supervised learning.
Acquire skills in data wrangling and visualization using R packages like dplyr and ggplot2.
Master techniques for tuning hyperparameters to optimize machine learning models.
Understand methods for feature selection and dimensionality reduction to enhance model performance.
Learn how to deploy machine learning models to production environments, such as AWS Lambda.

Author(s)
Karthik Ramasubramanian and Jojo Moolayil are both seasoned data science practitioners and educators who bring a wealth of experience in machine learning and analytics. With a deep understanding of R and its applications in real-world scenarios, they offer practical insights and actionable examples to their readers. Their teaching style focuses on clarity and practical application.

Who is it for?
This book is ideal for data analysts, data scientists, and data engineers at a beginner to intermediate level who aim to master supervised machine learning with R. Readers should have basic knowledge of statistics, probabilities, and R programming. It is designed for those eager to apply machine learning techniques to real-world problems and improve their decision-making capabilities.

Hands-On Time Series Analysis with R

Dive into the intricacies of time series analysis and forecasting with R in this comprehensive guide. From foundational concepts to practical implementations, this book equips you with the tools and techniques to analyze, understand, and predict time-dependent data.

What this Book will help me do

Develop insights by visualizing time-series data and identifying patterns.
Master statistical time-series concepts including autocorrelation and moving averages.
Learn and implement forecasting models like ARIMA and exponential smoothing.
Apply machine learning methodologies for advanced time-series predictions.
Work with key R packages for cleaning, manipulating, and analyzing time-series data.

Author(s)
Rami Krispin is an accomplished statistician and R programmer with extensive experience in data analysis and time-series modeling. His hands-on approach in utilizing R packages and libraries brings clarity to complex time-series concepts. With a passion for teaching and simplifying intricate topics, Rami ensures readers both grasp the theories and apply them effectively.

Who is it for?
This book is ideal for data analysts, statisticians, and R developers interested in mastering time-series analysis for real-world applications. Designed for readers with a basic understanding of statistics and R programming, it offers a practical approach to learning effective forecasting and data visualization techniques. Professionals aiming to expand their skillset in predictive analytics will find it particularly beneficial.
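Two of the statistical building blocks mentioned in the blurb, moving averages and autocorrelation, are simple enough to sketch in plain Python. This is a simplified illustration of the concepts (not the book's R code; the sample series is invented):

```python
def moving_average(xs, window):
    """Unweighted moving average over a sliding window."""
    return [sum(xs[i:i + window]) / window
            for i in range(len(xs) - window + 1)]

def autocorr(xs, lag):
    """Sample autocorrelation at the given lag (normalized by overall variance)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[i] - mean) * (xs[i + lag] - mean) for i in range(n - lag))
    return cov / var

series = [1, 2, 3, 4, 5, 4, 3, 2, 1, 2, 3, 4]
print(moving_average(series, 3))
print(autocorr(series, 1))  # a smooth series gives a strong positive lag-1 value
```

A high lag-1 autocorrelation like this one is exactly the signal that motivates the AR terms in models such as ARIMA: yesterday's value carries real information about today's.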

Learning Elastic Stack 7.0 - Second Edition

"Learning Elastic Stack 7.0" introduces you to the tools and techniques of Elastic Stack, covering Elasticsearch, Logstash, Beats, and Kibana. With clear explanations and practical examples, this book helps you grasp the 7.0 version's new features and capabilities, empowering you to build and deploy robust, real-time data processing applications.

What this Book will help me do

Gain the necessary skills to install and configure Elastic Stack for professional use.
Master the data handling capabilities of Elasticsearch for distributed search and analytics.
Develop expertise in creating data pipelines with Logstash and other ingestion tools.
Learn to utilize Kibana to visualize and interpret complex datasets.
Acquire knowledge of deploying Elastic Stack solutions both on-premise and in cloud environments.

Author(s)
Pranav Shukla and Sharath Kumar M N are experienced software engineers and data professionals with a profound knowledge of databases, distributed systems, and cloud architectures. They specialize in educating developers through structured guidance and proven methodologies related to data handling and visualization.

Who is it for?
This book is designed for software engineers, data analysts, and technical architects interested in learning the Elastic Stack tools from the ground up. Readers familiar with database concepts but new to Elastic Stack will find this book particularly helpful. Advanced users seeking to understand the updates in Elastic Stack 7.0 will also find it useful. If you wish to apply Elastic Stack to real-time data processing and analytics, this book provides a strong foundation.

Have you ever thought about starting your own data community? Ever wondered what really goes into it? Data communities are hot and have become the number one source for learning and networking! Our guest today talks to us about exactly how he grew Experian's online data community from zero to 200K followers in two years. Mike Delgado is the host of the popular Experian #DataTalk Show, where he interviews top data scientists and data rockstars across the globe via Facebook Live. In this BI masterclass, he shares his unique start in data, how Experian's switch to being data-driven was a game-changer, and how he continues to grow the #DataTalk show amid high competition. You don't want to miss it!

Sponsor

This exciting season of AOF is sponsored by our BI Data Storytelling Mastery Accelerator 2-Day Live workshops. Many BI teams are still struggling to deliver consistent, highly engaging analytics their users love. At the end of two days, you'll leave with a clear BI delivery action plan for your BI team. Join us!

Enjoyed the Show? Please leave us a review on iTunes.   For all links and resources mentioned visit: https://bibrainz.com/podcast/23

Summary Some problems in data are well defined and benefit from a ready-made set of tools. For everything else, there’s Pachyderm, the platform for data science that is built to scale. In this episode Joe Doliner, CEO and co-founder, explains how Pachyderm started as an attempt to make data provenance easier to track, how the platform is architected and used today, and examples of how the underlying principles manifest in the workflows of data engineers and data scientists as they collaborate on data projects. In addition to all of that he also shares his thoughts on their recent round of fund-raising and where the future will take them. If you are looking for a set of tools for building your data science workflows then Pachyderm is a solid choice, featuring data versioning, first class tracking of data lineage, and language agnostic data pipelines.
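The data versioning and provenance ideas behind Pachyderm can be sketched with content addressing in plain Python. This is a conceptual illustration, not Pachyderm's API: the `commit` function and its file layout are invented for the example, but the property it demonstrates — that a commit ID pins an exact, reproducible version of the data — is the one that makes lineage tracking possible:

```python
import hashlib
import json

def commit(parent_hash, files):
    """Create an immutable, content-addressed commit ID for a data repo.

    Hashing the content together with the parent commit pins an exact,
    reproducible version of the data, which is what lets a pipeline record
    which input versions produced which outputs (provenance).
    """
    payload = json.dumps({"parent": parent_hash, "files": sorted(files.items())})
    return hashlib.sha256(payload.encode()).hexdigest()

v1 = commit(None, {"users.csv": "id,name\n1,ada"})
v2 = commit(v1, {"users.csv": "id,name\n1,ada\n2,bob"})

print(v1[:12], v2[:12])  # two distinct, reproducible version IDs
```

Re-running `commit` with identical content and history always yields the same ID, while any change to the data or its ancestry yields a new one — so a result tagged with its input commit can always be traced back and reproduced.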

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters, including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
Alluxio is an open source, distributed data orchestration layer that makes it easier to scale your compute and your storage independently. By transparently pulling data from underlying silos, Alluxio unlocks the value of your data and allows for modern computation-intensive workloads to become truly elastic and flexible for the cloud. With Alluxio, companies like Barclays, JD.com, Tencent, and Two Sigma can manage data efficiently, accelerate business analytics, and ease the adoption of any cloud. Go to dataengineeringpodcast.com/alluxio today to learn more and thank them for their support.
Understanding how your customers are using your product is critical for businesses of any size. To make it easier for startups to focus on delivering useful features, Segment offers a flexible and reliable data infrastructure for your customer analytics and custom events. You only need to maintain one integration to instrument your code and get a future-proof way to send data to over 250 services with the flip of a switch. Not only does it free up your engineers’ time, it lets your business users decide what data they want where. Go to dataengineeringpodcast.com/segmentio today to sign up for their startup plan and get $25,000 in Segment credits and $1 million in free software from marketing and analytics companies like AWS, Google, and Intercom. On top of that you’ll get access to Analytics Academy for the educational resources you need to become an expert in data analytics for measuring product-market fit.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch. To help other people find the show, please leave a review on iTunes and tell your friends and co-workers.

Send us a text This week host Al Martin welcomes Greg Bonnette, who is Vice President of Data & Analytics Strategy at Ironside. He comes on the show to discuss in more depth his role engaging with clients, striving to find appropriate business solutions to their data problems. Discover typical industry issues and hear how Greg overcomes them in this week's episode.

Check us out on: - YouTube - Apple Podcasts - Google Play Music - Spotify - TuneIn - Stitcher  Show Notes

00:10 - Connect with Producer Steve Moore on LinkedIn and Twitter.    00:15 - Connect with Producer Liam Seston on LinkedIn and Twitter.    00:20 - Connect with Producer Rachit Sharma on LinkedIn. 00:25 - Connect with Host Al Martin on LinkedIn and Twitter. 00:40 - Connect with Greg on LinkedIn and Twitter. 04:25 – What is Ironside? 21:27 - Learn about Watson Machine Learning and A.I. Modelling. 41:39 - Learn about Daniel Kahneman and his book, Thinking, Fast and Slow. Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

Have you ever wondered what really happens in a $10M+ analytics transformation? Surely it's more than building analytics. The stakes are higher and so are the risks! Well, my good friend Andrew Joo, who is the head of the Energy and Utilities practice for Infosys Consulting, is a master of making these "unicorn" programs successful. Whether your project is big or small, there is something for everyone to learn!

Sponsor

This exciting season of AOF is sponsored by our BI Data Storytelling Mastery Accelerator 2-Day Live workshops. Many BI teams are still struggling to deliver consistent, highly engaging analytics their users love. At the end of two days, you'll leave with a clear BI delivery action plan for your BI team. Join us!

Enjoyed the Show? Please leave us a review on iTunes.   For all links and resources mentioned visit: https://bibrainz.com/podcast/22

A simple recipe for a delicious analytics platform: combine 3 cups of data schema with a pinch of JavaScript in a large pot of cloud storage. Bake in the deployment oven for a couple of months, and savory insights will emerge. Right? Why does this recipe have both 5-star and 1-star ratings?! On this episode, long-standing digital analytics maven June Dershewitz, Director of Analytics at Twitch, drops by the podcast's analytics kitchen to discuss the relative merits of building versus buying an analytics platform. Or, of course, doing something in between!

The episode was originally 3.5 hours long, but we edited out most of Michael's tangents into gaming geekdom, which brought the run-time down to a more normal length.

For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

Summary In recent years the traditional approach to building data warehouses has shifted from transforming records before loading, to transforming them afterwards. As a result, the tooling for those transformations needs to be reimagined. The data build tool (dbt) is designed to bring battle tested engineering practices to your analytics pipelines. By providing an opinionated set of best practices it simplifies collaboration and boosts confidence in your data teams. In this episode Drew Banin, creator of dbt, explains how it got started, how it is designed, and how you can start using it today to create reliable and well-tested reports in your favorite data warehouse.
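The "well-tested reports" the summary mentions come from asserting properties of the data itself. A minimal pure-Python sketch of two of dbt's built-in test types, not_null and unique (illustrative only — dbt actually compiles these checks to SQL run against the warehouse; the sample rows are invented):

```python
def not_null(rows, column):
    """Return the rows that fail a not_null check on the column."""
    return [row for row in rows if row[column] is None]

def unique(rows, column):
    """Return the values that fail a uniqueness check on the column."""
    seen, dupes = set(), set()
    for row in rows:
        value = row[column]
        if value in seen:
            dupes.add(value)
        seen.add(value)
    return sorted(dupes)

orders = [
    {"order_id": 1, "customer": "ada"},
    {"order_id": 2, "customer": None},   # would fail not_null
    {"order_id": 2, "customer": "bob"},  # would fail unique
]

# A dbt-style run: every test must come back empty before the model ships.
print(not_null(orders, "customer"))
print(unique(orders, "order_id"))
```

The engineering practice being borrowed is the same as in software testing: a report built on a table that passes these assertions is one the data team can confidently stand behind.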

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters, including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
Understanding how your customers are using your product is critical for businesses of any size. To make it easier for startups to focus on delivering useful features, Segment offers a flexible and reliable data infrastructure for your customer analytics and custom events. You only need to maintain one integration to instrument your code and get a future-proof way to send data to over 250 services with the flip of a switch. Not only does it free up your engineers’ time, it lets your business users decide what data they want where. Go to dataengineeringpodcast.com/segmentio today to sign up for their startup plan and get $25,000 in Segment credits and $1 million in free software from marketing and analytics companies like AWS, Google, and Intercom. On top of that you’ll get access to Analytics Academy for the educational resources you need to become an expert in data analytics for measuring product-market fit.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch. To help other people find the show, please leave a review on iTunes and tell your friends and co-workers.
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat.
Your host is Tobias Macey and today I’m interviewing Drew Banin about DBT, the Data Build Tool, a toolkit for building analytics the way that developers build applications.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by explaining what DBT is and your motivation for creating it?
Where does it fit in the overall landscape of data tools and the lifecycle of data in an analytics pipeline?
Can you talk through the workflow for someone using DBT?
One of the useful features of DBT for stability of analytics is the ability to write and execute tests. Can you explain how those are implemented?
The packaging capabilities are beneficial for enabling collaboration. Can you talk through how the packaging system is implemented?

Are these packages driven by Fishtown Analytics or the dbt community?

What are the limitations of modeling everything as a SELECT statement? Making SQL code reusable is notoriously difficult. How does the Jinja templating of DBT address this issue and what are the shortcomings?

What are your thoughts on higher level approaches to SQL that compile down to the specific statements?

Can you explain how DBT is implemented and how the design has evolved since you first began working on it?
What are some of the features of DBT that are often overlooked which you find particularly useful?
What are some of the most interesting/unexpected/innovative ways that you have seen DBT used?
What are the additional features that the commercial version of DBT provides?
What are some of the most useful or challenging lessons that you have learned in the process of building and maintaining DBT?
When is it the wrong choice?
What do you have planned for the future of DBT?
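The interview questions about modeling everything as a SELECT and templating SQL can be made concrete with a small sketch. This uses Python's stdlib `string.Template` as a stand-in for Jinja, and the model name, column names, and `compile_model` helper are all invented for illustration — it is not dbt's implementation, only the shape of the idea:

```python
from string import Template

# In dbt, a "model" is just a SELECT statement; the tool resolves references
# between models and wraps each SELECT in the DDL needed to materialize it.
model_sql = Template(
    "SELECT customer_id, SUM(amount) AS total\n"
    "FROM ${source_table}\n"
    "GROUP BY customer_id"
)

def compile_model(name, sql_template, refs):
    """Resolve templated references, then wrap the SELECT in a CTAS statement."""
    body = sql_template.substitute(refs)
    return "CREATE TABLE {} AS (\n{}\n)".format(name, body)

compiled = compile_model("customer_totals", model_sql,
                         {"source_table": "analytics.orders"})
print(compiled)
```

Because references are resolved at compile time, the tool knows which models depend on which, and can build them in dependency order — the property that makes the SELECT-only constraint powerful despite its limits (no arbitrary DML, for example).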

Contact Info

Email @drebanin on Twitter drebanin on GitHub

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

DBT Fishtown Analytics 8Tracks Internet Radio Redshift Magento Stitch Data Fivetran Airflow Business Intelligence Jinja template language BigQuery Snowflake Version Control Git Continuous Integration Test Driven Development Snowplow Analytics

Podcast Episode

dbt-utils We Can Do Better Than SQL blog post from EdgeDB EdgeDB Looker LookML

Podcast Interview

Presto DB

Podcast Interview

Spark SQL Hive Azure SQL Data Warehouse Data Warehouse Data Lake Data Council Conference Slowly Changing Dimensions dbt Archival Mode Analytics Periscope BI dbt docs dbt repository

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast

Send us a text How do you provide answers to clients before they ask? What do you do with an abundance of client data? In this episode of Making Data Simple, Tracy Bolot, Director of Digital Client Support for Analytics at IBM, talks about how to maximize teamwork and strengths to enrich your clients' experience. Show Notes 00:15 Connect with Al Martin on Twitter (@amartin_v) and LinkedIn (linkedin.com/in/al-martin-ku) 00:30 Connect with Tracy Bolot on LinkedIn (linkedin.com/in/tracy-bolot-992a6b4a) 01:30 Link to Global Elite? 04:00 Learn more about Informix software 13:40 Check out TurboTax 27:20 Click here to learn more about information sharing and governance at IBM Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.