talk-data.com

Topic

Analytics

data_analysis insights metrics

4552

tagged

Activity Trend

398 peak/qtr (2020-Q1 to 2026-Q1)

Activities

4552 activities · Newest first

The COVID shock forced enterprises in every market to accelerate and reshape their data analytics strategies, a trend that is likely to continue. “Data Elite” enterprises survived the year through a mix of agility, efficiency, and intelligence. They met these survival requirements by accelerating their digital transformations, adopting cloud data platforms, and embracing advanced analytics. As these data leaders continue their momentum in 2021, the data laggards will strive to catch up.

In this episode, Kevin Petrie, VP of Research at Eckerson Group, interviews Sumeet Agrawal, VP of Product Management at Informatica, to discuss the impact of COVID on enterprises. Sumeet talks about the trends of adoption during the onslaught of COVID and how enterprises are navigating in the post-pandemic era.

Introduction to Business Analytics, Second Edition

This book presents key concepts related to quantitative analysis in business. It is targeted at business students (both undergraduate and graduate) taking an introductory core course. Business analytics has grown to be a key topic in business curricula, and there is a need for stronger quantitative skills and understanding of fundamental concepts. This second edition adds material on Tableau, a very useful software tool for business analytics. This supplements the Excel tools covered in the first edition, including the Analysis ToolPak and Solver.

Business Analytics for Decision Making by Pearson

Business Analytics is now part and parcel of the MBA curriculum at most institutions, as business organizations expect new managers to have a basic knowledge of Analytics. There is also an emerging career opportunity for management graduates with deeper knowledge of Analytics. These professionals would be in Analytics roles, where business knowledge is critical. In this respect, this book will be a suitable textbook for students at the postgraduate level. Beyond this, it will serve as refresher material for working professionals.

Features –

  1. The book is structured to mimic the stages of a typical Analytics process.
  2. It moves from understanding the business problem through data cleaning, exploratory data analysis, model building, and model implementation and evaluation.
  3. An in-depth explanation is provided of the concept of ‘Modelling’.
  4. The book contains many interesting caselets and box items discussing facts and figures relevant to current industrial scenarios.
  5. Resource material for this book includes Instructor PPTs, MCQs, data sets and code for practice, and a set of research questions to take up mini projects.
Google Data Studio for Beginners: Start Making Your Data Actionable

Google Data Studio is becoming a go-to tool in the analytics community. All business roles across the industry benefit from foundational knowledge of this now-essential technology, and Google Data Studio for Beginners is here to provide it. Release your locked-up data and turn it into beautiful, actionable, and shareable reports that can be consumed by experts and novices alike. Authors Grant Kemp and Gerry White begin by walking you through the basics, such as how to create simple dashboards and interactive visualizations. As you progress through Google Data Studio for Beginners, you will build up the knowledge necessary to blend multiple data sources and create comprehensive marketing dashboards. Some intermediate features such as calculated fields, cleaning up data, and data blending to build powerhouse reports are featured as well. Presenting your data in client-ready, digestible forms is a key factor that many find to be a roadblock, and this book will help strengthen this essential skill in your organization. Centralizing the power from sources such as Google Analytics, online surveys, and a multitude of other popular data management tools puts you as a business leader and analyst ahead of the rest. Your team as a whole will benefit from Google Data Studio for Beginners, because by using these tools, teams can collaboratively work on data to build their understanding and turn their data into action. Data Studio is quickly solidifying itself as the industry standard, and you don’t want to miss this essential guide for excelling in it.
What You Will Learn:
  - Combine various data sources to create great-looking and actionable visualizations
  - Reuse and modify dashboards that have been created by industry pros
  - Use intermediate features such as calculated fields and data blending to build powerhouse reports

Who This Book Is For:
Users looking to learn Google Analytics, SEO professionals, digital marketers, and other business professionals who want to turn their data into an actionable dashboard.

Organizational epistemology. Or: How do we know stuff?

For over a decade, technologists thought that the hard thing about harnessing the exploding amount of data would be about technology: how to store it all, how to process it all, how to analyze it all. Turns out that’s not the hard part. Just as in the wider world, organizations are going through an epistemic crisis: they’re having a hard time knowing what is true and what is false.

Most organizations might not have flat-earthers, fake news, and state-sponsored Twitter bots conducting information warfare, but their challenges determining what’s true are just as existential. Solving them will require good tooling, but even more so a set of core values and supporting cultural norms.

In this video, Tristan Handy, CEO and co-founder of Fishtown Analytics asks: what does that future look like?

Analytics on your analytics, Drizly

Using dbt's metadata on dbt runs (run_results.json), the Drizly analytics team is able to track, monitor, and alert on its dbt models, using Looker to visualize the data. In this video, Emily Hawkins covers how Drizly did this before, using dbt macros and inserts, and how the process was improved by using run_results.json in conjunction with Dagster (and teamwork with Fishtown Analytics!)
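The run_results.json artifact mentioned above can be inspected with a few lines of code. Here is a rough sketch of pulling per-model status and timing out of it; note that the artifact's schema varies across dbt versions, so the key names used here ("results", "unique_id", "status", "execution_time") are assumptions based on recent releases and may need adjusting.

```python
import json

def summarize_run_results(path):
    """Summarize model status and timing from a dbt run_results.json artifact.

    Returns a list of dicts, slowest models first, suitable for loading into
    a warehouse table that a BI tool (e.g. Looker) can chart and alert on.
    """
    with open(path) as f:
        artifact = json.load(f)

    summary = []
    for result in artifact.get("results", []):
        summary.append({
            "model": result.get("unique_id"),
            "status": result.get("status"),
            "seconds": result.get("execution_time"),
        })
    # Slowest models first, so monitoring can focus on the worst offenders.
    summary.sort(key=lambda r: r["seconds"] or 0, reverse=True)
    return summary
```

In a setup like the one described in the talk, an orchestrator such as Dagster would run this after each `dbt run` and append the rows to a history table for trending.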

Practical Tips to Get Started with Technical Blogging

When we invest time in writing (and speaking!) about our work, we unlock superpowers. We deepen our understanding of processes and practices. We increase efficiency by sharing important information with colleagues. We plant the seeds that help others to grow.

In this video, Janessa Lantz and Stephanie Morillo discuss why you should try technical blogging, how to get started with blogging, and tools for building your personal brand.

You will learn about: - How to pick topics/themes - Finding time in your schedule for writing - Structuring blog posts - Common mistakes and pitfalls - How to maintain momentum

Learn more about Stephanie Morillo at: https://www.stephaniemorillo.co/

Learn more about dbt at: https://getdbt.com https://twitter.com/getdbt

Learn more about Fishtown Analytics at: https://fishtownanalytics.com https://twitter.com/fishtowndata https://www.linkedin.com/company/fishtown-analytics/

The Future of the Data Warehouse

Almost all of us are using our data warehouse to power our business intelligence. What if we could use data warehouses to do even more?

What if we could use data warehouses to power internal tooling, machine learning, behavioral analytics, or even customer-facing products?

Is this a future we're heading for, and if so, how do we get there?

In this video, you'll join a discussion with speakers: - Boris Jabes, CEO of Census - Jeremy Levy, CEO of Indicative - Arjun Narayan, CEO of Materialize - Jennifer Li, Partner at a16z as moderator

Learn more about the speakers and their companies at: https://www.getcensus.com/ https://www.indicative.com/ https://materialize.com/ https://a16z.com/

Learn more about dbt at: https://getdbt.com https://twitter.com/getdbt

Learn more about Fishtown Analytics at: https://fishtownanalytics.com https://twitter.com/fishtowndata https://www.linkedin.com/company/fishtown-analytics/

The Importance of Mastering the Basics of Data Analysis

There are many ways to do data analysis depending on the needs of the business, the background and experience of the data analyst, and more.

But one thing's for certain: really good data analysis comes down to mastering the basics.

In this video, Kenny Ning (previously at Better.com) takes inspiration from sushi chefs' mastery of making sushi and applies those concepts to data analysis.

You'll learn about the critical concepts to keep your data platform clean and ready for analysis:

  1. Know your ingredients = Know where your data comes from
  2. Record your recipes = Standardize common logic and documentation
  3. Master egg sushi = Focus on the basics of data analysis first

Learn more about dbt at: https://getdbt.com https://twitter.com/getdbt

Learn more about Fishtown Analytics at: https://fishtownanalytics.com https://twitter.com/fishtowndata https://www.linkedin.com/company/fishtown-analytics/

How JetBlue Secures and Protects Data Using dbt and Snowflake

You probably have customer data in your data warehouse — it's a must-have for understanding a business.

This data very likely includes personally identifiable information (PII) which shouldn't be shared with the entire organization.

How do you protect that data and make sure only authorized employees can see that sensitive information?

In this video, you'll learn from Ashley Van Name how JetBlue approaches data protection, particularly the problem of masking PII at scale by leveraging Snowflake's data masking features straight from their dbt project.
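JetBlue's approach relies on Snowflake's native masking policies, which are defined in SQL and applied from dbt. As a language-agnostic illustration of the underlying idea (role-based masking that returns a stable one-way hash, so masked columns can still be joined and counted), here is a hypothetical sketch; the function name and role names are invented for illustration and are not Snowflake's actual API.

```python
import hashlib

# Roles allowed to see raw PII. In Snowflake this condition would live
# inside a masking policy; here it is just an illustrative set.
AUTHORIZED_ROLES = {"pii_reader"}

def mask_pii(value: str, viewer_role: str) -> str:
    """Return the raw value for authorized roles, otherwise a stable
    truncated SHA-256 token standing in for the original."""
    if viewer_role in AUTHORIZED_ROLES:
        return value
    return hashlib.sha256(value.encode("utf-8")).hexdigest()[:12]
```

Because the hash is deterministic, the same email always masks to the same token, so unauthorized analysts can still group, join, and count on the column without ever seeing the underlying PII.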

Learn more about dbt at: https://getdbt.com https://twitter.com/getdbt

Learn more about Fishtown Analytics at: https://fishtownanalytics.com https://twitter.com/fishtowndata https://www.linkedin.com/company/fishtown-analytics/

How to Audit Your Directed Acyclic Graph (DAG) and Create Modular Data Models

In a world where creating new models is as easy as creating new files, and creating links between those models is as easy as typing ref, a directed acyclic graph (DAG) can get pretty unwieldy!

A complex DAG makes it difficult to understand the upstream and downstream dependencies of a particular table.

The goal is to create a modular data model using staging models (base_, stg_) and marts models (int_, dim_, fct_).
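That layering convention can even be checked mechanically. As a hypothetical sketch (the model names and the dependency mapping are invented for illustration; a real dbt project would read them from its manifest artifact), flagging any staging model that depends on a marts model:

```python
# Layer prefixes from the naming convention above.
STAGING_PREFIXES = ("base_", "stg_")
MARTS_PREFIXES = ("int_", "dim_", "fct_")

def layer(model: str) -> str:
    """Classify a model into a layer by its name prefix."""
    if model.startswith(STAGING_PREFIXES):
        return "staging"
    if model.startswith(MARTS_PREFIXES):
        return "marts"
    return "other"  # e.g. raw source tables

def audit_layering(dag: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Return (model, upstream) pairs where a staging model depends on a
    marts model, violating the staging -> marts direction of the DAG."""
    return [
        (model, upstream)
        for model, upstreams in dag.items()
        for upstream in upstreams
        if layer(model) == "staging" and layer(upstream) == "marts"
    ]
```

For example, a DAG like `{"stg_orders": ["raw_orders"], "fct_orders": ["stg_orders"], "stg_customers": ["fct_orders"]}` would be flagged for the `stg_customers -> fct_orders` edge.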

In this video, Christine Berger of Fishtown Analytics will teach you how to apply the concepts of layering and modularity to your dbt project, all with a fun kitchen metaphor to keep things fresh!

Learn more about dbt at: https://getdbt.com https://twitter.com/getdbt

Learn more about Fishtown Analytics at: https://fishtownanalytics.com https://twitter.com/fishtowndata https://www.linkedin.com/company/fishtown-analytics/

How to Version Control Your Metrics to Create a Single Source of Truth for Business Metrics

What happens when two people come to a meeting to talk about business metrics but they have different values for the same metric?

That meeting ends up being spent discussing how the metric was calculated rather than how to impact it.

In this video, you'll learn how the Fishtown Analytics team uses dbt to version control business metrics and create a single source of truth.

You'll also get a framework for how to implement version control for metrics at your organization.

Learn more about dbt at: https://getdbt.com https://twitter.com/getdbt

Learn more about Fishtown Analytics at: https://fishtownanalytics.com https://twitter.com/fishtowndata https://www.linkedin.com/company/fishtown-analytics/

How to Scale Data Teams with Data Clinics and Balance Short-Term and Long-Term Projects

You’re in a state of flow, building out dbt models and then you get the dreaded message — "Quick question about this data..."

As a data team, how do you balance the roadmap work against those "quick" questions?

How do you prioritize all the work you need to do in the short-term (backlog items) while also working on your long-term projects (roadmap items)?

There are advantages to both backlog and roadmap items. How can data teams get the advantages of both?

In this video, Jacob Frackson will show how Data Clinics, dedicated time set aside to work on these requests, can help your data team achieve this balance and empower self-serve along the way.

Data clinics have helped an organization: - Deliver 80% of Sprint Points - Answer up to 8 data questions per day - 10x weekly self-serve users on BI tools

Learn more about dbt at: https://getdbt.com https://twitter.com/getdbt

Learn more about Fishtown Analytics at: https://fishtownanalytics.com https://twitter.com/fishtowndata https://www.linkedin.com/company/fishtown-analytics/

How JetBlue became a data-driven airline using dbt

What does a data-driven airline look like? How does a data-driven airline behave and treat customers?

JetBlue believes a data-driven airline should: - Offer personalized customer interactions - Predict delays and other "irregular" operations - Enable all analysts to easily access a variety of data sources - Study and monitor operations in real-time to make smarter decisions

The big question is... how does an airline become more data-driven?

In this video, Ashley Van Name shares how a small team of data engineers at JetBlue successfully migrated their entire data warehouse workload to dbt and shares tips for setting yourself up for success with dbt.

Fun fact about JetBlue's dbt project — they have 1,800 data models on top of 280 data sources, have defined 8,500 tests, and they built their entire dbt project in six months!

Learn more about dbt at: https://getdbt.com https://twitter.com/getdbt

Learn more about Fishtown Analytics at: https://fishtownanalytics.com https://twitter.com/fishtowndata https://www.linkedin.com/company/fishtown-analytics/

How to Map the Customer Journey from a Product Perspective Using dbt

In this talk, you'll learn how the team at TULA Skincare took a product perspective to the customer journey to understand how customers progress from basic products to more advanced ones.

It's important to map out the customer journey to understand where customers get stuck, where they need help, and where the business can improve.

However, when folx talk about mapping a customer’s journey, it's typically only from a marketing perspective. Which channels brought a customer into the funnel? How did they end up converting?

This is important, but that only covers the beginning of the journey where they become a customer. What about the rest of the customer journey where they begin to use your product(s) then go on to buy from you again and again?

What does that customer journey look like?

In this video, Sanjana Sen and Grant Winship of Fishtown Analytics talk through how they approached this exercise while working with the TULA team.

Learn more about dbt at: https://getdbt.com https://twitter.com/getdbt

Learn more about Fishtown Analytics at: https://fishtownanalytics.com https://twitter.com/fishtowndata https://www.linkedin.com/company/fishtown-analytics/

Introduction to dbt (data build tool) from Fishtown Analytics

In this introduction to dbt tutorial, you'll learn about the core concepts of dbt and how it's used.

You probably know that data is a huge part of how the world runs now, including how businesses report on metrics and how they operate.

One of the difficult parts of working with data is communicating enough context and information to everyone in the organization so they understand the data they're looking at and whether it answers their questions.

That's where dbt comes in. dbt is a data transformation and documentation tool that helps data analysts, data engineers, and business stakeholders collaborate on data.

This introduction to dbt will walk you through: a short history of ELT, what dbt (data build tool) is, and dbt core concepts.

The core dbt concepts include: - Express transforms with a SQL select - Automatically build the DAG with ref(s) - Use tests to ensure model accuracy - Keep documentation accessible and easily updated - Use macros to write reusable SQL

Learn more about dbt at: https://getdbt.com https://twitter.com/getdbt

Learn more about dbt Labs (formerly Fishtown Analytics) at: https://www.getdbt.com/dbt-labs/about-us/ https://twitter.com/dbt_labs https://www.linkedin.com/company/dbtlabs

podcast_episode
by Mico Yuk (Data Storytelling Academy)
BI

Excited to share the final part of my three-episode series on BI data storytelling accelerator lessons learned. We'll be digging into the last but most exciting step in our BI Data Storytelling Mastery Framework, ‘What you Draw’. Useless visualizations are everywhere! This episode will give you seven things to avoid and ways to fix them to ensure you bring your A-game when it comes to visualizing your storyboard. Tune in for knowledge bombs galore!

[05:53] Never skip the mock-up stage: 90% of us make the mistake of getting directly into drawing without a mock-up. Always make sure a client signs off and you have some sample data before you start on the mock-up. [09:34] Doing the mock-up before getting sign-off on the storyboard and the analytics data dictionary: without sign-off, you should never go to the drawing board. [11:44] Starting from scratch all the time: nothing wastes more time than having to start from scratch every time. To save time, treat every project you build like an asset; every template your team builds should be repurposed as a template for the users. For full show notes and the links mentioned, visit: https://bibrainz.com/podcast/72 Enjoyed the show? Please leave us a review on iTunes.

Summary Building data products is complicated by the fact that there are so many different stakeholders with competing goals and priorities. It is also challenging because of the number of roles and capabilities that are necessary to go from idea to delivery. Different organizations have tried a multitude of organizational strategies to improve the success rate of these data teams, with varying levels of success. In this episode Jesse Anderson shares the lessons that he has learned while working with dozens of businesses across industries to determine the team structures and communication styles that have generated the best results. If you are struggling to deliver value from big data, or are just starting down the path of building the organizational capacity to turn raw information into valuable products, then this is a conversation that you don’t want to miss.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!

Are you bogged down by having to manually manage data access controls, repeatedly move and copy data, and create audit reports to prove compliance? How much time could you save if those tasks were automated across your cloud platforms? Immuta is an automated data governance solution that enables safe and easy data analytics in the cloud. Our comprehensive data-level security, auditing and de-identification features eliminate the need for time-consuming manual processes and our focus on data and compliance team collaboration empowers you to deliver quick and valuable data analytics on the most sensitive data to unlock the full potential of your cloud data platforms. Learn how we streamline and accelerate manual processes to help you derive real results from your data at dataengineeringpodcast.com/immuta.

Today’s episode of the Data Engineering Podcast is sponsored by Datadog, a SaaS-based monitoring and analytics platform for cloud-scale infrastructure, applications, logs, and more. Datadog uses machine-learning based algorithms to detect errors and anomalies across your entire stack, which reduces the time it takes to detect and address outages and helps promote collaboration between Data Engineering, Operations, and the rest of the company. Go to dataengineeringpodcast.com/datadog today to start your free 14-day trial. If you start a trial and install Datadog’s agent, Datadog will send you a free T-shirt.

Your host is Tobias Macey and today I’m interviewing Jesse Anderson about best practices for organizing and managing data teams.

Interview

Introduction How did you get involved in the area of data management? Can you start by giving an overview of how you view the mission and responsibilities of a data team?

What are the critical elements of a successful data team? Beyond the core pillars of data science, data engineering, and operations, what other specialized roles do you find helpful?

Summary The first stage of every good pipeline is data integration. With the increasing pace of change and the need for up-to-date analytics, the need to integrate that data in near real time is growing. With the improvements and increased variety of options for streaming data engines, and improved tools for change data capture, it is possible for data teams to make that goal a reality. However, despite all of the tools and managed distributions of those streaming engines, it is still a challenge to build a robust and reliable pipeline for streaming data integration, especially if you need to expose those capabilities to non-engineers. In this episode Ido Friedman, CTO of Equalum, explains how they have built a no-code platform to make integration of streaming data and change data capture feeds easier to manage. He discusses the challenges that are inherent in the current state of CDC technologies, how they have architected their system to integrate well with existing data platforms, and how to build an appropriate level of abstraction for such a complex problem domain. If you are struggling with streaming data integration and change data capture then this interview is definitely worth a listen.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows.
Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.