Is your company good at customer success and retention? Chances are that you could be better. For most businesses with a recurring revenue model, customer churn is a very costly affair. Whenever a customer leaves, you lose out on recurring revenue, forgo the opportunity of expansion (cross-sell) revenue and have to pay for another round of acquisition costs to cover the loss. In my personal experience, customer retention is both art and science. Machine learning and other data science techniques can be used to identify customers who are likely to churn, but it is equally important to craft meaningful and delightful interactions throughout the customer lifecycle. So, what’s required to become a lean, mean retention machine? In this episode of Leaders of Analytics, I speak to Sami Kaipa to learn the best practices of data-driven customer retention. Sami is an experienced technology executive, serial entrepreneur and start-up advisor. He is co-founder of Tingono, an AI-driven customer retention platform.

Listen to this episode as we discuss:

- Sami's journey as an entrepreneur and corporate technology executive
- The core elements of customer success and retention that every business should master
- A deep dive into the concepts of customer retention, expansion and NRR
- The economics of customer retention and expansion
- How data science and machine learning can help with retention, and much more.

Connect with Sami on LinkedIn: https://www.linkedin.com/in/samkaipa/
Tingono's blog: https://www.tingono.com/blog
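Since the episode digs into NRR (net revenue retention), here is a minimal sketch of how that metric is commonly computed. The function name and dollar figures are illustrative, not taken from the episode:

```python
# Net revenue retention (NRR) for a customer cohort over a period:
# the revenue kept and grown from existing customers, ignoring new sales.
# Function name and figures are hypothetical, for illustration only.
def net_revenue_retention(starting_mrr, expansion, contraction, churn):
    """Return NRR as a fraction of the cohort's starting recurring revenue."""
    return (starting_mrr + expansion - contraction - churn) / starting_mrr

# A cohort starting at $100k MRR gains $15k from upsells, loses $5k to
# downgrades and $8k to churned customers: it retains 102% of its revenue.
nrr = net_revenue_retention(100_000, 15_000, 5_000, 8_000)
```

An NRR above 1.0 means expansion revenue more than offsets churn and downgrades, which is why retention and expansion are discussed together in the episode.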
Nick Bunker, Economic Research Director for North America at the Indeed Hiring Lab, and Adam Ozimek, Chief Economist at EIG, join the podcast to provide a labor market outlook.

Full episode transcript

Follow Mark Zandi @MarkZandi, Cris deRitis @MiddleWayEcon, and Marisa DiNatale on LinkedIn for additional insight.
Questions or Comments, please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
What does it mean to "be technical"? What makes a great analytics engineer? How can individuals "develop technically", how can managers "foster technical growth", and how can companies "hire technical people"? It's crucial to understand the component skills that build into great analytics engineering outcomes.
As it turns out, it's not so different from how fashion designers go from prompt to runway look. Join Ashley Sherwood (HubSpot) as she breaks down the parallels between fashion design and analytics engineering work and how small daily design decisions can compound to a massive impact on data teams' abilities to grow their skills and serve stakeholders.
Check the slides here: https://docs.google.com/presentation/d/1HDzAzHhWy4q_cXASB1F3EfTWivWlijCRu5e-CIbaXqI/edit?usp=sharing
Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.
Data analytics has played a major role in Chelsea’s journey to becoming the seventh most valuable football club in the world. Chelsea has won six league titles, eight FA Cups, five League Cups, and two Champions League titles.
Today, we are going behind the scenes at Chelsea FC to see how they use data analytics to analyze matches, inform tactical decision-making, and drive matchday success in one of the world’s top football leagues, just in time for the 2022 FIFA World Cup in Qatar!
Federico Bettuzzi is a Data Scientist at Chelsea FC. As a specialist in match analytics, Federico works with Chelsea’s first team to inform tactical decision making during matches. Federico joins the show to break down how he gathers and synthesizes data, how they develop match analyses for tactical reviews, how managers prioritize data analytics differently, how to balance long-term and short-term projects, and much more.
There are many models for bridging business and technical teams. These models can be more centralized or decentralized in nature, depending on the culture of the organization and the nature of the business domain. Each requires a strong enterprise data team comprising multiple departments and roles. Published at: https://www.eckerson.com/articles/an-operating-model-for-data-analytics-part-iii-team-composition-and-dynamics
Summary

The majority of blog posts and presentations about data engineering and analytics assume that the consumers of those efforts are internal business users accessing an environment controlled by the business. In this episode Ian Schweer shares his experiences at Riot Games supporting player-focused features such as machine learning models and recommender systems that are deployed as part of the game binary. He explains the constraints that he and his team are faced with and the various challenges that they have overcome to build useful data products on top of a legacy platform where they don’t control the end-to-end systems.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show!

Atlan is the metadata hub for your data ecosystem. Instead of locking your metadata into a new silo, unleash its transformative potential with Atlan’s active metadata capabilities. Push information about data freshness and quality to your business intelligence, automatically scale up and down your warehouse based on usage patterns, and let the bots answer those questions in Slack so that the humans can focus on delivering real value. Go to dataengineeringpodcast.com/atlan today to learn more about how Atlan’s active metadata platform is helping pioneering data teams like Postman, Plaid, WeWork & Unilever achieve extraordinary things with metadata and escape the chaos.

The biggest challenge with modern data systems is understanding what data you have, where it is located, and who is using it. Select Star’s data discovery platform solves that out of the box, with an automated catalog that includes lineage from where the data originated, all the way to which dashboards rely on it and who is viewing them every day. Just connect it to your database/data warehouse/data lakehouse/whatever you’re using and let them do the rest. Go to dataengineeringpodcast.com/selectstar today to double the length of your free trial and get a swag package when you convert to a paid plan.

Data engineers don’t enjoy writing, maintaining, and modifying ETL pipelines all day, every day. Especially once they realize 90% of all major data sources like Google Analytics, Salesforce, Adwords, Facebook, Spreadsheets, etc., are already available as plug-and-play connectors with reliable, intuitive SaaS solutions. Hevo Data is a highly reliable and intuitive data pipeline platform used by data engineers from 40+ countries to set up and run low-latency ELT pipelines with zero maintenance. Boasting more than 150 out-of-the-box connectors that can be set up in minutes, Hevo also allows you to monitor and control your pipelines. You get: real-time data flow visibility, fail-safe mechanisms, and alerts if anything breaks; preload transformations and auto-schema mapping precisely control how data lands in your destination; models and workflows to transform data for analytics; and reverse-ETL capability to move the transformed data back to your business software to inspire timely action. All of this, plus its transparent pricing and 24×7 live support, makes it consistently voted by users as the Leader in the Data Pipeline category on review platforms like G2. Go to dataengineeringpodcast.com/hevodata and sign up for a free 14-day trial that also comes with 24×7 support.

Your host is Tobias Macey and today I’m interviewing Ian Schweer about building data products at Riot Games
Summary

The problems that are easiest to fix are the ones that you prevent from happening in the first place. Sifflet is a platform that brings your entire data stack into focus to improve the reliability of your data assets and empower collaboration across your teams. In this episode CEO and founder Salma Bakouk shares her views on the causes and impacts of "data entropy" and how you can tame it before it leads to failures.
Announcements
Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days or even weeks. By the time errors have made their way into production, it’s often too late and damage is done. Datafold built automated regression testing to help data and analytics engineers deal with data quality in their pull requests. Datafold shows how a change in SQL code affects your data, both on a statistical level and down to individual rows and values before it gets merged to production. No more shipping and praying, you can now know exactly what will change in your database! Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Visit dataengineeringpodcast.com/datafold today to book a demo with Datafold.

RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their extensive library of integrations enable you to automatically send data to hundreds of downstream tools. Sign up free at dataengineeringpodcast.com/rudder

Data teams are increasingly under pressure to deliver. According to a recent survey by Ascend.io, 95% in fact reported being at or over capacity. With 72% of data experts reporting demands on their team going up faster than they can hire, it’s no surprise they are increasingly turning to automation. In fact, while only 3.5% report having current investments in automation, 85% of data teams plan on investing in automation in the next 12 months. That’s where our friends at Ascend.io come in. The Ascend Data Automation Cloud provides a unified platform for data ingestion, transformation, orchestration, and observability. Ascend users love its declarative pipelines, powerful SDK, elegant UI, and extensible plug-in architecture, as well as its support for Python, SQL, Scala, and Java. Ascend automates workloads on Snowflake, Databricks, BigQuery, and open source Spark, and can be deployed in AWS, Azure, or GCP. Go to dataengineeringpodcast.com/ascend and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $5,000 when you become a customer.

Your host is Tobias Macey and today I’m interviewing Salma Bakouk about achieving data reliability and reducing entropy within your data stack with Sifflet
Interview
Introduction
How did you get involved in the area of data management?
Can you describe what Sifflet is and the st
Mark, Cris, and Marisa discuss the yield curve and give their latest recession odds. They also welcome back colleagues Gaurav Ganguly, Chris Lafakis, and Bernard Yaros of Moody's Analytics to examine the challenges of climate change and the impact on the U.S. economy.

Full episode transcript

Mark, Chris, and Bernard's paper: The Macroeconomic Cost of Climate Inaction

Follow Mark Zandi @MarkZandi, Cris deRitis @MiddleWayEcon, and Marisa DiNatale on LinkedIn for additional insight.
We talked about:
- Nikola’s background
- Making the first steps towards a transition to BI and Analytics Engineering
- Learning the skills necessary to transition to Analytics Engineering
- The in-between period – from Marketing to Analytics Engineering
- Nikola’s current responsibilities
- Understanding what a Data Model is
- Tools needed to work as an Analytics Engineer
- The Analytics Engineering role over time
- The importance of dbt for Analytics Engineers
- Where can one learn about data modeling theory?
- Going from Ancient Greek and Latin to understanding Data (Just-In-Time Learning)
- The importance of having domain knowledge to analytics engineering
- Suggestions for those wishing to transition into analytics engineering
- The importance of having a mentor when transitioning
- Finding a mentor
- Helpful newsletters and blogs
- Finding Nikola online
Links:
Nikola's LinkedIn account: https://www.linkedin.com/in/nikola-maksimovic-40188183/
ML Zoomcamp: https://github.com/alexeygrigorev/mlbookcamp-code/tree/master/course-zoomcamp
Join DataTalks.Club: https://datatalks.club/slack.html
Our events: https://datatalks.club/events.html
WARNING: This episode contains detailed discussion of data contracts. The modern data stack introduces challenges in terms of collaboration between data producers and consumers. How might we solve them to ultimately build trust in data quality? Chad Sanderson leads the data platform team at Convoy, a late-stage series-E freight technology startup. He manages everything from instrumentation and data ingestion to ETL, in addition to the metrics layer, experimentation software and ML. Prukalpa Sankar is a co-founder of Atlan, where she develops products that enable improved collaboration between diverse users like businesses, analysts, and engineers, creating higher efficiency and agility in data projects. For full show notes and to read 6+ years of back issues of the podcast's companion newsletter, head to https://roundup.getdbt.com. The Analytics Engineering Podcast is sponsored by dbt Labs.
In this episode, Jason Foster is joined by Antje Bustamante, the Director of Data and Analytics at Zoopla. They talk about data transformation at Zoopla, the role data plays in the real estate industry, how to combine data and business goals and data literacy. Antje also shares her professional journey, interesting data use cases in the property industry and how the attitude towards data has changed at Zoopla over the past couple of years.
In today’s episode, we’re talking to Lenley Hensarling, Chief Product Officer at Aerospike, Inc. Aerospike is a real-time data platform that allows users to act in real time across billions of transactions while reducing their server footprint.
We talk about:
- Lenley’s background and the problems Aerospike solves.
- The particular domains and industries that benefit from this kind of technology.
- How the cloud has impacted what Aerospike does.
- Why some people might choose on-premise over the cloud.
- Finding the balance between customer-centric and market-centric.
- Balancing product management with tasks like customer interaction and engineering.
Lenley Hensarling - https://www.linkedin.com/in/lenleyhensarling/ Aerospike - https://www.linkedin.com/company/aerospike-inc-/
This episode is brought to you by Qrvey
The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.
Qrvey, the modern no-code analytics solution for SaaS companies on AWS.
#saas #analytics #AWS #BI
Becoming a data-driven organization takes a major shift in mindset and culture, investments in technology and infrastructure, skills transformation, and clear evangelism of how data drives better decision-making.
With all of these levers to scale, many organizations get stuck early in their data transformation journey, not knowing what to prioritize and how. In this episode, Ganes Kesari joins the show to share the frameworks and processes that organizations can follow to become data-driven, measure their data maturity, and win stakeholder support across the organization.
Ganes is Co-Founder and Chief Decision Scientist at Gramener, which helps companies make data-driven decisions through powerful data stories and analytics. He is an expert in data, analytics, organizational strategy, and hands-on execution. Throughout his 20-year career, Ganes has become an internationally renowned speaker, has been published in Forbes and Entrepreneur, and has become a thought leader in data science.
Throughout the episode, we talk about how organizations can scale their data maturity, how to build an effective data science roadmap, how to successfully navigate the skills and people components of data maturity, and much more.
Summary

Building data products is an undertaking that has historically required substantial investments of time and talent. With the rise of cloud platforms and self-serve data technologies the barrier to entry is dropping. Shane Gibson co-founded AgileData to make analytics accessible to companies of all sizes. In this episode he explains the design of the platform and how it builds on agile development principles to help you focus on delivering value.
Announcements
Prefect is the modern Dataflow Automation platform for the modern data stack, empowering data practitioners to build, run and monitor robust pipelines at scale. Guided by the principle that the orchestrator shouldn’t get in your way, Prefect is the only tool of its kind to offer the flexibility to write code as workflows. Prefect specializes in gluing together the disparate pieces of a pipeline, and integrating with modern distributed compute libraries to bring power where you need it, when you need it. Trusted by thousands of organizations and supported by over 20,000 community members, Prefect powers over 100MM business critical tasks a month. For more information on Prefect, visit dataengineeringpodcast.com/prefect.

Your host is Tobias Macey and today I’m interviewing Shane Gibson about AgileData
Summary

CreditKarma builds data products that help consumers take advantage of their credit and financial capabilities. To make that possible they need a reliable data platform that empowers all of the organization’s stakeholders. In this episode Vishnu Venkataraman shares the journey that he and his team have taken to build and evolve their systems and improve the product offerings that they are able to support.
Announcements
Your host is Tobias Macey and today I’m interviewing Vishnu Venkataraman about building the data platform at CreditKarma and the forces that shaped the design
Interview
Introduction
How did you get involved in the area of data management?
Can you describe what CreditKarma is and the role
Mark and Cris break down the October CPI report and the latest on inflation. Colleagues Dan White and Emily Mandel of Moody's Analytics join the podcast to give a rundown of the midterm election results and the economic implications.

Full episode transcript

Follow Mark Zandi @MarkZandi, Cris deRitis @MiddleWayEcon, and Marisa DiNatale on LinkedIn for additional insight.
Leverage the full power of Bayesian analysis for competitive advantage

Bayesian methods can solve problems you can't reliably handle any other way. Building on your existing Excel analytics skills and experience, Microsoft Excel MVP Conrad Carlberg helps you make the most of Excel's Bayesian capabilities and move toward R to do even more. Step by step, with real-world examples, Carlberg shows you how to use Bayesian analytics to solve a wide array of real problems. Carlberg clarifies terminology that often bewilders analysts, provides downloadable Excel workbooks you can easily adapt to your own needs, and offers sample R code to take advantage of the rethinking package in R and its gateway to Stan. As you incorporate these Bayesian approaches into your analytical toolbox, you'll build a powerful competitive advantage for your organization, and for yourself.

- Explore key ideas and strategies that underlie Bayesian analysis
- Distinguish prior, likelihood, and posterior distributions, and compare algorithms for driving sampling inputs
- Use grid approximation to solve simple univariate problems, and understand its limits as parameters increase
- Perform complex simulations and regressions with quadratic approximation and Richard McElreath's quap function
- Manage text values as if they were numeric
- Learn today's gold-standard Bayesian sampling technique: Markov Chain Monte Carlo (MCMC)
- Use MCMC to optimize execution speed in high-complexity problems
- Discover when frequentist methods fail and Bayesian methods are essential, and when to use both in tandem ...
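As a taste of the grid-approximation technique the book covers, here is a minimal sketch in Python rather than Excel or R. The model and data (a beta-binomial with 6 successes in 9 trials and a flat prior) are a hypothetical example, not one of the book's workbooks:

```python
import numpy as np

# Grid approximation for a simple univariate problem: the posterior of a
# success probability p after observing 6 successes in 9 trials, under a
# flat prior. Hypothetical data, for illustration only.
grid = np.linspace(0, 1, 1001)            # candidate values of p
prior = np.ones_like(grid)                # flat prior over [0, 1]
likelihood = grid**6 * (1 - grid)**3      # binomial likelihood (up to a constant)
posterior = prior * likelihood
posterior /= posterior.sum()              # normalize so the grid sums to 1

# The grid posterior mean approximates the analytic Beta(7, 4) mean, 7/11.
posterior_mean = (grid * posterior).sum()
```

This works well with one parameter, but the grid grows combinatorially as parameters are added, which is why the book moves on to quadratic approximation and MCMC.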
Why is Google Analytics 4 the most modern data model available for digital marketing analytics? Rather than simply reporting what has happened, GA4's new cloud integrations enable more data activation, linking online and offline data across all your streams to provide end-to-end marketing data. This practical book prepares you for the future of digital marketing by demonstrating how GA4 supports these additional cloud integrations. Author Mark Edmondson, Google developer expert for Google Analytics and Google Cloud, provides a concise yet comprehensive overview of GA4 and its cloud integrations. Data, business, and marketing analysts will learn major facets of GA4's powerful new analytics model, with topics including data architecture and strategy, and data ingestion, storage, and modeling. You'll explore common data activation use cases and get the guidance you need to implement them.

You'll learn:

- How Google Cloud integrates with GA4
- The potential use cases that GA4 integrations can enable
- Skills and resources needed to create GA4 integrations
- How much GA4 data capture is necessary to enable use cases
- The process of designing dataflows from strategy through data storage, modeling, and activation
- How to adapt the use cases to fit your business needs
Datatopics is a podcast presented by Kevin Missoorten to talk about the fuzzy and misunderstood concepts in the world of data, analytics, and AI. In this second episode of a mini-series on AI Ethics, we zoom in on the role of regulation and legislation to ensure AI remains ethical, addressing complex questions such as:

- What is the role of legislation with regard to AI Ethics? Aren’t we going to make things more complex and slow down development?
- Is legislation not lagging behind? And how do we reconcile all the regional variations?
- The unexpected benefits of creating a “technical person” as a new form of “legal person”.

In this episode, Virginie, Tim, Rob Heyman and I tackle these and many more questions with the expert in the room, Jan de Bruyne!
Datatopics is brought to you by Dataroots.
Music: The Gentlemen - DivKid
The thumbnail is generated by Midjourney.
On today’s episode, we’re talking to Gautam Ijoor, President and CEO of Alpha Omega Integration, a company that creates new possibilities through intelligent end-to-end mission-focused government IT solutions.
We talk about:
- Gautam’s background and his entrepreneurial journey.
- How Alpha Omega works and the areas they focus on.
- How Gautam sees SaaS in relation to government.
- Are concerns about putting data in the cloud over, or is there still work to do?
- The potential for SaaS companies in the federal contracting space.
- The importance of ease of use in SaaS.
- The drawbacks of subscription services for governments.