talk-data.com

Topic: Software as a Service (SaaS)

Tags: cloud_computing, software_delivery, subscription

310 tagged activities

Activity Trend

Peak of 23 activities per quarter, 2020-Q1 through 2026-Q1

Activities

310 activities · Newest first

Summary

All software systems are in a constant state of evolution. This makes it impossible to select a truly future-proof technology stack for your data platform, so an eventual migration is inevitable. In this episode Gleb Mezhanskiy and Rob Goretsky share their experiences leading various data platform migrations and the hard-won lessons that they learned so that you don't have to.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. Introducing RudderStack Profiles: RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack. Modern data teams are using Hex to 10x their data impact. Hex combines a notebook-style UI with an interactive report builder, allowing data teams to both dive deep to find insights and then share their work in an easy-to-read format with the whole org. In Hex you can use SQL, Python, R, and no-code visualization together to explore, transform, and model data. Hex also has AI built directly into the workflow to help you generate, edit, explain, and document your code. The best data teams in the world, such as the ones at Notion, AngelList, and Anthropic, use Hex for ad hoc investigations, creating machine learning models, and building operational dashboards for the rest of their company. Hex makes it easy for data analysts and data scientists to collaborate and produce work that has an impact. Make your data team unstoppable with Hex. Sign up today at dataengineeringpodcast.com/hex to get a 30-day free trial for your team! Your host is Tobias Macey, and today I'm interviewing Gleb Mezhanskiy and Rob Goretsky about when and how to think about migrating your data stack.

Interview

Introduction How did you get involved in the area of data management? A migration can be anything from a minor task to a major undertaking. Can you start by describing what constitutes a migration for the purposes of this conversation? Is it possible to completely avoid having to invest in a migration? What are the signals that point to the need for a migration?

What are some of the sources of cost that need to be accounted for when considering a migration? (both in terms of doing one, and the costs of not doing one) What are some signals that a migration is not the right solution for a perceived problem?

Once the decision has been made that a migration is necessary, what are the questions that the team should be asking to determine the technologies to move to and the sequencing of execution? What are the preceding tasks that should be completed before starting the migration to ensure there is no breakage downstream of the changing component(s)? What are some of the ways that a migration effort might fail? What are the major pitfalls that teams need to be aware of as they work through a data platform migration? What are the opportunities for automation during the migration process? What are the most interesting, innovative, or unexpected ways that you have seen teams approach a platform migration? What are the most interesting, unexpected, or challenging lessons that you have learned while working on data platform migrations? What are some ways that the technologies and patterns that we use can be evolved to reduce the cost/impact/need for migrations?
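One of the automation opportunities discussed in the episode is cross-database data diffing, which tools like data-diff implement by comparing hashed segments of each table across the old and new warehouses. The following is a minimal sketch of the underlying parity check only, using in-memory SQLite as a stand-in for the two warehouses; the table and column names are invented for the example.

```python
import sqlite3

def table_checksum(conn, table, columns):
    """Return (row_count, per-column totals) as a cheap parity signature.

    A real migration diff tool segments the keyspace and compares hashes;
    summing column values is a simplified stand-in for that idea.
    """
    cols = ", ".join(f"TOTAL({c})" for c in columns)
    return conn.execute(f"SELECT COUNT(*), {cols} FROM {table}").fetchone()

def tables_match(old_conn, new_conn, table, columns):
    """True when both databases report identical signatures for the table."""
    return (table_checksum(old_conn, table, columns)
            == table_checksum(new_conn, table, columns))

# Demo: two in-memory databases stand in for the legacy and new warehouses.
old = sqlite3.connect(":memory:")
new = sqlite3.connect(":memory:")
for conn in (old, new):
    conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
old.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 20.5)])
new.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 20.5)])

print(tables_match(old, new, "orders", ["id", "amount"]))  # True

# A drifted value in the new warehouse is caught by the check.
new.execute("UPDATE orders SET amount = 21.0 WHERE id = 2")
print(tables_match(old, new, "orders", ["id", "amount"]))  # False
```

In practice the same check would run continuously during the dual-write or backfill phase, so discrepancies surface before downstream consumers are cut over.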

Contact Info

Gleb

LinkedIn @glebmm on Twitter

Rob

LinkedIn RobGoretsky on GitHub

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

Datafold

Podcast Episode

Informatica Airflow Snowflake

Podcast Episode

Redshift Eventbrite Teradata BigQuery Trino EMR (Elastic MapReduce) Shadow IT

Podcast Episode

Mode Analytics Looker Sunk Cost Fallacy data-diff

Podcast Episode

SQLGlot Dagster dbt

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA Sponsored By: Hex: Hex Tech Logo

Hex is a collaborative workspace for data science and analytics. A single place for teams to explore, transform, and visualize data into beautiful interactive reports. Use SQL, Python, R, no-code and AI to find and share insights across your organization. Empower everyone in an organization to make an impact with data. Sign up today at [dataengineeringpodcast.com/hex](https://www.dataengineeringpodcast.com/hex) and get 30 days free!

Rudderstack: Rudderstack

Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack

Support Data Engineering Podcast

Daniel Le is the CFO at dbt Labs where he has built multiple teams. He is also the former head of FP&A and operations at Zoom, and he helped scale FP&A as the former finance director at Okta.  In this conversation with Julia, Daniel shares his view as CFO on the challenges SaaS companies face and the importance of finance teams creating a holistic view of their business. Daniel gives advice to data leaders about how they can automate business processes with dbt Cloud and use self-service analytics to automate revenue recognition, generate consistent headcount analytics, and more to impact their organization. Read more about Daniel's story here. For full show notes and to read 6+ years of back issues of the podcast's companion newsletter, head to https://roundup.getdbt.com.  The Analytics Engineering Podcast is sponsored by dbt Labs.

Sponsored by: Avanade | Accelerating Adoption of Modern Analytics and Governance at Scale

To unlock all the competitive advantage Databricks offers your organization, you might need to update your strategy and methodology for the platform. With more than 1,000 Databricks projects completed globally in the last 18 months, we will share our insights on the best building blocks to target as you search for efficiency and competitive advantage.

The building blocks supporting this include enterprise metadata and data management services, a data management foundation, and data services and products that enable business units to fully use their data and analytics at scale.

In this session, Avanade data leaders will highlight how Databricks’ modern data stack fits the Azure PaaS and SaaS ecosystem (such as Microsoft Fabric), how Unity Catalog metadata supports automated data operations scenarios, and how we are helping clients measure the business impact and value of modern analytics and governance.

Talk by: Alan Grogan and Timur Mehmedbasic

Here’s more to explore:
State of Data + AI Report: https://dbricks.co/44i2HBp
Databricks named a Leader in the 2022 Gartner® Magic Quadrant™ for Cloud DBMS: https://dbricks.co/3phw20d

Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc

Sponsored: Matillion - OurFamilyWizard Moves and Transforms Data for Databricks Delta Lake with Ease

OurFamilyWizard helps families living separately thrive, empowering parents with needed tools after divorce or separation. Migrating to a modern data stack built on a Databricks Delta Lake seemed like the obvious choice for OurFamilyWizard to start integrating 20 years of on-prem Oracle data with event tracking and SaaS cloud data, but they needed tools to do it. OurFamilyWizard turned to Matillion, a powerful and intuitive solution, to quickly load, combine, and transform source data into reporting tables and data marts, and empower them to turn raw data into information the organization can use to make decisions.

In this session, Beth Mattson, OurFamilyWizard Senior Data Engineer, will detail how Matillion helped OurFamilyWizard migrate their data to Databricks fast and provided end-to-end ETL capabilities. In addition, Jamie Baker, Matillion Director of Product Management, will give a brief demo and discuss the Matillion and Databricks partnership and what is on the horizon.

Talk by: Jamie Baker and Beth Mattson


Building Apps on the Lakehouse with Databricks SQL

BI applications are undoubtedly one of the major consumers of a data warehouse. Nevertheless, the prospect of accessing data using standard SQL is appealing to many more stakeholders than just the data analysts. We’ve heard from customers that they experience an increasing demand to provide access to data in their lakehouse platforms from external applications beyond BI, such as e-commerce platforms, CRM systems, SaaS applications, or custom data applications developed in-house. These applications require an “always on” experience, which makes Databricks SQL Serverless a great fit.

In this session, we give an overview of the approaches available to application developers to connect to Databricks SQL and create modern data applications tailored to the needs of users across an entire organization. We discuss when to choose one of the Databricks native client libraries for languages such as Python, Go, or Node.js, and when to use the SQL Statement Execution API, the newest addition to the toolset. We also explain when ODBC and JDBC might not be the best for the task and when they are your best friends. Live demos are included.
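To make the SQL Statement Execution API concrete, here is a minimal sketch of how a client might construct the request, using only the standard library. It assumes the `/api/2.0/sql/statements/` endpoint and the `statement`, `warehouse_id`, and `wait_timeout` fields as documented at the time of writing; the workspace host, token, and warehouse ID below are placeholders, and the request is built but not sent.

```python
import json
from urllib.request import Request

def build_statement_request(host, token, warehouse_id, sql):
    """Build a POST request against the Databricks SQL Statement Execution API.

    Field names follow the public API reference; check the current docs
    before relying on them.
    """
    payload = {
        "statement": sql,
        "warehouse_id": warehouse_id,
        "wait_timeout": "30s",  # wait synchronously for small result sets
    }
    return Request(
        url=f"https://{host}/api/2.0/sql/statements/",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Hypothetical workspace and warehouse identifiers for illustration only.
req = build_statement_request(
    "example-workspace.cloud.databricks.com",
    "dapi-REDACTED",
    "warehouse123",
    "SELECT 1",
)
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` (or an HTTP client of your choice) would return a statement ID and, for short queries, the inline result; larger results are fetched in chunks.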

Talk by: Adriana Ispas and Chris Stevens


On today’s episode, we’re joined by Jan Arendtsz, CEO & Founder, Celigo, a leading integration platform as a service (iPaaS) company.

We talk about:
Increasing efficiency with workflow automations
Powerful use cases for iPaaS
Using AI to make better SaaS products that help users be more productive
Empowering business users to automate their underlying business processes

Summary

Real-time data processing has steadily been gaining adoption due to advances in the accessibility of the technologies involved. Despite that, it is still a complex set of capabilities. To bring streaming data within reach of application engineers, Matteo Pelati helped to create Dozer. In this episode he explains how investing in high-performance, operationally simplified streaming with a familiar API can yield significant benefits for software and data teams together.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. Introducing RudderStack Profiles: RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack. Modern data teams are using Hex to 10x their data impact. Hex combines a notebook-style UI with an interactive report builder, allowing data teams to both dive deep to find insights and then share their work in an easy-to-read format with the whole org. In Hex you can use SQL, Python, R, and no-code visualization together to explore, transform, and model data. Hex also has AI built directly into the workflow to help you generate, edit, explain, and document your code. The best data teams in the world, such as the ones at Notion, AngelList, and Anthropic, use Hex for ad hoc investigations, creating machine learning models, and building operational dashboards for the rest of their company. Hex makes it easy for data analysts and data scientists to collaborate and produce work that has an impact. Make your data team unstoppable with Hex. Sign up today at dataengineeringpodcast.com/hex to get a 30-day free trial for your team! Your host is Tobias Macey, and today I'm interviewing Matteo Pelati about Dozer, an open source engine that includes data ingestion, transformation, and API generation for real-time sources.

Interview

Introduction How did you get involved in the area of data management? Can you describe what Dozer is and the story behind it?

What was your decision process for building Dozer as open source?

As you note in the documentation, Dozer has overlap with a number of technologies that are aimed at different use cases. What was missing from each of them and the center of their Venn diagram that prompted you to build Dozer? In addition to working in an interesting technological cross-section, you are also targeting a disparate group of personas. Who are you building Dozer for and what were the motivations for that vision?

What are the different use cases that you are focused on supporting? What are the features of Dozer that enable engineers to address those uses, and what makes it preferable to existing alternative approaches?

Can you describe how Dozer is implemented?

How have the design and goals of the platform changed since you first started working on it? What are the architectural "-ilities" that you are trying to optimize for?

What is involved in getting Dozer deployed and integrated into an existing application/data infrastructure? How can teams who are using Dozer extend/integrate with Dozer?

What does the development/deployment workflow look like for teams who are building on top of Dozer?

What is your governance model for Dozer and balancing the open source project against your business goals? What are the most interesting, innovative, or unexpected ways that you have seen Dozer used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on Dozer? When is Dozer the wrong choice? What do you have planned for the future of Dozer?

Contact Info

LinkedIn @pelatimtt on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

On today’s episode, we’re joined by Jason Radisson, CEO and founder at Movo, the AI-powered HR platform that automates the expensive, manual and time-consuming process of acquiring, engaging, and upskilling a high-volume frontline workforce.  We talk about:

How AI can help with workplace flexibility in the gig economy
Reducing HR’s paperwork burden, so they can focus on meaningful interactions with candidates
The implications of AI making software engineers 10X more productive
The coming Cambrian explosion of SaaS companies... and what curbs enthusiasm

Summary

Data has been one of the most substantial drivers of business and economic value for the past few decades. Bob Muglia has had a front-row seat to many of the major shifts driven by technology over his career. In his recent book "Datapreneurs" he reflects on the people and businesses that he has known and worked with and how they relied on data to deliver valuable services and drive meaningful change.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. Introducing RudderStack Profiles: RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack. Your host is Tobias Macey, and today I'm interviewing Bob Muglia about his recent book on the idea of "Datapreneurs" and the role of data in the modern economy.

Interview

Introduction How did you get involved in the area of data management? Can you describe what your concept of a "Datapreneur" is?

How is this distinct from the common idea of an entrepreneur?

What do you see as the key inflection points in data technologies and their impacts on business capabilities over the past ~30 years? In your role as the CEO of Snowflake, you had a front-row seat for the rise of the "modern data stack". What do you see as the main positive and negative impacts of that paradigm?

What are the key issues that are yet to be solved in that ecosystem?

For technologists who are thinking about launching new ventures, what are the key pieces of advice that you would like to share? What do you see as the short/medium/long-term impact of AI on the technical, business, and societal arenas? What are the most interesting, innovative, or unexpected ways that you have seen business leaders use data to drive their vision? What are the most interesting, unexpected, or challenging lessons that you have learned while working on the Datapreneurs book? What are your key predictions for the future impact of data on the technical/economic/business landscapes?

Contact Info

LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

Datapreneurs Book SQL Server Snowflake Z80 Processor Navigational Database System R Redshift Microsoft Fabric Databricks Looker Fivetran

Podcast Episode

Databricks Unity Catalog RelationalAI 6th Normal Form Pinecone Vector DB

Podcast Episode

Perplexity AI

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA Sponsored By: Rudderstack: Rudderstack

Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack

Support Data Engineering Podcast

Summary

For business analytics the way that you model the data in your warehouse has a lasting impact on what types of questions can be answered quickly and easily. The major strategies in use today were created decades ago when the software and hardware for warehouse databases were far more constrained. In this episode Maxime Beauchemin of Airflow and Superset fame shares his vision for the entity-centric data model and how you can incorporate it into your own warehouse design.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. Introducing RudderStack Profiles: RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack. Your host is Tobias Macey, and today I'm interviewing Max Beauchemin about the concept of entity-centric data modeling for analytical use cases.

Interview

Introduction How did you get involved in the area of data management? Can you describe what entity-centric modeling (ECM) is and the story behind it?

How does it compare to dimensional modeling strategies? What are some of the other competing methods? How does it compare to the activity schema?

What impact does this have on ML teams? (e.g. feature engineering)

What role does the tooling of a team have in the ways that they end up thinking about modeling? (e.g. dbt vs. informatica vs. ETL scripts, etc.)

What is the impact of the underlying compute engine on the modeling strategies used?

What are some examples of data sources or problem domains for which this approach is well suited?

What are some cases where entity-centric modeling techniques might be counterproductive?

What are the ways that the benefits of ECM manifest in use cases that are down-stream from the warehouse?

What are some concrete tactical steps that teams should be thinking about to implement a workable domain model using entity-centric principles?

How does this work across business domains within a given organization (especially at "enterprise" scale)?

What are the most interesting, innovative, or unexpected ways that you have seen ECM used?

What are the most interesting, unexpected, or challenging lessons that you have learned while working on ECM?

When is ECM the wrong choice?

What are your predictions for the future direction/adoption of ECM or other modeling techniques?
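The tactical questions above hinge on what an entity-centric table actually looks like. As a toy illustration (all table and field names invented for the example), the idea is to fold raw events into one wide row per entity with pre-aggregated, windowed metrics, so downstream consumers read a single table instead of re-joining fact and dimension tables at query time:

```python
from collections import defaultdict
from datetime import date

# Raw order events, as they might land in a warehouse staging table.
events = [
    {"customer_id": "c1", "order_date": date(2023, 6, 1), "amount": 120.0},
    {"customer_id": "c1", "order_date": date(2023, 6, 20), "amount": 80.0},
    {"customer_id": "c2", "order_date": date(2023, 5, 15), "amount": 40.0},
]

def build_customer_entities(events, as_of):
    """Fold events into one wide row per customer.

    An entity-centric model pre-aggregates windowed metrics onto the
    entity row, instead of leaving those joins and aggregations to every
    dashboard or feature pipeline downstream.
    """
    entities = defaultdict(lambda: {
        "order_count": 0,
        "lifetime_value": 0.0,
        "orders_last_30d": 0,
    })
    for e in events:
        row = entities[e["customer_id"]]
        row["order_count"] += 1
        row["lifetime_value"] += e["amount"]
        if (as_of - e["order_date"]).days <= 30:
            row["orders_last_30d"] += 1
    return dict(entities)

customers = build_customer_entities(events, as_of=date(2023, 6, 25))
print(customers["c1"])
```

In a warehouse this aggregation would be a scheduled SQL model rather than Python, but the shape of the output (one row per entity, metrics as columns) is the same.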

Contact Info

mistercrunch on GitHub LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

Entity-Centric Modeling Blog Post Max's Previous Appearances

Defining Data Engineering with Maxime Beauchemin Self Service Data Exploration And Dashboarding With Superset Exploring The Evolving Role Of Data Engineers Alumni Of AirBnB's Early Years Reflect On What They Learned About Building Data Driven Organizations

Apache Airflow Apache Superset Preset Ubisoft Ralph Kimball The Rise Of The Data Engineer The Downfall Of The Data Engineer The Rise Of The Data Scientist Dimensional Data Modeling Star Schema Database

On today’s episode, we’re joined by Seena Mortazavi, CEO, Chronus, the leading enterprise mentoring platform.

We talk about:
Differences between being VC-backed versus bootstrapping for SaaS companies
The best time to bring in an investor
Focusing on lead generation versus product in early-stage start-ups
Managing a global, dispersed team

Summary

Feature engineering is a crucial aspect of the machine learning workflow. To make that possible, there are a number of technical and procedural capabilities that must be in place first. In this episode Razi Raziuddin shares how data engineering teams can support the machine learning workflow through the development and support of systems that empower data scientists and ML engineers to build and maintain their own features.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. Introducing RudderStack Profiles: RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack. Your host is Tobias Macey, and today I'm interviewing Razi Raziuddin about how data engineers can empower data scientists to develop and deploy better ML models through feature engineering.

Interview

Introduction How did you get involved in the area of data management? What is feature engineering, and why/to whom does it matter?

A topic that commonly comes up in relation to feature engineering is the importance of a feature store. What are the tradeoffs for that to be a separate infrastructure/architecture component?

What is the overall lifecycle of a feature, from definition to deployment and maintenance?

How is this distinct from other forms of data pipeline development and delivery? Who are the participants in that workflow?

What are the sharp edges/roadblocks that typically manifest in that lifecycle? What are the interfaces that are needed for data scientists/ML engineers to be able to self-serve their feature management?

What is the role of the data engineer in supporting those interfaces? What are the communication/collaboration channels that are necessary to make the overall process a success?

From an implementation/architecture perspective, what are the patterns that you have seen teams build around for feature development/serving? What are the most interesting, innovative, or unexpected ways that you have seen feature platforms used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on feature engineering? What are the resources that you find most helpful in understanding and designing feature platforms?
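The feature lifecycle discussed above (define, materialize, serve) can be sketched as a toy registry; every name here is invented for illustration, and real feature platforms (Feast, Feathr, FeatureByte) add versioning, point-in-time correctness, and separate online/offline stores on top of this basic loop:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Feature:
    name: str
    entity: str          # the entity the feature is keyed on, e.g. "customer"
    transform: Callable  # raw records for one entity key -> feature value

class FeatureRegistry:
    """Toy registry showing the define/materialize/serve lifecycle."""

    def __init__(self):
        self._features = {}
        self._store = {}  # (feature_name, entity_key) -> computed value

    def register(self, feature):
        """Define: data scientists declare a feature once, centrally."""
        self._features[feature.name] = feature

    def materialize(self, name, records_by_key):
        """Materialize: batch-compute the feature for every entity key."""
        feat = self._features[name]
        for key, records in records_by_key.items():
            self._store[(name, key)] = feat.transform(records)

    def get(self, name, key):
        """Serve: look up the precomputed value at inference time."""
        return self._store[(name, key)]

registry = FeatureRegistry()
registry.register(Feature(
    "avg_order_value", "customer",
    lambda rs: sum(r["amount"] for r in rs) / len(rs),
))
registry.materialize(
    "avg_order_value",
    {"c1": [{"amount": 10.0}, {"amount": 30.0}]},
)
print(registry.get("avg_order_value", "c1"))  # 20.0
```

The data engineer's role in this picture is the `materialize` step: keeping the pipelines that feed `records_by_key` reliable so that data scientists can own `register` and consumers can trust `get`.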

Contact Info

LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

FeatureByte DataRobot Feature Store Feast Feature Store Feathr Kaggle Yann LeCun

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA Sponsored By: Rudderstack: Rudderstack

Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack

We talked about:

Simon's background
What MLOps is and what it isn't
Skills needed to build an ML platform that serves 100s of models
Ranking the importance of skills
The point where you should think about building an ML platform
The importance of processes in ML platforms
Weighing your options with SaaS platforms
The exploratory setup, experiment tracking, and model registry
What comes after deployment?
Stitching tools together to create an ML platform
Keeping data governance in mind when building a platform
What comes first – the model or the platform?
Do MLOps engineers need to have deep knowledge of how models work?
Is API design important for MLOps?
Simon's recommendations for furthering MLOps knowledge

Links:

LinkedIn: https://www.linkedin.com/in/simonstiebellehner/ GitHub: https://github.com/stiebels Medium: https://medium.com/@sistel

Free MLOps course: https://github.com/DataTalksClub/mlops-zoomcamp

Join DataTalks.Club: https://datatalks.club/slack.html

Our events: https://datatalks.club/events.html

Today I’m continuing my conversation with Nadiem von Heydebrand, CEO of Mindfuel. In the conclusion of this special 2-part episode, Nadiem and I discuss the role of a Data Product Manager in depth. Nadiem reveals which fields data product managers are currently coming from, and how a new data product manager with a non-technical background can set themselves up for success in this new role. He also walks through his portfolio approach to data product management, and how to prioritize use cases when taking on a data product management role. Toward the end, Nadiem also shares personal examples of how he’s employed these strategies, why he feels it’s so important for engineers to be able to see and understand the impact of their work, and best practices around developing a data product team. 

Highlights / Skip to:

Brian introduces Nadiem and gives context for why the conversation with Nadiem led to a two-part episode (00:35)
Nadiem summarizes his thoughts on data product management and adds context on which fields he sees data product managers currently coming from (01:46)
Nadiem's take on whether job listings for data product manager roles still have too many technical requirements (04:27)
Why some non-technical people fail when they transition to a data product manager role and the ways Nadiem feels they can bolster their chances of success (07:09)
Brian and Nadiem talk about their views on functional data product team models and the process for developing a data product as a team (10:11)
When Nadiem feels it makes sense to hire a data product manager and adopt a portfolio view of your data products (16:22)
Nadiem's view on how to prioritize projects as a new data product manager (19:48)
Nadiem shares a story of when he took on an interim role as a head of data and how he employed the portfolio strategies he recommends (24:54)
How Nadiem evaluates perceived usability of a data product when picking use cases (27:28)
Nadiem explains why understanding go-to-market strategy is so critical as a data product manager (30:00)
Brian and Nadiem discuss the importance of today's engineering teams understanding the value and impact of their work (32:09)
How Nadiem and his team came up with the idea to develop a SaaS product for data product managers (34:40)

Quotes from Today’s Episode “So, data product management [...] is a combination of different capabilities [...]  [including] product management, design, data science, and machine learning. We covered this in viability, desirability, feasibility, and datability. So, these are four dimensions [that] you combine [...] together to become a data product manager.” — Nadiem von Heydebrand (02:34)

“There is no education for data product management today, there’s no university degree. ... So, there’s nobody out there—from my perspective—who really has all the four dimensions from day one. It’s more like an evolution: you’re coming from one of the [parallel business] domains or from one of the [parallel business] fields and then you extend your skill set over time.” — Nadiem von Heydebrand (03:04)

“If a product manager has very good communication skills and is able to break down the needs in a proper way or in a good understandable way to its tech lead, or its engineering lead or data science lead, then I think it works out super well. If this bridge is missing, then it becomes a little bit tricky because then the distance between the product manager and the development team is too far.” – Nadiem von Heydebrand (09:10)

“I think every data leader out there has an Excel spreadsheet or a list of prioritized use cases or the most relevant use cases for the business strategy… You can think about this list as a portfolio. You know, some of these use cases are super valuable; some of these use cases maybe will not work out, and you have to identify those which are bringing real return on investment when you put effort in there.” – Nadiem von Heydebrand (19:01)

“I’m not a magician for data product management. I just focused on a very strategic view on my portfolio and tried to identify those cases and those data products where I can believe I can easily develop them, I have a high degree of adoption with my lines of business, and I can truly measure the added revenue and the impact.” – Nadiem von Heydebrand (26:31)

“As a true data product manager, from my point of view, you are someone who is empathetic for the lines of businesses, to understand what their underlying needs and what the problems are. At the same time, you are a business person. You try to optimize the portfolio for your own needs, because you have business goals coming from your leadership team, from your head of data, or even from the person above, the CTO, CIO, even CEO. So, you want to make sure that your value contribution is always transparent, and visible, measurable, tangible.” – Nadiem von Heydebrand (29:20)

“If we look into classical product management, I mean, the product manager has to understand how to market and how to go to the market. And it’s exactly the same situation with data product managers within your organization. You are as successful as your product performs in the market. This is how you measure yourself as a data product manager. This is how you define success for yourself.” – Nadiem von Heydebrand (30:58)

Links
Mindfuel: https://mindfuel.ai/
LinkedIn: https://www.linkedin.com/in/nadiemvh/
Delight Software - the SaaS tool for data product managers to manage their portfolio of data products: https://delight.mindfuel.ai

Want to be featured as a guest on Making Data Simple? Reach out to us at [[email protected]] and tell us why you should be next.

Abstract Making Data Simple Podcast is hosted by Al Martin, VP, IBM Expert Services Delivery, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun. This week on Making Data Simple, we have Benn Stancil, Chief Analytics Officer + Founder @ Mode. Benn is an accomplished data analyst with deep expertise in collaborative Business Intelligence and Interactive Data Science. Benn is Co-founder, President, and Chief Analytics Officer of Mode, an award-winning SaaS company that combines the best elements of Business Intelligence (ABI), Data Science (DS) and Machine Learning (ML) to empower data teams to answer impactful questions and collaborate on analysis across a range of business functions. Under Benn’s leadership, the Mode platform has evolved to enable data teams to explore, visualize, analyze and share data in a powerful end-to-end workflow. Prior to founding Mode, Benn served in senior Analytics positions at Microsoft and Yammer, and worked as a researcher for the International Economics Program at the Carnegie Endowment for International Peace. Benn also served as an Undergraduate Research Fellow at Wake Forest University, where he received his B.S. in Mathematics and Economics. Benn believes in fostering a shared sense of humility and gratitude.

Show Notes
1:22 – Benn’s history
7:09 – Tell us how you got to where you are today
9:14 – Tell us about Mode
12:08 – What is your definition of the Chief Analytics Officer?
21:53 – Why do we need another BI tool?
24:09 – What’s your secret sauce?
27:48 – Where did the name Mode come from?
28:41 – How do we use Mode?
31:08 – What is your go-to-market strategy?
32:38 – Any client references?
34:58 – “The missing piece in the modern data stack” – tell us about this

Mode
Email: [email protected] [email protected]
Twitter: benn stancil

Connect with the Team
Producer Kate Brown - LinkedIn. Host Al Martin - LinkedIn and Twitter.

The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

Power BI Machine Learning and OpenAI

Microsoft Power BI Machine Learning and OpenAI offers a comprehensive exploration into advanced data analytics and artificial intelligence using Microsoft Power BI. Through hands-on, workshop-style examples, readers will discover the integration of machine learning models and OpenAI features to enhance business intelligence. This book provides practical examples, real-world scenarios, and step-by-step guidance.

What this Book will help me do
Learn to apply machine learning capabilities within Power BI to create predictive analytics
Understand how to integrate OpenAI services to build enhanced analytics workflows
Gain hands-on experience in using R and Python for advanced data visualization in Power BI
Master the skills needed to build and deploy SaaS auto ML models within Power BI
Leverage Power BI's AI visuals and features to elevate data storytelling

Author(s)
Greg Beaumont, an expert in data science and business intelligence, brings years of experience in Power BI and analytics to this book. With a focus on practical applications, Greg empowers readers to harness the power of AI and machine learning to elevate their data solutions. As a consultant and trainer, he shares his deep knowledge to help readers unlock the full potential of their tools.

Who is it for?
This book is ideal for data analysts, BI professionals, and data scientists who aim to integrate machine learning and OpenAI into their workflows. If you're familiar with Power BI's fundamentals and are eager to explore its advanced capabilities, this guide is tailored for you. Perfect for professionals looking to elevate their analytics to a new level, combining data science concepts with Power BI's features.

Building a better world with AI, one architectural drawing at a time | mbue

ABOUT THE TALK: Mbue uses advanced computer vision and NLP technologies to read and understand architectural and technical drawings, catch flaws and other mistakes that cause delays and costly fixes, and ultimately automate the quality control process. Learn how they approach this complex problem, what they are doing to solve it, and where we're going next.

ABOUT THE SPEAKERS: Jean-Pierre Trou is the CEO and co-founder of mbue, a SaaS AI-first company focused on saving time and money and reducing liability for Architecture, Engineering and Construction (AEC) companies with automated quality control tools. mbue's web-based application utilizes Artificial Intelligence to instantly review technical drawings. Think “autocorrect” for construction documents. Jean-Pierre is also the Founding Principal of Runa Workshop, an architecture and interior design firm, and Founding Partner at Vaast, a real estate company, both based in Austin, Texas.

Ron Green is a serial tech entrepreneur and expert in artificial intelligence. Ron is co-founder and CTO of mbue and also co-founded KUNGFU.AI, an AI consultancy that helps companies build and deploy AI and machine learning solutions. Prior to KUNGFU.AI, Ron was CEO and founder of Thrive Technologies (acquired by CLOUD), ran software development at Ziften Technologies, Powered (acquired by Dachis Group), and Visible Genetics (acquired by Bayer).

ABOUT DATA COUNCIL: Data Council (https://www.datacouncil.ai/) is a community and conference series that provides data professionals with the learning and networking opportunities they need to grow their careers.

Make sure to subscribe to our channel for the most up-to-date talks from technical professionals on data related topics including data infrastructure, data engineering, ML systems, analytics and AI from top startups and tech companies.

FOLLOW DATA COUNCIL: Twitter: https://twitter.com/DataCouncilAI LinkedIn: https://www.linkedin.com/company/datacouncil-ai/

Latency is the Mind Killer. It's Time to Reimagine Data Interactions | Hunch

ABOUT THE TALK: Billions spent on data have one goal: better decisions, faster. Yet, with most tools echoing Tableau and Excel, are we really unlocking data's potential? What might a more productive—and playful—data experience look like?

Hunch is crafting an ambitious user interface for data that's more visual, spatial, and fluid—a shared canvas for both analysts and business users to make faster decisions together. Tune in to the discussion about why it’s time to reimagine how we engage with data, and Hunch's journey exploring the ideas that led them to now and why some work better than others.

ABOUT THE SPEAKER: David Wilson has worked with large datasets for companies across Africa and the Middle East.

David co-founded Cape Networks, combining his data and design experience to craft a SaaS tool that helped non-experts make sense of complex networking data. Aruba Networks acquired Cape to be the face of their cloud software portfolio.

Drawing from his experience at Cape, David co-founded Hunch—a data tool he’s dreamed of since his consulting days.


The End of History? Convergence of Batch and Realtime Data Technologies | Ternary Data

ABOUT THE TALK: Hybridization approaches such as Lambda and Kappa architecture are powerful tools for combining the most useful characteristics of batch and real time systems. Implementation and management of these architectures is not for the faint of heart, but the last several years have seen a wave of SaaS platforms and managed services that deliver hybrid capabilities with a greatly reduced operational burden. This talk details the anticipated impact of these hybrid technologies on future data stacks, and touches on the mythical “one database to rule them all.”
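To make the Lambda-architecture idea the talk refers to concrete, here is a minimal, hypothetical Python sketch of the serving-layer step: a query merges a complete-but-stale batch view with a fresh-but-partial real-time view. All names and numbers are illustrative, not from the talk.

```python
# Hypothetical sketch of a Lambda-architecture serving layer:
# the batch view holds per-key counts recomputed periodically,
# while the real-time (speed) view holds deltas for events that
# arrived after the last batch run. A query merges the two.

def merge_views(batch_view: dict, realtime_view: dict) -> dict:
    """Combine per-key counts from the batch and speed layers."""
    merged = dict(batch_view)
    for key, delta in realtime_view.items():
        merged[key] = merged.get(key, 0) + delta
    return merged

batch = {"page_a": 1000, "page_b": 250}   # recomputed nightly
speed = {"page_a": 12, "page_c": 3}       # events since last batch run
print(merge_views(batch, speed))
# → {'page_a': 1012, 'page_b': 250, 'page_c': 3}
```

The operational burden the talk mentions comes largely from maintaining this merge logic (and the duplicated processing code behind it) across two pipelines, which is exactly what the hybrid SaaS platforms aim to absorb.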

ABOUT THE SPEAKER: Matt Housley holds a Ph.D. in mathematics and is co-author of the bestselling O’Reilly book Fundamentals of Data Engineering.


On today’s episode, we’re joined by Michael Keeler, Chairman and CEO, LeaseAccelerator, a SaaS platform that helps companies automate the entire life cycle for leases, both real estate and equipment. We talk about:
Managing the product roadmap scientifically & deciding where to invest
Listening to customers in structured ways
Qs Michael asks LeaseAccelerator’s top clients a couple times every year
Delivering what customers want without bloatware heaviness
The way to think about growing a SaaS business over time