talk-data.com

Topic: Snowflake

Tags: data_warehouse, cloud, analytics, olap

550 activities tagged

Activity Trend: peak of 193 activities per quarter (2020-Q1 through 2026-Q1)

Activities

550 activities · Newest first

Using data pipeline contract to prevent breakage in analytics reporting - Coalesce 2023

It’s 2023, so why are software engineers still breaking analytics reporting? We’ve all been there: an analyst or C-level stakeholder alerts you that “this report is broken,” and you spend hours determining that an engineer deleted a column from the source database, breaking your pipeline and reporting.

At Xometry, the data engineering team wanted to fix this problem at its root and give the engineering teams a clear, repeatable process that made them the owners of their own database data. Xometry named the process DPICT (data pipeline contract) and built several internal tools that integrated seamlessly with their developers' microservice toolsets.

Their software engineers mostly build their database microservices on Postgres and ingest the data using Fivetran. Using that as the baseline, the team created a set of tools that let engineers quickly build the staging layer of their database in the data warehouse (Snowflake), and that alert them to the downstream reporting consequences of removing a table or column.

In this talk, Jisan shares the nuts and bolts of the solution and process that allowed the team to onboard 13 different microservices seamlessly, working with multiple domains and dozens of developers. The process also helped software engineers own their own data and understand their impact. The team has saved hundreds of hours of data engineering time and resources by not having to chase down what changed upstream to break data. Overall, this process has helped bring transparency to the whole data ecosystem.
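The talk doesn't publish DPICT's internals, but the core idea of a pipeline contract check is straightforward to sketch. Below is a minimal, hypothetical Python illustration: a declared contract (the columns downstream reporting depends on) is compared against the schema currently observed in the source database, and any removed table or column is reported before it can break reporting. All table, column, and function names here are invented for the example; a real system would read the observed schema from Postgres's information_schema and the contract from version-controlled config.

```python
# Minimal sketch of a data pipeline contract check (hypothetical names).

CONTRACT = {
    # table -> columns that downstream reporting depends on
    "orders": {"id", "customer_id", "total_amount", "created_at"},
    "customers": {"id", "email", "region"},
}

OBSERVED_SCHEMA = {
    # what the source database currently exposes; "total_amount" was dropped
    "orders": {"id", "customer_id", "created_at"},
    "customers": {"id", "email", "region"},
}


def find_breaking_changes(contract, observed):
    """Return a list of human-readable violations of the contract."""
    violations = []
    for table, required_cols in contract.items():
        if table not in observed:
            violations.append(f"table '{table}' was removed")
            continue
        for col in sorted(required_cols - observed[table]):
            violations.append(f"column '{table}.{col}' was removed")
    return violations


if __name__ == "__main__":
    for v in find_breaking_changes(CONTRACT, OBSERVED_SCHEMA):
        print("CONTRACT VIOLATION:", v)
    # -> CONTRACT VIOLATION: column 'orders.total_amount' was removed
```

Wired into CI on a service's repository, a check like this is what would let engineers see the downstream consequences of a schema change before merging it.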

Speaker: Jisan Zaman, Data Engineering Manager, Xometry

Register for Coalesce at https://coalesce.getdbt.com

Need for speed (and less spending): The story of finance data at Snowflake - Coalesce 2023

Take a look at how the finance data team at Snowflake leverages dbt to drive strategic decision-making within the finance and accounting organizations. This talk spans topics including development velocity, cost governance, and data stewardship.

Speakers: Sandra Herchen, Analytics Engineering Manager, Snowflake; Jack Peele, Business Intelligence Analyst, Snowflake

Register for Coalesce at https://coalesce.getdbt.com

60 sources and counting: Unlocking microservice integration with dbt and Data Vault - Coalesce 2023

The Guild team migrated to Snowflake and dbt for their data warehousing needs and immediately saw the benefits of standardized model structure, DRYer logic, data lineage, and automated testing on pull requests.

But leveraging dbt didn’t solve everything. Pain points around maintaining model logic, handling historical data, and integrating data from over 60 source systems meant that analysts still struggled to provide a unified view of the business. The team knew that they needed to level up their processes and modeling again, and chose to adopt Data Vault (DV).

Brandon and Rebecca take you behind the scenes of this decision to explain the benefits of Data Vault. They highlight DV’s ability to handle complex data integration requirements while remaining agile and demonstrate that it complements other modern data concepts like domain-driven design and data mesh.

Attendees learn what Data Vault is, when it can be a key component of a successful data strategy, and instances where it’s not the right fit. Walk away with practical tips to successfully transition based on a real-world implementation.
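The session covers Data Vault at the strategy level; as a concrete taste of one of its core mechanics, here is a minimal, hypothetical Python sketch of how a Data Vault hub key is typically derived: a deterministic hash of the business key, so that the same entity arriving from any of 60+ source systems resolves to the same hub record. The column and source names are invented for illustration.

```python
import hashlib
from datetime import datetime, timezone


def hub_hash_key(*business_key_parts: str) -> str:
    """Derive a deterministic Data Vault hash key from a business key.

    Normalizing (trim + uppercase) before hashing is a common DV
    convention so that 'acme' and ' ACME ' map to the same hub record.
    """
    normalized = "||".join(p.strip().upper() for p in business_key_parts)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()


# The same customer arriving from two different source systems
# resolves to one hub entry.
record_from_crm = {"customer_number": " c-1001 ", "source": "crm"}
record_from_billing = {"customer_number": "C-1001", "source": "billing"}

hub_customer = {
    "hub_customer_hash_key": hub_hash_key(record_from_crm["customer_number"]),
    "customer_number": "C-1001",
    "load_datetime": datetime.now(timezone.utc),
    "record_source": "crm",
}

assert hub_customer["hub_customer_hash_key"] == hub_hash_key(
    record_from_billing["customer_number"]
)
print(hub_customer)
```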

Guild transformed their data warehouse; you can too!

Speakers: Brandon Taylor, Senior Data Architect, Guild; Rebecca Di Bari, Staff Data Engineer, Guild

Register for Coalesce at https://coalesce.getdbt.com

Notion’s blueprint for adapting data science models to changing sales processes - Coalesce 2023

Prioritizing the right sales opportunities is pivotal for any SaaS company's growth, but what happens after your initial success? Jessica Zhang, Data Science Manager at Notion, traces Notion's footsteps from its foundational days to its present-day lead scoring techniques. Learn how modern tools like dbt, Census, and Snowflake enable the Notion team to iterate quickly. More than a journey, this session is a lesson on evolving a data science model in response to changing business assumptions and fresh user insights.
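The talk focuses on process rather than code, but to make "lead scoring" concrete, here is a generic, minimal Python sketch of a weighted scoring function. The features, weights, and threshold are invented for illustration and are not Notion's model.

```python
# Generic lead-scoring sketch (hypothetical features and weights).
# In practice the weights would come from a trained model, scores would
# be written back to the warehouse (e.g. via dbt), and synced to the
# CRM (e.g. via a reverse-ETL tool such as Census).

WEIGHTS = {
    "team_size": 0.4,            # larger teams -> more likely to convert
    "weekly_active_days": 0.35,
    "integrations_enabled": 0.25,
}


def score_lead(features: dict) -> float:
    """Weighted sum of feature values normalized to [0, 1]."""
    return sum(WEIGHTS[name] * value for name, value in features.items())


lead = {"team_size": 0.8, "weekly_active_days": 0.6, "integrations_enabled": 0.5}
score = score_lead(lead)
print(f"lead score: {score:.2f}", "-> route to sales" if score > 0.6 else "-> nurture")
```

Keeping the weights as data rather than code is one simple way to let a model like this evolve as business assumptions change, which is the theme of the session.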

Speakers: Jessica Zhang, Data Science Manager, Notion; Jeff Sloan, Sr. Data Community Advocate, Census

Register for Coalesce at https://coalesce.getdbt.com

Enhancing the developer experience with the power of Snowflake and dbt - Coalesce 2023

In the rapidly evolving landscape of data technology, the integration of Snowflake and dbt has revolutionized the creation and management of data applications. Now, developers can harness their combined capabilities to build superior, scalable, and sophisticated data applications.

With Snowflake’s cloud-based architecture, developers can access boundless storage, computing, and seamless data sharing. Additionally, Snowpark Python enables the performance of data transformation, analytics, and algorithmic functions within Snowflake, presenting developers with a new realm of opportunities. Incorporating dbt further enhances the synergy, allowing developers to streamline data workflows in an agile, model-driven environment.

This session covers how the Snowflake and dbt partnership can pave the way toward building better, future-proof data applications that cater to the dynamic needs of businesses in the digital era.
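As a flavor of what Snowpark Python looks like in practice, here is a minimal sketch using the snowflake-snowpark-python package. The connection parameters, table, and column names are placeholders; the point is that the transformation is written in Python but pushed down to execute inside Snowflake.

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

# Placeholder credentials -- fill these in for a real account.
connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}

session = Session.builder.configs(connection_parameters).create()

# A Snowpark DataFrame is a lazy plan; nothing runs until an action.
revenue_by_region = (
    session.table("ORDERS")                  # hypothetical table
    .filter(col("STATUS") == "SHIPPED")
    .group_by("REGION")
    .agg(sum_("AMOUNT").alias("TOTAL_REVENUE"))
)

revenue_by_region.show()  # executes as SQL inside Snowflake
```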

Speaker: Tarik Dwiek, Head of Technology and Application Partners, Snowflake

Register for Coalesce at https://coalesce.getdbt.com

A complete beginner's guide to Snowpark in dbt - Coalesce 2023

Now that you can write models in Python, a new world of possibility has opened up. In this session, Christopher Marland introduces you to Snowpark and how it integrates with dbt, before demonstrating a real-world use case where Python transformations outperform SQL, starting from raw data and moving through to a completed analysis.

This talk is ideal for people who are familiar with PySpark but new to dbt, or who are experienced dbt users and curious about taking advantage of their new Pythonic superpowers from inside of a familiar development environment.
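For readers who haven't seen one, a dbt Python model is a file in your models directory that defines a model(dbt, session) function and returns a DataFrame; on Snowflake it runs via Snowpark. A minimal sketch follows, with the model and column names invented for illustration.

```python
# models/customer_order_stats.py -- a minimal dbt Python model.
# On Snowflake, `session` is a Snowpark session and dbt.ref() returns
# a Snowpark DataFrame; the DataFrame you return is what dbt materializes.

import snowflake.snowpark.functions as F


def model(dbt, session):
    dbt.config(materialized="table")

    orders = dbt.ref("stg_orders")  # hypothetical upstream dbt model

    # Logic that is awkward to express in SQL can live here in Python.
    return orders.group_by("CUSTOMER_ID").agg(
        F.count("ORDER_ID").alias("ORDER_COUNT"),
        F.avg("AMOUNT").alias("AVG_ORDER_VALUE"),
    )
```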

Speaker: Christopher Marland, Snowflake Solutions Architect, Aimpoint Digital

Register for Coalesce at https://coalesce.getdbt.com

Designing a Modern Application Data Stack

Today's massive datasets represent an unprecedented opportunity for organizations to build data-intensive applications. With this report, product leads, architects, and others who deal with applications and application development will explore why a cloud data platform is a great fit for data-intensive applications. You'll learn how to carefully consider scalability, data processing, and application distribution when making data app design decisions. Cloud data platforms are the modern infrastructure choice for data applications, as they offer improved scalability, elasticity, and cost efficiency. With a better understanding of data-intensive application architectures on cloud-based data platforms and the best practices outlined in this report, application teams can take full advantage of advances in data processing and app distribution to accelerate development, deployment, and adoption cycles.

With this insightful report, you will:

- Learn why a modern cloud data platform is essential for building data-intensive applications
- Explore how scalability, data processing, and distribution models are key for today's data apps
- Implement best practices to improve application scalability and simplify data processing for efficiency gains
- Modernize application distribution plans to meet the needs of app providers and consumers

About the authors: Adam Morton works with Intelligen Group, a Snowflake pure-play data and analytics consultancy. Kevin McGinley is technical director of the Snowflake customer acceleration team. Brad Culberson is a data platform architect specializing in data applications at Snowflake.

Data and monolith: Scaling a computationally slim 1500+ model beast - Coalesce 2023

Learn how ClickUp uses dbt, dbt packages, and Snowflake to save on storage and compute costs with Slim CI, and how they empower a data-warehouse-centric culture across Sales, Marketing, Product Growth, Finance, and RevOps, all while maintaining one monolithic dbt build job.
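"Slim CI" refers to dbt's state-comparison feature: a CI job builds only the models that changed relative to production artifacts, deferring to production for unchanged parents. As a rough illustration (not ClickUp's actual setup), here is one way such a job could be expressed with dbt's programmatic Python entry point, available in dbt-core 1.5 and later; the artifact path is hypothetical.

```python
# Sketch of a Slim CI invocation via dbt's programmatic API
# (dbt-core >= 1.5). Equivalent to the CLI command:
#   dbt build --select state:modified+ --defer --state ./prod-artifacts
from dbt.cli.main import dbtRunner

res = dbtRunner().invoke([
    "build",
    "--select", "state:modified+",   # only changed models and their children
    "--defer",                       # read unchanged parents from prod
    "--state", "./prod-artifacts",   # hypothetical path to prod manifest.json
])
if not res.success:
    raise SystemExit(1)
```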

Speaker: Michael Revelo, Data Platform Lead, ClickUp

Register for Coalesce at https://coalesce.getdbt.com

How Rebtel increased data product value: A migration story - Coalesce 2023

In this session, you'll learn about Rebtel's migration journey from a legacy architecture to the modern data stack. The challenges of Rebtel's legacy stack were eroding the value of its data products across the company, so it was time to migrate. Learn how the team is using dbt Cloud and Snowflake to achieve greater success in delivering value to the business. You'll leave with a richer understanding of how to plan and execute a legacy migration.

Speaker: Quentin Coviaux, Data Engineer, Rebtel

Register for Coalesce at https://coalesce.getdbt.com

Becoming the exponential enterprise with analytics engineering and the Data Cloud - Coalesce 2023

Join Snowflake & Deloitte as they discuss how organizations can become exponential-enterprise ready through the power of the Snowflake Data Cloud and dbt Cloud's ability to write, test, and ship reliable data quickly. This session covers what your organization needs to do to become exponential-enterprise ready, and shares examples of organizations that have already made the transformation successfully and why they are winning in the market with dbt, Deloitte, and Snowflake.

Speakers: Mathew Zele, Cloud & ISV Lead, Snowflake; Vivek Pradhan, Lead Partner, Data and AI Platforms, Deloitte; Sagar Kulkarni, Partner Sales Engineer, Snowflake

Register for Coalesce at https://coalesce.getdbt.com

Summary

Building streaming applications has gotten substantially easier over the past several years. Despite this, it is still operationally challenging to deploy and maintain your own stream processing infrastructure. Decodable was built with a mission of eliminating all of the painful aspects of developing and deploying stream processing systems for engineering teams. In this episode Eric Sammer discusses why more companies are including real-time capabilities in their products and the ways that Decodable makes it faster and easier.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack

This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs into your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold

You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation, or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize today to get 2 weeks free!

As more people start using AI for projects, two things are clear: it’s a rapidly advancing field, but it’s tough to navigate. How can you get the best results for your use case? Instead of being subjected to a bunch of buzzword bingo, hear directly from pioneers in the developer and data science space on how they use graph tech to build AI-powered apps. Attend the dev and ML talks at NODES 2023, a free online conference on October 26 featuring some of the brightest minds in tech. Check out the agenda and register today at Neo4j.com/NODES.

Your host is Tobias Macey and today I'm interviewing Eric Sammer about starting your stream processing journey with Decodable

Interview

Introduction

How did you get involved in the area of data management?
Can you describe what Decodable is and the story behind it?
What are the notable changes to the Decodable platform since we last spoke? (October 2021)
What are the industry shifts that have influenced the product direction?
What are the problems that customers are trying to solve when they come to Decodable?
When you launched, your focus was on SQL transformations of streaming data. What was the process for adding full Java support in addition to SQL?
What are the developer experience challenges that are particular to working with streaming data?
How have you worked to address that in the Decodable platform and interfaces?
As you evolve the technical and product direction, what is your heuristic for balancing the unification of interfaces and system integration against the ability to swap different components or interfaces as new technologies are introduced?
What are the most interesting, innovative, or unexpected ways that you have seen Decodable used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Decodable?
When is Decodable the wrong choice?
What do you have planned for the future of Decodable?

Contact Info

esammer on GitHub; LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show, then tell us about it! Email [email protected] with your story. To help other people find the show, please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

- Decodable (Podcast Episode)
- Flink: Understanding the Apache Flink Journey (Podcast Episode)
- Debezium (Podcast Episode)
- Kafka; Redpanda (Podcast Episode)
- Kinesis; PostgreSQL (Podcast Episode)
- Snowflake (Podcast Episode)
- Databricks; Startree Pinot (Podcast Episode)
- Rockset (Podcast Episode)
- Druid; InfluxDB; Samza; Storm; Pulsar (Podcast Episode)
- ksqlDB (Podcast Episode)
- dbt; GitHub Actions; Airbyte; Singer; Splunk; Outbox Pattern

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA.

Architecting Data and Machine Learning Platforms

All cloud architects need to know how to build data platforms that enable businesses to make data-driven decisions and deliver enterprise-wide intelligence in a fast and efficient way. This handbook shows you how to design, build, and modernize cloud native data and machine learning platforms using AWS, Azure, Google Cloud, and multicloud tools like Snowflake and Databricks. Authors Marco Tranquillin, Valliappa Lakshmanan, and Firat Tekiner cover the entire data lifecycle from ingestion to activation in a cloud environment using real-world enterprise architectures. You'll learn how to transform, secure, and modernize familiar solutions like data warehouses and data lakes, and you'll be able to leverage recent AI/ML patterns to get accurate and quicker insights to drive competitive advantage.

You'll learn how to:

- Design a modern and secure cloud native or hybrid data analytics and machine learning platform
- Accelerate data-led innovation by consolidating enterprise data in a governed, scalable, and resilient data platform
- Democratize access to enterprise data and govern how business teams extract insights and build AI/ML capabilities
- Enable your business to make decisions in real time using streaming pipelines
- Build an MLOps platform to move to a predictive and prescriptive analytics approach

Bob Muglia joins the show to talk about why we need to move beyond SQL, semantic models, the power of knowledge graphs, the past, present, and future of databases, and much more.

Bob has a storied history in the data space, having significant involvement with Microsoft Office, SQL Server, Snowflake (former CEO), and much more. It's rare to meet someone with such a deep involvement in the creation of our industry, and this was a fascinating conversation.

Also, check out Bob's book, "The Datapreneurs." One of the best books about the data industry I've read in ages.

LinkedIn: https://www.linkedin.com/in/bob-muglia/

X: https://twitter.com/Bob_Muglia

Book: https://www.amazon.com/Datapreneurs-Promise-Creators-Building-Future-ebook/dp/B0BZQFJ5RP

Learning Snowflake SQL and Scripting

To help you on the path to becoming a Snowflake pro, this concise yet comprehensive guide reviews fundamentals and best practices for Snowflake's SQL and Scripting languages. Developers and data professionals will learn how to generate, modify, and query data in the Snowflake relational database management system as well as how to apply analytic functions for reporting. Author Alan Beaulieu also shows you how to create scripts, stored functions, and stored procedures to return data sets using Snowflake Scripting. This book is ideal whether you're new to databases and need to run queries or reports against a Snowflake database, or transitioning from databases such as Oracle, SQL Server, or MySQL to cloud-based platforms.

With this book, you will:

- Generate and modify Snowflake data using INSERT, UPDATE, DELETE
- Query data in Snowflake using SELECT, including joining multiple tables, using subqueries, and grouping
- Apply analytic functions for performing subtotals, grand totals, row comparisons, and other reporting functionality
- Build scripts combining SQL statements with looping, if-then-else, and exception handling
- Learn how to build stored procedures and functions
- Use stored procedures to return data sets
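As a small taste of the querying techniques the book covers, here is a sketch of running a Snowflake analytic (window) function from Python using the snowflake-connector-python package. The connection parameters and the orders table are placeholders, not examples from the book.

```python
import snowflake.connector

# Placeholder credentials -- fill these in for a real account.
conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="<warehouse>",
    database="<database>",
    schema="<schema>",
)

# A window function computes per-region subtotals and rankings without
# collapsing rows -- the reporting pattern analytic functions enable.
query = """
    SELECT
        region,
        order_id,
        amount,
        SUM(amount) OVER (PARTITION BY region) AS region_total,
        RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rank_in_region
    FROM orders  -- hypothetical table
    ORDER BY region, rank_in_region
"""

with conn.cursor() as cur:
    cur.execute(query)
    for region, order_id, amount, region_total, rank_in_region in cur:
        print(region, order_id, amount, region_total, rank_in_region)

conn.close()
```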

Improving Urban Mobility through Geospatial Analytics | Fawad A. Qureshi | Snowflake

Fawad A. Qureshi, Industry Field CTO at Snowflake, explores how Voi Technologies, a Swedish e-scooter sharing company, is revolutionizing urban mobility through the power of geospatial analytics.

He also discusses the challenges of urban transportation and how Voi is using data-driven insights to optimize scooter placement, improve safety, and create more efficient transportation networks. By harnessing the power of geospatial analytics, Voi is redefining urban mobility and improving the overall quality of life in cities. Join us to learn how data can transform the way we move around our cities.

To learn more about mobility, check out: https://carto.com/solutions/mobility-planning

Summary

As businesses increasingly invest in technology and talent focused on data engineering and analytics, they want to know whether they are benefiting. So how do you calculate the return on investment for data? In this episode Barr Moses and Anna Filippova explore that question and provide useful exercises to start answering that in your company.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack

Your host is Tobias Macey and today I'm interviewing Barr Moses and Anna Filippova about how and whether to measure the ROI of your data team

Interview

Introduction

How did you get involved in the area of data management?
What are the typical motivations for measuring and tracking the ROI for a data team?
Who is responsible for collecting that information?
How is that information used and by whom?
What are some of the downsides/risks of tracking this metric? (law of unintended consequences)
What are the inputs to the number that constitutes the "investment"? Infrastructure, payroll of employees on the team, time spent working with other teams?
What are the aspects of data work and its impact on the business that complicate a calculation of the "return" that is generated?
How should teams think about measuring data team ROI?
What are some concrete ROI metrics data teams can use?
What level of detail is useful? What dimensions should be used for segmenting the calculations?
How can visibility into this ROI metric be best used to inform the priorities and project scopes of the team?
With so many tools in the modern data stack today, what is the role of technology in helping drive or measure this impact?
How do your respective solutions, Monte Carlo and dbt, help teams measure and scale data value?
With generative AI on the upswing of the hype cycle, what are the impacts that you see it having on data teams?
What are the unrealistic expectations that it will produce? How can it speed up time to delivery?
What are the most interesting, innovative, or unexpected ways that you have seen data team ROI calculated and/or used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on measuring the ROI of data teams?
When is measuring ROI the wrong choice?

Contact Info

Barr: LinkedIn
Anna: LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show, then tell us about it! Email [email protected] with your story. To help other people find the show, please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

- Monte Carlo (Podcast Episode)
- dbt (Podcast Episode)
- JetBlue Snowflake Con Presentation; Generative AI; Large Language Models

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA.

Summary

All software systems are in a constant state of evolution. This makes it impossible to select a truly future-proof technology stack for your data platform, making an eventual migration inevitable. In this episode Gleb Mezhanskiy and Rob Goretsky share their experiences leading various data platform migrations, and the hard-won lessons that they learned so that you don't have to.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack

Modern data teams are using Hex to 10x their data impact. Hex combines a notebook-style UI with an interactive report builder. This allows data teams to both dive deep to find insights and then share their work in an easy-to-read format to the whole org. In Hex you can use SQL, Python, R, and no-code visualization together to explore, transform, and model data. Hex also has AI built directly into the workflow to help you generate, edit, explain, and document your code. The best data teams in the world, such as the ones at Notion, AngelList, and Anthropic, use Hex for ad hoc investigations, creating machine learning models, and building operational dashboards for the rest of their company. Hex makes it easy for data analysts and data scientists to collaborate together and produce work that has an impact. Make your data team unstoppable with Hex. Sign up today at dataengineeringpodcast.com/hex to get a 30-day free trial for your team!

Your host is Tobias Macey and today I'm interviewing Gleb Mezhanskiy and Rob Goretsky about when and how to think about migrating your data stack

Interview

Introduction

How did you get involved in the area of data management?
A migration can be anything from a minor task to a major undertaking. Can you start by describing what constitutes a migration for the purposes of this conversation?
Is it possible to completely avoid having to invest in a migration?
What are the signals that point to the need for a migration?
What are some of the sources of cost that need to be accounted for when considering a migration (both the costs of doing one and the costs of not doing one)?
What are some signals that a migration is not the right solution for a perceived problem?
Once the decision has been made that a migration is necessary, what are the questions that the team should be asking to determine the technologies to move to and the sequencing of execution?
What are the preceding tasks that should be completed before starting the migration to ensure there is no breakage downstream of the changing component(s)?
What are some of the ways that a migration effort might fail?
What are the major pitfalls that teams need to be aware of as they work through a data platform migration?
What are the opportunities for automation during the migration process?
What are the most interesting, innovative, or unexpected ways that you have seen teams approach a platform migration?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on data platform migrations?
What are some ways that the technologies and patterns that we use can be evolved to reduce the cost/impact/need for migrations?

Contact Info

Gleb: LinkedIn; @glebmm on Twitter
Rob: LinkedIn; RobGoretsky on GitHub

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show, then tell us about it! Email [email protected] with your story. To help other people find the show, please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

- Datafold (Podcast Episode)
- Informatica; Airflow; Snowflake (Podcast Episode)
- Redshift; Eventbrite; Teradata; BigQuery; Trino; EMR (Elastic MapReduce); Shadow IT (Podcast Episode)
- Mode Analytics; Looker; Sunk Cost Fallacy; data-diff (Podcast Episode)
- SQLGlot; Dagster; dbt

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA.

Comparing Databricks and Snowflake for Machine Learning

Snowflake and Databricks both aim to provide data science toolkits for machine learning workflows, albeit with different approaches and resources. While developing ML models is technically possible using either platform, the Hitachi Solutions Empower team tested which solution would be easier, faster, and cheaper to work with in terms of both user experience and business outcomes for our customers. To do this, we designed and conducted a series of experiments with use cases from the TPCx-AI benchmark standard. We developed both single-node and multi-node versions of these experiments, which, in Snowflake's case, sometimes required us to set up separate compute infrastructure outside of the platform. We also built datasets of various sizes (1GB, 10GB, and 100GB) to assess how each platform/node setup handles scale.

Based on our findings, on average, Databricks is faster, cheaper, and easier to use for developing machine learning models, and we use it exclusively for data science on the Empower platform. Snowflake's reliance on third-party resources for distributed training is a major drawback, and the need to use multiple compute environments to scale up training is complex and, in our view, an unnecessary complication to achieve the best results.

Talk by: Michael Green and Don Scott


Planning and Executing a Snowflake Data Warehouse Migration to Databricks

Organizations are going through a critical phase of data infrastructure modernization, laying the foundation for the future, and adapting to support growing data and AI needs. Organizations that embraced cloud data warehouses (CDWs) such as Snowflake have ended up trying to use a data warehousing tool for ETL pipelines and data science. This created unnecessary complexity and resulted in poor performance, since data warehouses are optimized for SQL-based analytics only.

Realizing the limitations and pain of cloud data warehouses, organizations are turning to a lakehouse-first architecture. Though a cloud-platform-to-cloud-platform migration should be relatively easy, the breadth of the Databricks platform provides flexibility and hence requires careful planning and execution. In this session, we present the migration methodology, technical approaches, automation tools, product/feature mapping, a technical demo, and best practices, using real-world case studies for migrating data, ELT pipelines, and warehouses from Snowflake to Databricks.

Talk by: Satish Garla and Ramachandran Venkat


Summary

Data has been one of the most substantial drivers of business and economic value for the past few decades. Bob Muglia has had a front-row seat to many of the major shifts driven by technology over his career. In his recent book "Datapreneurs" he reflects on the people and businesses that he has known and worked with and how they relied on data to deliver valuable services and drive meaningful change.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack

Your host is Tobias Macey and today I'm interviewing Bob Muglia about his recent book about the idea of "Datapreneurs" and the role of data in the modern economy

Interview

Introduction

How did you get involved in the area of data management?
Can you describe what your concept of a "Datapreneur" is?
How is this distinct from the common idea of an entrepreneur?
What do you see as the key inflection points in data technologies and their impacts on business capabilities over the past ~30 years?
In your role as the CEO of Snowflake you had a front-row seat for the rise of the "modern data stack". What do you see as the main positive and negative impacts of that paradigm?
What are the key issues that are yet to be solved in that ecosystem?
For technologists who are thinking about launching new ventures, what are the key pieces of advice that you would like to share?
What do you see as the short/medium/long-term impact of AI on the technical, business, and societal arenas?
What are the most interesting, innovative, or unexpected ways that you have seen business leaders use data to drive their vision?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on the Datapreneurs book?
What are your key predictions for the future impact of data on the technical/economic/business landscapes?

Contact Info

LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show, then tell us about it! Email [email protected] with your story. To help other people find the show, please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

- Datapreneurs (book)
- SQL Server; Snowflake; Z80 Processor; Navigational Database; System R; Redshift; Microsoft Fabric; Databricks; Looker; Fivetran (Podcast Episode)
- Databricks Unity Catalog; RelationalAI; 6th Normal Form; Pinecone Vector DB (Podcast Episode)
- Perplexity AI

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA.