talk-data.com

Topic

Cloud Computing

infrastructure saas iaas

4055 tagged

Activity Trend

471 peak/qtr, 2020-Q1 to 2026-Q1

Activities

4055 activities · Newest first

On today’s episode, we’re talking to Gautam Ijoor, President and CEO of Alpha Omega Integration, a company that creates new possibilities through intelligent end-to-end mission-focused government IT solutions.

We talk about:

  • Gautam’s background and his entrepreneurial journey.
  • How Alpha Omega works and the areas they focus on.
  • How Gautam sees SaaS in relation to government.
  • Are concerns about putting data in the cloud over, or is there still work to do?
  • The potential for SaaS companies in the federal contracting space.
  • The importance of ease of use in SaaS.
  • The drawbacks of subscription services for governments.

This episode is brought to you by Qrvey

The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.

Qrvey, the modern no-code analytics solution for SaaS companies on AWS.

#saas #analytics #AWS #BI

SQL Server 2022 Query Performance Tuning: Troubleshoot and Optimize Query Performance

Troubleshoot slow-performing queries and make them run faster. Database administrators and SQL developers are constantly under pressure to deliver more speed. This new edition has been redesigned and rewritten from scratch based on the last 15 years of learning, knowledge, and experience accumulated by the author. The book includes expanded information on using extended events, automatic execution plan correction, and other advanced features now available in SQL Server, while still providing the fundamentals needed to understand how statistics and indexes affect query performance.

The book gives you the knowledge and tools to identify poorly performing queries and understand the possible causes of that poor performance. It also provides mechanisms for resolving the issues identified, whether on-premises, in containers, or on cloud platform providers. You'll learn about key fundamentals, such as statistics, data distribution, cardinality, and parameter sniffing. You'll learn to analyze and design your indexes and queries using best practices that ward off performance problems before they occur. You'll also learn to use important modern features, such as Query Store to manage and control execution plans, the automated performance tuning feature set, and memory-optimized OLTP tables and procedures. You will be able to troubleshoot in a systematic way. Query tuning doesn't have to be difficult; this book makes it much easier.
What You Will Learn

  • Use Query Store to understand and easily change query performance
  • Recognize and eliminate bottlenecks leading to slow performance
  • Tune queries whether on-premises, in containers, or on cloud platform providers
  • Implement best practices in T-SQL to minimize performance risk
  • Design in the performance that you need through careful query and index design
  • Understand how built-in, automatic tuning can assist your performance enhancement efforts
  • Protect query performance during upgrades to newer versions of SQL Server

Who This Book Is For

Developers and database administrators with responsibility for query performance in SQL Server environments, and anyone responsible for writing or creating T-SQL queries and in need of insight into bottlenecks, including how to identify, understand, and eliminate them
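The Query Store workflow the book emphasizes can be illustrated with a short query. This is a hedged sketch, not an excerpt from the book: it uses the standard Query Store catalog views to surface the slowest queries by average duration, a common first step when hunting for tuning candidates.

```sql
-- Top 10 queries by average duration, from the Query Store catalog views.
SELECT TOP (10)
       qsqt.query_sql_text,
       qsrs.avg_duration,
       qsrs.avg_logical_io_reads,
       qsrs.count_executions
FROM sys.query_store_query_text AS qsqt
JOIN sys.query_store_query AS qsq
    ON qsq.query_text_id = qsqt.query_text_id
JOIN sys.query_store_plan AS qsp
    ON qsp.query_id = qsq.query_id
JOIN sys.query_store_runtime_stats AS qsrs
    ON qsrs.plan_id = qsp.plan_id
ORDER BY qsrs.avg_duration DESC;
```

From there, sp_query_store_force_plan can pin a known-good plan for a regressed query, the manual counterpart of the automatic plan correction feature mentioned above.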

Summary Despite the best efforts of data engineers, data is as messy as the real world. Entity resolution and fuzzy matching are powerful utilities for cleaning up data from disconnected sources, but they have typically required custom development and training machine learning models. Sonal Goyal created and open-sourced Zingg as a generalized tool for data mastering and entity resolution to reduce the effort involved in adopting those practices. In this episode she shares the story behind the project, the details of how it is implemented, and how you can use it for your own data projects.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you're ready to build your next pipeline, or want to test out the projects you hear about on the show, you'll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production-ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high-throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don't forget to thank them for their continued support of this show!

RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their extensive library of integrations enables you to automatically send data to hundreds of downstream tools. Sign up free at dataengineeringpodcast.com/rudder

Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days or even weeks. By the time errors have made their way into production, it's often too late and damage is done. Datafold built automated regression testing to help data and analytics engineers deal with data quality in their pull requests. Datafold shows how a change in SQL code affects your data, both on a statistical level and down to individual rows and values, before it gets merged to production. No more shipping and praying: you can now know exactly what will change in your database! Datafold integrates with all major data warehouses as well as frameworks such as Airflow and dbt, and seamlessly plugs into CI workflows. Visit dataengineeringpodcast.com/datafold today to book a demo with Datafold.

Data teams are increasingly under pressure to deliver. According to a recent survey by Ascend.io, 95% in fact reported being at or over capacity. With 72% of data experts reporting demands on their team going up faster than they can hire, it's no surprise they are increasingly turning to automation. In fact, while only 3.5% report having current investments in automation, 85% of data teams plan on investing in automation in the next 12 months. 85%!!! That's where our friends at Ascend.io come in. The Ascend Data Automation Cloud provides a unified platform for data ingestion, transformation, orchestration, and observability. Ascend users love its declarative pipelines, powerful SDK, elegant UI, and extensible plug-in architecture, as well as its support for Python, SQL, Scala, and Java. Ascend automates workloads on Snowflake, Databricks, BigQuery, and open source Spark, and can be deployed in AWS, Azure, or GCP. Go to dataengineeringpodcast.com/ascend and sign up for a free trial. If you're a data engineering podcast listener, you get credits worth $5,000 when you become a customer.

Your host is Tobias Macey and today I'm interviewing Sonal Goyal about Zingg, an open source entity resolution framework.

SQL Server 2022 Revealed: A Hybrid Data Platform Powered by Security, Performance, and Availability

Know how to use the new capabilities and cloud integrations in SQL Server 2022. This book covers the many innovative integrations with the Azure Cloud that make SQL Server 2022 the most cloud-connected edition ever. The book covers cutting-edge features such as the blockchain-based Ledger for creating a tamper-evident record of changes to data over time that you can rely on to be correct and reliable. You'll learn about built-in Query Intelligence capabilities that help you upgrade with confidence that your applications will perform at least as fast after the upgrade as before. In fact, you'll probably see an increase in performance from the upgrade, with no code changes needed. Also covered are innovations such as contained availability groups and data virtualization with S3 object storage.

New cloud integrations covered in this book include Microsoft Azure Purview and the use of Azure SQL for high availability and disaster recovery. The book covers Azure Synapse Link with its built-in capabilities to take changes and put them into Synapse automatically. Anyone building their career around SQL Server will want this book for the valuable information it provides on building SQL skills from edge to the cloud.
What You Will Learn

  • Know how to use all of the new capabilities and cloud integrations in SQL Server 2022
  • Connect to Azure for disaster recovery, near real-time analytics, and security
  • Leverage the Ledger to create a tamper-evident record of data changes over time
  • Upgrade from prior releases and achieve faster and more consistent performance with no code changes
  • Access data and storage in different and new formats, such as Parquet and S3, without moving the data, using your existing T-SQL skills
  • Explore new application scenarios using innovations with T-SQL in areas such as JSON and time series

Who This Book Is For

SQL Server professionals who want to upgrade their skills to the latest edition of SQL Server; those wishing to take advantage of new integrations with Microsoft Azure Purview (governance), Azure Synapse (analytics), and Azure SQL (HA and DR); and those in need of the increased performance and security offered by Query Intelligence and the new Ledger
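As a flavor of the Ledger feature described above, here is a hedged sketch (not taken from the book) of creating an append-only ledger table in SQL Server 2022; the table and column names are invented for illustration.

```sql
-- Append-only ledger table: inserts are allowed, but updates and deletes are
-- blocked, and every row is protected by the database ledger's digests.
CREATE TABLE dbo.AuditEvents
(
    EventId   INT           NOT NULL,
    EventTime DATETIME2     NOT NULL,
    Detail    NVARCHAR(400) NOT NULL
)
WITH (LEDGER = ON (APPEND_ONLY = ON));
```

Integrity can later be checked with the sp_verify_database_ledger procedure against previously generated digests.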

Summary One of the most impactful technologies for data analytics in recent years has been dbt. It’s hard to have a conversation about data engineering or analysis without mentioning it. Despite its widespread adoption there are still rough edges in its workflow that cause friction for data analysts. To help simplify the adoption and management of dbt projects Nandam Karthik helped create Optimus. In this episode he shares his experiences working with organizations to adopt analytics engineering patterns and the ways that Optimus and dbt were combined to let data analysts deliver insights without the roadblocks of complex pipeline management.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you're ready to build your next pipeline, or want to test out the projects you hear about on the show, you'll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production-ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high-throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don't forget to thank them for their continued support of this show!

Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days or even weeks. By the time errors have made their way into production, it's often too late and damage is done. Datafold built automated regression testing to help data and analytics engineers deal with data quality in their pull requests. Datafold shows how a change in SQL code affects your data, both on a statistical level and down to individual rows and values, before it gets merged to production. No more shipping and praying: you can now know exactly what will change in your database! Datafold integrates with all major data warehouses as well as frameworks such as Airflow and dbt, and seamlessly plugs into CI workflows. Visit dataengineeringpodcast.com/datafold today to book a demo with Datafold.

RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their extensive library of integrations enables you to automatically send data to hundreds of downstream tools. Sign up free at dataengineeringpodcast.com/rudder

Data teams are increasingly under pressure to deliver. According to a recent survey by Ascend.io, 95% in fact reported being at or over capacity. With 72% of data experts reporting demands on their team going up faster than they can hire, it's no surprise they are increasingly turning to automation. In fact, while only 3.5% report having current investments in automation, 85% of data teams plan on investing in automation in the next 12 months. 85%!!! That's where our friends at Ascend.io come in. The Ascend Data Automation Cloud provides a unified platform for data ingestion, transformation, orchestration, and observability. Ascend users love its declarative pipelines, powerful SDK, elegant UI, and extensible plug-in architecture, as well as its support for Python, SQL, Scala, and Java. Ascend automates workloads on Snowflake, Databricks, BigQuery, and open source Spark, and can be deployed in AWS, Azure, or GCP. Go to dataengineeringpodcast.com/ascend and sign up for a free trial. If you're a data engineering podcast listener, you get credits worth $5,000 when you become a customer.

Your host is Tobias Macey and today I'm interviewing Nandam Karthik.

Architecting Solutions with SAP Business Technology Platform

Gain a comprehensive understanding of SAP Business Technology Platform (SAP BTP) and its role in the intelligent enterprise. This book provides you with the knowledge and skills to design and implement effective architectural solutions. You'll explore integration strategies, extensibility options, and data processing methods to innovate and enhance your organization's SAP ecosystem.

What this book will help me do

  • Architect enterprise solutions with SAP BTP to address key integration challenges.
  • Leverage SAP BTP tools for process automation and effective solution extensibility.
  • Understand non-functional requirements such as operability and security.
  • Drive innovation by integrating SAP's intelligent technologies into your designs.
  • Utilize SAP BTP to derive actionable insights from business data for value generation.

Author(s)

Serdar Simsekler and Eric Du are experienced professionals in the field of SAP architecture and technology. They bring years of expertise in building enterprise solutions leveraging the latest SAP innovations. Their approachable writing style aims to connect technical concepts with practical enterprise applications, ensuring readers can directly apply the knowledge gained.

Who is it for?

This book is intended for technical architects, solution architects, and enterprise architects who are working with or intending to adopt SAP Business Technology Platform. It is ideal for those seeking to enhance their understanding of SAP's solution ecosystem and deliver innovative systems. A foundational knowledge of IT systems and basic cloud concepts is assumed, as is familiarity with the SAP framework.

Developing on dbt Cloud

The dbt Cloud IDE has gotten a major upgrade. With a fresh new coat of paint, a re-written codebase, and a suite of oft-requested features, the new IDE should amount to a significant quality of life improvement for your team – allowing you to deliver even more value to the business. Experience it in action, and see how to safely distribute data development work in this hands-on session with the dbt Labs product team.

Check the slides here: https://docs.google.com/presentation/d/11-71MIh9ASGM2n-i0KxXc_yf6w1tq0l1bUobWdnfloY/edit?usp=sharing

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

How Cisco powers its modern ELT approach with Fivetran and dbt Labs

Fivetran and dbt Cloud continue to gain traction and popularity among some of the most innovative companies, and for good reason. When combined, Fivetran and dbt Cloud provide an automated, performant, end-to-end ELT pipeline. Join our session where Nikolay Voronchikhin will walk us through Cisco’s journey to building a successful data stack with Fivetran and dbt Cloud. Discover best practices to maximize this powerful combination in your business.

Check the slides here: https://docs.google.com/presentation/d/1-rPuzq0om7ePXXcXFd4lBi72L5svZHNjkx0RWLGgUo8/edit?usp=sharing

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

How to leverage dbt Community as the first & ONLY data hire to survive

As data science and machine learning adoption grew over the last few years, Python moved up the ranks catching up to SQL in popularity in the world of data processing. SQL and Python are both powerful on their own, but their value in modern analytics is highest when they work together. This was a key motivator for us at Snowflake to build Snowpark for Python: to help modern analytics, data engineering, and data science teams generate insights without complex infrastructure management for separate languages.

Join this session to learn more about how dbt's new support for Python-based models and Snowpark for Python can help polyglot data teams get more value from their data through secure, efficient and performant metrics stores, feature stores, or data factories in the Data Cloud.

Check the slides here: https://docs.google.com/presentation/d/1xJEyfg81azw2hVilhGZ5BptnAQo8q1L7aDLGrnSYoUM/edit?usp=sharing

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

Building Leverage with dbt Labs at Airbyte

Airbyte supports loading data into a wide array of databases and data warehouses. We must enforce the same structure and transformations in each of these tools, and writing different transformations for each would be prohibitive. Instead, we use dbt to write this code once and reuse it for every database and data warehouse that we support. In an effort to improve our support across all these tools, we are also introducing a dbt Cloud integration within Airbyte Cloud. This will allow Airbyte Cloud users to leverage the lessons we’ve learned and build their own custom transformations using dbt Cloud.
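The "write once, run on every warehouse" idea described above can be sketched with a minimal dbt model. This is illustrative, not Airbyte's actual code: the source and column names are invented, and the cross-database type macro stands in for the dialect differences dbt papers over.

```sql
-- models/stg_users.sql: one model that dbt compiles into the correct SQL
-- dialect for whichever warehouse adapter the project targets.
select
    cast(_airbyte_ab_id as {{ dbt.type_string() }}) as record_id,
    _airbyte_emitted_at                             as emitted_at
from {{ source('airbyte_raw', 'users') }}
```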

Check the slides here: https://docs.google.com/presentation/d/19asIBrCgs04dJ07zhb1cosYEQHQC0yqEVSMlLcUymZY/edit?usp=sharing

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

Building a Data Platform from Scratch with dbt, Snowflake and Looker

When Prateek Chawla, founding engineer, joined Monte Carlo in 2019, he was responsible for spinning up our data platform from scratch. He was more of a backend/cloud engineer, but, as with any startup, he had to wear many hats, so he got the opportunity to play the role of data engineer too. In this talk, we'll walk through how we spun up Monte Carlo's data stack with Snowflake, Looker, and dbt, touching on how and why we implemented dbt (and later, dbt Cloud), key use cases, and handy tricks for integrating dbt with other popular tools, like Airflow and Spark. We'll discuss what worked, what didn't work, and other lessons learned along the way, as well as share how our data stack evolved over time to scale to meet the demands of our growing startup. We'll also touch on a very critical component of the dbt value proposition, data quality testing, and discuss some of our favorite tests and what we've done to automate and integrate them with other elements of our stack.

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

dbt Labs and Databricks: best practices and future roadmap

The Databricks Lakehouse Platform unifies the best of data warehouses and data lakes in one simple platform to handle all your data, analytics, and AI use cases. Databricks now includes complete support for dbt Core and dbt Cloud, and you will hear how Condé Nast uses dbt and Databricks together to democratize insights. We will also share best practices for developing and productionizing dbt projects containing SQL and Python, governing data with standard SQL, and exciting features on our roadmap, such as materialized views for Databricks SQL.
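As a hedged sketch of the materialized view support mentioned on the roadmap (the syntax is assumed, not confirmed by the session), a dbt model targeting Databricks SQL might look like:

```sql
-- models/daily_orders.sql: ask dbt to materialize this model as a
-- Databricks materialized view; stg_orders is an invented upstream model.
{{ config(materialized='materialized_view') }}

select
    order_date,
    count(*) as order_count
from {{ ref('stg_orders') }}
group by order_date
```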

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

Extend the runway: A deep dive into data warehouse costs

The days of unrestrained cloud spending are coming to an end. In the coming months, data teams will increasingly be asked to better understand, monitor and reduce their data warehouse spend. The problem? Practitioners aren't equipped with the tools to do this well. In this talk, Ian Whitestone (Shopify) and Jonathan Talmi (Snapcommerce) will share their own end-to-end process for cutting warehouse spend, diving into the actual strategies they've deployed to give practitioners immediately actionable processes and techniques to keep costs controlled.
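A first step in the cost-cutting process the speakers describe, assuming a Snowflake warehouse, is simply measuring where credits go. This query is a generic sketch (not taken from the talk) using the standard ACCOUNT_USAGE share:

```sql
-- Credits consumed per virtual warehouse over the last 30 days.
select
    warehouse_name,
    sum(credits_used) as credits_last_30d
from snowflake.account_usage.warehouse_metering_history
where start_time >= dateadd('day', -30, current_timestamp())
group by warehouse_name
order by credits_last_30d desc;
```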

Check the slides here: https://docs.google.com/presentation/d/1SDONuIrAHbdDF0YyZ5krmAnibUTJbL9qosEeoDjFRPM/edit#slide=id.p

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

Adapting data at the speed of business with Sigma & dbt

For too long, there's been contention between business and data teams. Incoming requests to a data team span from adding one specific column for one specific use case, to updating logic for a point-in-time question. The priorities stack up, and data teams find themselves sifting through a myriad of lower-level requests, unable to work on higher, more transformational deliverables. At Sigma, we're flipping the script by enabling analysts and stakeholders alike to iterate and explore in a familiar spreadsheet UI, with the scale and performance unlocked by modern cloud data warehouses. In this talk, we'll cover how we deploy Sigma internally (powered by dbt and dbt Cloud metadata) to truly give the power to generate insights from data back to the business, and create a more effective feedback loop when working with our own data team.

Check the slides here: https://docs.google.com/presentation/d/11jGG6OSwwjtT6gRVpse9VSPjW_mIpYPUfRBx5TSttjk/edit?usp=sharing

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

Automating CI/CD in dbt Cloud: Sunrun's story

Does a two-step deployment workflow for developing, testing, and deploying code to dbt Cloud sound possible? Sunrun thinks so. Join James Sorensen and Jared Stout to learn how they used GitHub Actions and API integrations with dbt Cloud and Jira to entirely automate the CI/CD workflow, saving the team time and worry when moving through SOX certification.
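The automation pattern described above can be sketched as a GitHub Actions workflow that calls the dbt Cloud API on merge. This is a generic illustration, not Sunrun's configuration: the account ID, job ID, and secret name are placeholders.

```yaml
# .github/workflows/deploy.yml: trigger a dbt Cloud job run on merge to main.
name: deploy-dbt-cloud
on:
  push:
    branches: [main]
jobs:
  trigger-dbt-cloud-job:
    runs-on: ubuntu-latest
    env:
      ACCOUNT_ID: "12345"   # placeholder dbt Cloud account ID
      JOB_ID: "67890"       # placeholder dbt Cloud job ID
    steps:
      - name: Trigger dbt Cloud job run
        run: |
          curl --fail -X POST \
            -H "Authorization: Token ${{ secrets.DBT_CLOUD_API_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d '{"cause": "Deploy from GitHub Actions"}' \
            "https://cloud.getdbt.com/api/v2/accounts/${ACCOUNT_ID}/jobs/${JOB_ID}/run/"
```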

Check the slides here: https://docs.google.com/presentation/d/1ZecU0-TN8SxNFpdKdkVksuDjpUy6XiaulqBdfqhLb68/edit#slide=id.g15507761f0b_0_10

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

dbt as a Serverless Service

Are you sold on dbt, but unsure as to how you'll handle deployment, orchestration, and job scheduling? Are you evaluating dbt and looking for an easy way to spin up a proof of concept while seeking buy-in from stakeholders? Look no further! In this workshop we will show you how to containerize your dbt project and execute jobs using GCP's serverless computing products: Cloud Run, Cloud Build, and Cloud Scheduler. If you have an interest in dbt orchestration, DevOps, or serverless cloud architecture, this workshop is for you!
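A minimal container for this pattern might look like the following Dockerfile: a sketch under assumed versions and an assumed adapter choice (dbt-bigquery here), not the workshop's exact setup.

```dockerfile
# Containerize a dbt project so Cloud Run can execute it as a scheduled job.
FROM python:3.11-slim
RUN pip install --no-cache-dir dbt-bigquery
WORKDIR /app
COPY . /app
# Cloud Scheduler -> Cloud Run invocations run `dbt build` against the
# profiles.yml baked into the image (credentials supplied at runtime).
ENTRYPOINT ["dbt"]
CMD ["build", "--profiles-dir", "/app"]
```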

Check the slides here: https://docs.google.com/presentation/d/1NiG0MFkOvw5MNpCZFF74VDuX-jHZpO4a8bHUadukoPI/edit?usp=sharing

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

dbt Labs + Snowflake: Why SQL and Python go perfectly well together

As data science and machine learning adoption grew over the last few years, Python moved up the ranks catching up to SQL in popularity in the world of data processing. SQL and Python are both powerful on their own, but their value in modern analytics is highest when they work together. This was a key motivator for us at Snowflake to build Snowpark for Python: to help modern analytics, data engineering, and data science teams generate insights without complex infrastructure management for separate languages.

Join this session to learn more about how dbt's new support for Python-based models and Snowpark for Python can help polyglot data teams get more value from their data through secure, efficient and performant metrics stores, feature stores, or data factories in the Data Cloud.
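The SQL-plus-Python combination the session describes can be sketched with a Snowflake Python UDF: Python logic registered once, then callable from plain SQL. The function below is invented for illustration.

```sql
-- Register a small Python function as a Snowflake UDF...
create or replace function normalize_email(e varchar)
returns varchar
language python
runtime_version = '3.8'
handler = 'normalize'
as
$$
def normalize(e):
    # Trim whitespace and lowercase, treating NULL as NULL.
    return e.strip().lower() if e is not None else None
$$;

-- ...and call it from ordinary SQL.
select normalize_email('  Ada@Example.COM ') as email;
```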

Check Notion document here: https://www.notion.so/6382db82046f41599e9ec39afb035bdb

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

Jumpstart dbt: How to Achieve Speed and Scale

For enterprises integrating dbt into their transformation technology stack, of paramount importance is how to achieve speed and scale. Sure, dbt will "infinitely scale" due to its underlying cloud-native deployment, but only in reference to hosting, execution, and other platform services. HOW does one onboard hundreds or thousands of users with repeatability, conformity, and engineering excellence by design? HOW does an organization integrate dependent platforms and services? Centrally monitor? Share reusable assets? Maintain security? This talk identifies how Cisco enabled automated services and processes to achieve that scale. Walk away knowing what's in store for you when onboarding dbt, the headwinds we faced, and the success Cisco is seeing in our chosen deployment paradigm.

Check the slides here: https://docs.google.com/presentation/d/1e4fG0_60APnCmFDV5a8X7sPOCbKlto3L/edit?usp=sharing&ouid=110293204340061069659&rtpof=true&sd=true

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

Preparing for the Next Wave: Data Apps

Data apps are the next wave in analytics engineering. The explosion of data volume and variety, combined with increasing demand for analytics by consumers and a leap in cloud data technologies, has triggered an evolution of traditional analytics into the realm of modern data apps. The question is: how do you prepare for this wave? In this session we'll explore real-world examples of modern data apps, and how the modern data stack is advancing to support sub-second and high-concurrency analytics to meet the new wave of demand. We will cover: performance challenges, semi-structured data, data freshness, data modeling, and toolsets.

Check the slides here: https://docs.google.com/presentation/d/1MC18SgT_ZHOJePjYizz_WT7dVveaycNw/edit?usp=sharing&ouid=110293204340061069659&rtpof=true&sd=true

Coalesce 2023 is coming! Register for free at https://coalesce.getdbt.com/.

Summary The database market has seen unprecedented activity in recent years, with new options addressing a variety of needs being introduced on a nearly constant basis. Despite that, there are a handful of databases that continue to be adopted due to their proven reliability and robust features. MariaDB is one of those default options that has continued to grow and innovate while offering a familiar and stable experience. In this episode field CTO Manjot Singh shares his experiences as an early user of MySQL and MariaDB and explains how the suite of products being built on top of the open source foundation address the growing needs for advanced storage and analytical capabilities.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you're ready to build your next pipeline, or want to test out the projects you hear about on the show, you'll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production-ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high-throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don't forget to thank them for their continued support of this show!

You wake up to a Slack message from your CEO, who's upset because the company's revenue dashboard is broken. You're told to fix it before this morning's board meeting, which is just minutes away. Enter Metaplane, the industry's only self-serve data observability tool. In just a few clicks, you identify the issue's root cause, conduct an impact analysis, and save the day. Data leaders at Imperfect Foods, Drift, and Vendr love Metaplane because it helps them catch, investigate, and fix data quality issues before their stakeholders ever notice they exist. Setup takes 30 minutes. You can literally get up and running with Metaplane by the end of this podcast. Sign up for a free-forever plan at dataengineeringpodcast.com/metaplane, or try out their most advanced features with a 14-day free trial. Mention the podcast to get a free "In Data We Trust World Tour" t-shirt.

RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their state-of-the-art reverse ETL pipelines enable you to send enriched data to any cloud tool. Sign up free, or just get the free t-shirt for being a listener of the Data Engineering Podcast, at dataengineeringpodcast.com/rudder.

Data teams are increasingly under pressure to deliver. According to a recent survey by Ascend.io, 95% in fact reported being at or over capacity. With 72% of data experts reporting demands on their team going up faster than they can hire, it's no surprise they are increasingly turning to automation. In fact, while only 3.5% report having current investments in automation, 85% of data teams plan on investing in automation in the next 12 months. 85%!!! That's where our friends at Ascend.io come in. The Ascend Data Automation Cloud provides a unified platform for data ingestion, transformation, orchestration, and observability. Ascend users love its declarative pipelines, powerful SDK, elegant UI, and extensible plug-in architecture, as well as its support for Python, SQL, Scala, and Java. Ascend automates workloads on Snowflake, Databricks, BigQuery, and open source Spark, and can be deployed in AWS, Azure, or GCP. Go to dataengineeringpodcast.com/ascend and sign up for a free trial. If you're a data engineering podcast listener, you get credits worth $5,000 when you become a customer.