talk-data.com
Topic: Data Lakehouse (489 tagged)
Top Events
What is a Data Lakehouse? It may sound like just another fad, but it isn't: it is a new way of building a platform that simplifies and democratizes access to data from the moment it is created. Neat, right? This and many other discussions ran through our episode 44, featuring the Data Engineering experts from Grupo Boticário.
We brought in GB's leading references in Data Engineering and Architecture to give us this masterclass: Robson Mendonça (Senior Data Engineering Manager), Edson Junior (Data Engineering Manager), and Marcus Bittencourt (Data Architecture and Platform Manager).
See the episode links in our Medium post: https://medium.com/data-hackers/construindo-data-lakehouse-e-muito-mais-no-grupo-botic%C3%A1rio-data-hackers-podcast-44-20d67f05cfa4
Summary
Data lakes have been gaining popularity alongside an increase in their sophistication and usability. Despite improvements in performance and data architecture, they still require significant knowledge and experience to deploy and manage. In this episode Vikrant Dubey discusses his work on the Cuelake project, which allows data analysts to build a lakehouse with SQL queries. By building on top of Zeppelin, Spark, and Iceberg, he and his team at Cuebook have built an autoscaling, cloud-native system that abstracts away the underlying complexity.
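Cuelake's core idea, expressing each pipeline step as a SQL statement while the platform handles execution and scaling, can be sketched in miniature. The toy below uses Python's built-in sqlite3 purely as a self-contained stand-in for the real stack (Cuelake itself runs on Zeppelin, Spark, and Iceberg), and the table and column names are invented for illustration.

```python
import sqlite3

# Stand-in for the lakehouse engine; in Cuelake this would be Spark + Iceberg.
conn = sqlite3.connect(":memory:")

# "Load": land raw events as-is in a staging table.
conn.execute("CREATE TABLE raw_events (user_id INTEGER, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?, ?)",
    [(1, 10.0, "ok"), (1, 5.0, "ok"), (2, 7.5, "failed"), (2, 2.5, "ok")],
)

# "Transform": the whole step is a declarative SQL statement, no scripting.
conn.execute("""
    CREATE TABLE revenue_by_user AS
    SELECT user_id, SUM(amount) AS revenue
    FROM raw_events
    WHERE status = 'ok'
    GROUP BY user_id
""")

print(conn.execute("SELECT * FROM revenue_by_user ORDER BY user_id").fetchall())
# [(1, 15.0), (2, 2.5)]
```

The analyst only writes the CREATE TABLE ... AS SELECT transformation; everything an engineer would otherwise script (cluster lifecycle, scheduling, table maintenance) is what a system like Cuelake aims to absorb.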
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you're ready to build your next pipeline, or want to test out the projects you hear about on the show, you'll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it's now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you've got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don't forget to thank them for their continued support of this show!

Are you bored with writing scripts to move data into SaaS tools like Salesforce, Marketo, or Facebook Ads? Hightouch is the easiest way to sync data into the platforms that your business teams rely on. The data you're looking for is already in your data warehouse and BI tools. Connect your warehouse to Hightouch, paste a SQL query, and use their visual mapper to specify how data should appear in your SaaS systems. No more scripts, just SQL. Supercharge your business teams with customer data using Hightouch for Reverse ETL today. Get started for free at dataengineeringpodcast.com/hightouch.

Have you ever had to develop ad-hoc solutions for security, privacy, and compliance requirements? Are you spending too much of your engineering resources on creating database views, configuring database permissions, and manually granting and revoking access to sensitive data? Satori has built the first DataSecOps Platform that streamlines data access and security. Satori's DataSecOps automates data access controls, permissions, and masking for all major data platforms such as Snowflake, Redshift, and SQL Server, and even delegates data access management to business users, helping you move your organization from default data access to need-to-know access. Go to dataengineeringpodcast.com/satori today and get a $5K credit for your next Satori subscription.

Your host is Tobias Macey and today I'm interviewing Vikrant Dubey about Cuebook and their Cuelake project for building ELT pipelines for your data lakehouse entirely in SQL.
Interview
Introduction
How did you get involved in the area of data management?
Can you describe what Cuelake is and the story behind it?
There are a number of platforms and projects for running SQL workloads and transformations on a data lake. What was lacking in those systems that you are addressing with Cuelake?
Who are the target users of Cuelake, and how has that influenced the features and design of the system?
Can you describe how Cuelake is implemented?
What was your selection process for the various components?
What are some of the sharp edges that you have had to work around when integrating these components?
What is involved in getting Cuelake deployed?
How are you using Cuelake in your work at Cuebook?
Given your focus on machine learning for anomaly detection of business metrics, what are the challenges that you faced in using a data warehouse for those workloads?
What are the advantages that a data lake/lakehouse architecture maintains over a warehouse?
What are the shortcomings of the lake/lakehouse approach that are solved by using a warehouse?
What are the most interesting, innovative, or unexpected ways that you have seen Cuelake used?
This audio blog is about the data lakehouse and how it is the latest attempt by a handful of data lake providers to usurp the rapidly changing cloud data warehousing market. It is one of three blogs featured in the data lakehouse series.
Originally published at: https://www.eckerson.com/articles/all-hail-the-data-lakehouse-if-built-on-a-modern-data-warehouse
This is an audio blog about the perplexities of the Data Lakehouse and whether it is, indeed, the "paradigm of the decade". To hear more of Eckerson Group's perspectives on the data lakehouse, be sure to check out the blogs from colleagues Wayne Eckerson and Kevin Petrie, and the recording of our recent Shop Talk discussion.
Originally published at: https://www.eckerson.com/articles/an-architect-s-view-of-the-data-lakehouse-perplexity-and-perspective
This audio blog discusses the Data Lakehouse, a marketing concept that evokes clean PowerPoint imagery, and why and how the New Cloud Data Lake will play a very real role in modern enterprise environments.
Originally published at: https://www.eckerson.com/articles/data-lakehouses-hold-water-thanks-to-the-cloud-data-lake
There are a lot of amazing AI features being announced at Google Cloud Next. In order to take full advantage of these, you need to make sure your data is being managed in a secure, centralized way. In this talk, you’ll learn how to set up your lakehouse to get your data ready for downstream workloads. You’ll view a demo involving an architecture of Google Cloud products that includes managing permissions on your data, configuring metadata management, and performing transformations using open source frameworks.
This course provides a comprehensive overview of Databricks' modern approach to data warehousing, highlighting how a data lakehouse architecture combines the strengths of traditional data warehouses with the flexibility and scalability of the cloud. You'll learn about the AI-driven features that enhance data transformation and analysis on the Databricks Data Intelligence Platform, and gain the foundational knowledge needed to begin building and managing high-performance, AI-powered data warehouses on Databricks. The course is aimed at those starting out in data warehousing and anyone who would like to execute data warehousing workloads on Databricks, including practitioners who are familiar with traditional data warehousing techniques and concepts and want to expand their understanding of how those workloads run on Databricks.