talk-data.com


Showing 3 results

Activities & events

Title & Speakers | Event
Apache Iceberg: The Definitive Guide
Tomer Shiran – author, Jason Hughes – author, Alex Merced – author

Traditional data architecture patterns are severely limited. To use these patterns, you have to ETL data into each tool, a cost-prohibitive process for making warehouse features available to all of your data. The lack of flexibility with these patterns requires you to lock into a set of priority tools and formats, which creates data silos and data drift. This practical book shows you a better way. Apache Iceberg provides the capabilities, performance, scalability, and savings that fulfill the promise of an open data lakehouse. By following the lessons in this book, you'll be able to achieve interactive, batch, machine learning, and streaming analytics with this high-performance open source format. Authors Tomer Shiran, Jason Hughes, and Alex Merced from Dremio show you how to get started with Iceberg.

With this book, you'll learn:
- The architecture of Apache Iceberg tables
- What happens under the hood when you perform operations on Iceberg tables
- How to further optimize Iceberg tables for maximum performance
- How to use Iceberg with popular data engines such as Apache Spark, Apache Flink, and Dremio

Discover why Apache Iceberg is a foundational technology for implementing an open data lakehouse.

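For a quick hands-on taste of the topics the book covers, here is a minimal, illustrative PySpark sketch (not taken from the book) that creates and queries an Iceberg table. The catalog name "local", the warehouse path, and the runtime package version are assumptions you would adjust for your own Spark environment.

```python
# Minimal sketch: create and query an Apache Iceberg table from PySpark.
# The package version, catalog name, and warehouse path below are assumptions.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-sketch")
    # Pull in the Iceberg Spark runtime (version is an assumption; match your Spark/Scala build).
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0")
    # Register a Hadoop-type Iceberg catalog backed by a local warehouse directory.
    .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.local.type", "hadoop")
    .config("spark.sql.catalog.local.warehouse", "file:///tmp/iceberg-warehouse")
    .getOrCreate()
)

# Create an Iceberg table, insert a few rows, and read them back.
spark.sql("CREATE TABLE IF NOT EXISTS local.db.events (id BIGINT, name STRING) USING iceberg")
spark.sql("INSERT INTO local.db.events VALUES (1, 'ingest'), (2, 'transform')")
spark.sql("SELECT * FROM local.db.events ORDER BY id").show()

# Iceberg tracks every commit as a snapshot; the snapshots metadata table exposes that history.
spark.sql("SELECT snapshot_id, operation FROM local.db.events.snapshots").show()
```

The final query reads Iceberg's snapshots metadata table; that snapshot history is what underpins features such as time travel and rollback.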
data data-engineering storage-repositories data-lake apache-iceberg AI/ML Analytics Flink Data Lakehouse Dremio ETL/ELT Iceberg Spark Data Streaming
O'Reilly Data Engineering Books
Jason Hughes – guest @ Dremio, Tobias Macey – host

Summary
The "data lakehouse" architecture balances the scalability and flexibility of data lakes with the ease of use and transaction support of data warehouses. Dremio is one of the companies leading the development of products and services that support the open lakehouse. In this episode Jason Hughes explains what it means for a lakehouse to be "open" and describes the different components that the Dremio team builds and contributes to.

Announcements

- Hello and welcome to the Data Engineering Podcast, the show about modern data management.
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production-ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high-throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show!
- You wake up to a Slack message from your CEO, who’s upset because the company’s revenue dashboard is broken. You’re told to fix it before this morning’s board meeting, which is just minutes away. Enter Metaplane, the industry’s only self-serve data observability tool. In just a few clicks, you identify the issue’s root cause, conduct an impact analysis, and save the day. Data leaders at Imperfect Foods, Drift, and Vendr love Metaplane because it helps them catch, investigate, and fix data quality issues before their stakeholders ever notice they exist. Setup takes 30 minutes. You can literally get up and running with Metaplane by the end of this podcast. Sign up for a free-forever plan at dataengineeringpodcast.com/metaplane, or try out their most advanced features with a 14-day free trial. Mention the podcast to get a free "In Data We Trust World Tour" t-shirt.
- RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their state-of-the-art reverse ETL pipelines enable you to send enriched data to any cloud tool. Sign up free… or just get the free t-shirt for being a listener of the Data Engineering Podcast at dataengineeringpodcast.com/rudder.
- Data teams are increasingly under pressure to deliver. According to a recent survey by Ascend.io, 95% in fact reported being at or over capacity. With 72% of data experts reporting demands on their team going up faster than they can hire, it’s no surprise they are increasingly turning to automation. In fact, while only 3.5% report having current investments in automation, 85% of data teams plan on investing in automation in the next 12 months. 85%!!! That’s where our friends at Ascend.io come in. The Ascend Data Automation Cloud provides a unified platform for data ingestion, transformation, orchestration, and observability. Ascend users love its declarative pipelines, powerful SDK, elegant UI, and extensible plug-in architecture, as well as its support for Python, SQL, Scala, and Java. Ascend automates workloads on Snowflake, Databricks, BigQuery, and open source Spark, and can be deployed in AWS, Azure, or GCP. Go to dataengineeringpodcast.com/ascend and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $5,000 when you become a customer.
- Your host is Tobias Macey and today I’m interviewing Jason Hughes about the work that Dremio is doing to support the open lakehouse.

Interview

- Introduction
- How did you get involved in the area of data management?
- Can you d

AWS Azure BigQuery CDP Cloud Computing Dashboard Data Engineering Data Lake Data Lakehouse Data Management Data Quality Databricks Dremio ETL/ELT GCP Java Kubernetes MongoDB MySQL PostgreSQL Python Scala Snowflake Spark SQL Data Streaming
Data Engineering Podcast
Beautiful Data 2009-07-21
Jeff Hammerbacher – author, Toby Segaran – author

In this insightful book, you'll learn from the best data practitioners in the field just how wide-ranging -- and beautiful -- working with data can be. Join 39 contributors as they explain how they developed simple and elegant solutions on projects ranging from the Mars lander to a Radiohead video.

With Beautiful Data, you will:
- Explore the opportunities and challenges involved in working with the vast number of datasets made available by the Web
- Learn how to visualize trends in urban crime, using maps and data mashups
- Discover the challenges of designing a data processing system that works within the constraints of space travel
- Learn how crowdsourcing and transparency have combined to advance the state of drug research
- Understand how new data can automatically trigger alerts when it matches or overlaps pre-existing data
- Learn about the massive infrastructure required to create, capture, and process DNA data

That's only a small sample of what you'll find in Beautiful Data. For anyone who handles data, this is a truly fascinating book.

Contributors include: Nathan Yau; Jonathan Follett and Matt Holm; J.M. Hughes; Raghu Ramakrishnan, Brian Cooper, and Utkarsh Srivastava; Jeff Hammerbacher; Jason Dykes and Jo Wood; Jeff Jonas and Lisa Sokol; Jud Valeski; Alon Halevy and Jayant Madhavan; Aaron Koblin with Valdean Klump; Michal Migurski; Jeff Heer; Coco Krumme; Peter Norvig; Matt Wood and Ben Blackburne; Jean-Claude Bradley, Rajarshi Guha, Andrew Lang, Pierre Lindenbaum, Cameron Neylon, Antony Williams, and Egon Willighagen; Lukas Biewald and Brendan O'Connor; Hadley Wickham, Deborah Swayne, and David Poole; Andrew Gelman, Jonathan P. Kastellec, and Yair Ghitza; Toby Segaran

data data-engineering
O'Reilly Data Engineering Books