talk-data.com


Speaker: Matteo Pelati

3 talks · guest


Talks & appearances

3 activities · Newest first

Matteo Pelati: Challenges of Building Blazing Fast Data APIs

Join Matteo Pelati as he delves into the world of blazing fast Data APIs, sharing his extensive experience in overcoming the challenges of crafting efficient, customer-facing data interfaces. Discover valuable insights and leaner approaches, including the use of cutting-edge tools like Rust, in this enlightening session. #DataAPIs #Efficiency

Panel Discussion | Data Mesh: Challenges and Opportunities

"Join experts Amy Raygada, Arne Laponin, Maciej Marek, Matteo Pelati, and Peter Farkas in a captivating Panel Discussion on the Challenges and Opportunities of Data Mesh. ๐ŸŒ๐Ÿค” Gain valuable insights and perspectives on this transformative approach to data architecture, and explore its potential in the evolving data landscape. ๐Ÿ—ฃ๏ธ๐Ÿ“Š #DataMesh #paneldiscussion

HIGHLIGHTS

A huge shoutout to all the incredible participants who made Big Data Conference Europe 2023 in Vilnius, Lithuania, from November 21-24, an absolute triumph! Your attendance and active participation were instrumental in making this event so special.

Don't forget to check out the session recordings from the conference to relive the valuable insights and knowledge shared!

Once again, THANK YOU for playing a pivotal role in the success of Big Data Conference Europe 2023. See you next year for another unforgettable conference! #BigDataConference #SeeYouNextYear

Summary

Real-time data processing has steadily been gaining adoption due to advances in the accessibility of the technologies involved. Despite that, it is still a complex set of capabilities. To bring streaming data within reach of application engineers, Matteo Pelati helped create Dozer. In this episode he explains how investing in high-performance, operationally simplified streaming with a familiar API can yield significant benefits for software and data teams alike.
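
To make the idea of a generated, customer-facing data API more concrete, here is a minimal Python sketch of a client calling such an endpoint over HTTP. It is illustrative only: the base URL, route, filter syntax, and response shape are assumptions, not Dozer's documented interface.

    # Hypothetical client for an auto-generated, low-latency data API.
    # The URL, route, and payload shape below are illustrative assumptions.
    import requests

    API_URL = "http://localhost:8080/analytics/trips/query"  # assumed local endpoint

    def query_trips(city: str, limit: int = 10) -> list[dict]:
        """Fetch pre-aggregated, real-time records for a city."""
        payload = {
            "filter": {"city": city},  # hypothetical filter syntax
            "limit": limit,
        }
        response = requests.post(API_URL, json=payload, timeout=5)
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        for record in query_trips("Vilnius"):
            print(record)

In a design like the one described above, the heavy lifting of ingestion, transformation, and cache maintenance happens server-side, so the application only issues simple, low-latency queries like this one.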

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack

Modern data teams are using Hex to 10x their data impact. Hex combines a notebook-style UI with an interactive report builder, which lets data teams both dive deep to find insights and share their work in an easy-to-read format with the whole org. In Hex you can use SQL, Python, R, and no-code visualization together to explore, transform, and model data. Hex also has AI built directly into the workflow to help you generate, edit, explain, and document your code. The best data teams in the world, such as the ones at Notion, AngelList, and Anthropic, use Hex for ad hoc investigations, creating machine learning models, and building operational dashboards for the rest of their company. Hex makes it easy for data analysts and data scientists to collaborate and produce work that has an impact. Make your data team unstoppable with Hex. Sign up today at dataengineeringpodcast.com/hex to get a 30-day free trial for your team!

Your host is Tobias Macey, and today I'm interviewing Matteo Pelati about Dozer, an open source engine that includes data ingestion, transformation, and API generation for real-time sources.

Interview

Introduction

How did you get involved in the area of data management?

Can you describe what Dozer is and the story behind it?

What was your decision process for building Dozer as open source?

As you note in the documentation, Dozer has overlap with a number of technologies that are aimed at different use cases. What was missing from each of them, and from the center of their Venn diagram, that prompted you to build Dozer?

In addition to working in an interesting technological cross-section, you are also targeting a disparate group of personas. Who are you building Dozer for, and what were the motivations for that vision?

What are the different use cases that you are focused on supporting?

What are the features of Dozer that enable engineers to address those uses, and what makes it preferable to existing alternative approaches?

Can you describe how Dozer is implemented?

How have the design and goals of the platform changed since you first started working on it?

What are the architectural "-ilities" that you are trying to optimize for?

What is involved in getting Dozer deployed and integrated into an existing application/data infrastructure?

How can teams who are using Dozer extend and integrate with it?

What does the development/deployment workflow look like for teams who are building on top of Dozer?

What is your governance model for Dozer, and how do you balance the open source project against your business goals?

What are the most interesting, innovative, or unexpected ways that you have seen Dozer used?

What are the most interesting, unexpected, or challenging lessons that you have learned while working on Dozer?

When is Dozer the wrong choice?

What do you have planned for the future of Dozer?

Contact Info

LinkedIn

@pelatimtt on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?