Let’s kick things off for another Meetup, this time focusing on the collaboration of data scientists and data engineers, as well as data streaming in the VW environment. Join us on October 30th in Berlin and bring all your questions!

Tom Kaltofen: "What Data Scientists Actually Need from Data Engineers: A ‘Data Producer’ Perspective"

Tom Kaltofen is an Engineer at DHL Data & AI and a Creator at mloda.ai. In his keynote, he'll explore how data engineers can better support data scientists, BI, software engineers, analysts and management by understanding their real needs and designing data products accordingly. He’ll share practical lessons from his own industry experience: what worked, what didn’t, and the trade-offs involved in real-world data workflows. Since data engineering often involves navigating competing approaches, we’ll also look at some of the pros and cons of different methods, but always with the different data user groups in mind.
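
As a concrete illustration of the "data producer" mindset, here is a minimal, hypothetical Python sketch (not taken from the talk): the producer publishes an explicit, validated schema so downstream data scientists, BI users, and analysts know exactly what they can rely on.

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical data-product contract: names and fields are invented for
    # illustration; the point is that the producer owns schema and validation.
    @dataclass(frozen=True)
    class ShipmentRecord:
        shipment_id: str
        dispatched_on: date
        weight_kg: float
        destination_country: str  # ISO 3166-1 alpha-2 code

        def validate(self) -> None:
            # Fail fast on the producer side instead of surprising consumers.
            if self.weight_kg <= 0:
                raise ValueError(f"{self.shipment_id}: weight must be positive")
            if len(self.destination_country) != 2:
                raise ValueError(f"{self.shipment_id}: bad country code")

    record = ShipmentRecord("S-001", date(2025, 10, 1), 12.5, "DE")
    record.validate()  # consumers can assume every published record passed this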

Alex Kalinnikov: "Event-driven data streaming platform at VW Group"

Alex Kalinnikov is a Product Owner at CARIAD with over 10 years of experience in IT & Infrastructure. He will talk about how CARIAD handles 180M telemetry messages per day with a modern data streaming architecture, and how the CARIAD UDE Solution leverages Confluent Kafka, Apache Flink, and Microsoft Azure to move terabytes of IoT data.
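
For readers unfamiliar with the building blocks named here, the following is a minimal sketch of the consuming side of such a pipeline using Confluent's Python Kafka client; the broker address, group id, and topic name are placeholders, not details from the talk.

    from confluent_kafka import Consumer

    # Broker address, group id, and topic are illustrative placeholders,
    # not details of CARIAD's actual deployment.
    consumer = Consumer({
        "bootstrap.servers": "broker:9092",
        "group.id": "telemetry-processors",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["vehicle-telemetry"])

    try:
        while True:
            msg = consumer.poll(timeout=1.0)  # wait up to 1s for the next event
            if msg is None:
                continue
            if msg.error():
                print(f"consumer error: {msg.error()}")
                continue
            payload = msg.value()  # raw bytes of one telemetry message
            # ... deserialize and hand off to stream processing (e.g. Flink) ...
    finally:
        consumer.close()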

What to expect:

  • Two expert talks and Q&A
  • Networking opportunities in our great Creator Space
  • Some snacks & drinks :)

Timetable:

  • 18:30 - Event admission
  • 18:50 - Welcome & Introduction
  • 19:00 - Tom Kaltofen: "What Data Scientists Actually Need from Data Engineers: A ‘Data Producer’ Perspective"
  • 19:30 - 5 minutes break
  • 19:35 - Alex Kalinnikov: "Event-driven data streaming platform at VW Group"
  • 20:05 - Snacks, Drinks & Networking
  • 21:30 - End

More on the applydata data engineering meetup page.

Our goal is to form a local data-loving community, so join us and let's talk data together!

At the event, sound, image, and video recordings are created and published for documentation purposes, as well as for the presentation of the event in publicly accessible media, on websites and blogs, and on social media. By participating in the event, the participant implicitly consents to the aforementioned photo and/or video recordings. Find more information on data protection here.

Data Engineering Meetup | Berlin, Oct 30th
AI needs platform engineers 2025-10-02 · 17:00

The Generative AI boom has led to an explosion of tools, artifacts, libraries, and models - all of which now need to be managed, updated, secured, and scaled. Platform engineering has a huge role to play here: let’s talk about it.

Organizations around the world are suddenly trying to figure out how to use, secure, deploy, and scale a new artifact - genAI models - alongside new and existing cloud platforms. In her role at Broadcom, Tasha Drew led a team adding AI platform capabilities to VMware Cloud Foundation.

Join Tasha as she discusses the challenges platform teams can help with in this rapidly moving and growing ecosystem, and open source tools and platforms teams can use to meet those challenges.
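
To make the "new artifact" concrete: one common pattern is wrapping a model as a versioned network service that the platform team can deploy, secure, and scale like any other workload. A minimal, hypothetical sketch with FastAPI (not from the webinar):

    from fastapi import FastAPI
    from pydantic import BaseModel

    # Illustrative only: a generative model exposed as a versioned service,
    # the kind of artifact platform teams now deploy, secure, and scale.
    app = FastAPI(title="text-generator", version="1.3.0")

    class Prompt(BaseModel):
        text: str
        max_tokens: int = 128

    @app.post("/generate")
    def generate(prompt: Prompt) -> dict:
        # A real service would invoke the loaded model here; we echo instead.
        return {"model_version": app.version, "completion": prompt.text[::-1]}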

After a 45-minute talk there’ll be a 15-minute Q&A, for which we encourage you to submit questions in advance. A webinar recording and related materials will be shared with all attendees after the event.


Speaker: Tasha Drew - Director of Product Engineering, AI @ Broadcom

Tasha Drew is Senior Director at Broadcom, leading the AI and Advanced Services team in the CTO’s office. She focuses on integrating AI into VMware Cloud Foundation, improving developer productivity, and advancing private AI solutions. Tasha is also active in the open-source community, especially in Kubernetes.

AI needs platform engineers
Effie Baram – leader in foundational data engineering @ Two Sigma, Tobias Macey – host

Summary In this episode of the Data Engineering Podcast Effie Baram, a leader in foundational data engineering at Two Sigma, talks about the complexities and innovations in data engineering within the finance sector. She discusses the critical role of data at Two Sigma, balancing data quality with delivery speed, and the socio-technical challenges of building a foundational data platform that supports research and operational needs while maintaining regulatory compliance and data quality. Effie also shares insights into treating data as code, leveraging modern data warehouses, and the evolving role of data engineers in a rapidly changing technological landscape.
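
One way to picture "treating data as code", as a generic sketch rather than Two Sigma's actual setup: the dataset's refresh logic lives in version control as an orchestrated workflow (Airflow is among the episode's links), so changes are reviewed and tested like any other software.

    from datetime import datetime

    from airflow.decorators import dag, task

    # Generic sketch of "data as code" (not Two Sigma's setup): the dataset's
    # refresh logic is a reviewed, versioned workflow definition.
    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def foundational_prices():

        @task
        def extract() -> list[dict]:
            # Stand-in for pulling a vendor data feed.
            return [{"ticker": "ABC", "close": 101.5}]

        @task
        def load(rows: list[dict]) -> None:
            print(f"writing {len(rows)} rows to the warehouse")

        load(extract())

    foundational_prices()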

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.

This episode is brought to you by Coresignal, your go-to source for high-quality public web data to power best-in-class AI products. Instead of spending time collecting, cleaning, and enriching data in-house, use ready-made multi-source B2B data that can be smoothly integrated into your systems via APIs or as datasets. With over 3 billion data records from 15+ online sources, Coresignal delivers high-quality data on companies, employees, and jobs. It is powering decision-making for more than 700 companies across AI, investment, HR tech, sales tech, and market intelligence industries. A founding member of the Ethical Web Data Collection Initiative, Coresignal stands out not only for its data quality but also for its commitment to responsible data collection practices. Recognized as the top data provider by Datarade for two consecutive years, Coresignal is the go-to partner for those who need fresh, accurate, and ethically sourced B2B data at scale. Discover how Coresignal's data can enhance your AI platforms. Visit dataengineeringpodcast.com/coresignal to start your free 14-day trial.

Your host is Tobias Macey and today I'm interviewing Effie Baram about data engineering in the finance sector.

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you start by outlining the role of data in the context of Two Sigma?
  • What are some of the key characteristics of the types of data sources that you work with?
  • Your role is leading "foundational data engineering" at Two Sigma. Can you unpack that title and how it shapes the ways that you think about what you build?
  • How does the concept of "foundational data" influence the ways that the business thinks about the organizational patterns around data?
  • Given the regulatory environment around finance, how does that impact the ways that you think about the "what" and "how" of the data that you deliver to data consumers?
  • Being the foundational team for data use at Two Sigma, how have you approached the design and architecture of your technical systems?
  • How do you think about the boundaries between your responsibilities and the rest of the organization?
  • What are the design patterns that you have found most helpful in empowering data consumers to build on top of your work?
  • What are some of the elements of sociotechnical friction that have been most challenging to address?
  • What are the most interesting, innovative, or unexpected ways that you have seen the ideas around "foundational data" applied in your organization?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working with financial data?
  • When is a foundational data team the wrong approach?
  • What do you have planned for the future of your platform design?

Contact Info

  • LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links

  • Two Sigma
  • Reliability Engineering
  • SLA == Service-Level Agreement
  • Airflow
  • Parquet File Format
  • BigQuery
  • Snowflake
  • dbt
  • Gemini Assist
  • MCP == Model Context Protocol
  • dtrace

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA.

AI/ML API Data Collection Data Engineering Data Management Data Quality Datafold Python
Data Engineering Podcast

2-Day Hands-On Online Workshop: Azure AI Foundry and Copilot Studio Bootcamp

Date: 29-30 May 2025, 9 AM to 5 PM Central Time
Level: Beginners/Intermediate
Registration Link: https://www.eventbrite.com/e/hands-on-azure-ai-foundry-and-copilot-studio-bootcamp-tickets-1267311596099?aff=oddtdtcreator

Who Should Attend?

This hands-on workshop is open to developers, senior software engineers, IT pros, architects, IT managers, citizen developers, technology product managers, IT leaders, enterprise architects, chief analytics officers, chief information officers, chief technology officers, and decision-makers interested in learning how AI agents and generative AI can help infuse artificial intelligence into next-generation apps and agents. Experience with C#, Python, or JavaScript is helpful but not required, and you don't need prior knowledge of AI either. Although this isn't a data & analytics-focused workshop, data scientists, data stewards, and technically minded data protection officers will also find it very valuable.

Description:

With ChatGPT and other large language models, generative AI has captured the attention of global consumers, enterprises, and C-suite executives. AI has a significant role in the enterprise space and is evolving rapidly. Without understanding the concepts behind these advanced technologies, developers and administrators might find it challenging to assess the true impact of emerging tools and solutions.

An AI agent is a powerful companion capable of managing a variety of interactions and tasks—from handling complex conversations to autonomously deciding the best actions based on instructions and context. Agents coordinate language models along with instructions, context, knowledge sources, topics, actions, inputs, and triggers to achieve your desired outcomes.

Copilot Studio is a graphical, low-code tool designed for creating agents, including building automations with Power Automate and extending Microsoft 365 Copilot with your own enterprise data and scenarios. One standout feature of Copilot Studio is its ability to connect to other data sources through either prebuilt or custom plugins, as well as integration with Azure AI Foundry. This flexibility allows users to easily build sophisticated logic, ensuring that agent experiences are both powerful and intuitive.

Azure AI Foundry is a unified AI platform that includes the Azure AI Foundry portal (formerly Azure AI Studio) and the Azure AI Foundry SDK—a unified SDK featuring pre-built app templates. This SDK gives developers easy access to popular models through a single interface, simplifies the integration of Azure AI into applications, and helps evaluate, debug, and improve application quality and safety throughout development, testing, and production.

In this two-day virtual hands-on workshop, Microsoft AI and Business Applications MVP and Microsoft Certified Trainer Prashant G Bhoyar will cover these topics in detail:

  1. What are multimodal GenAI applications?
  2. What are AI agents?
  3. What are autonomous agents?
  4. What are custom Copilots?
  5. Introduction to Copilot Studio: Learn to create agents, build automations with Power Automate, and extend Microsoft 365 Copilot using enterprise data. Discover how to use prebuilt and custom plugins alongside Azure AI Foundry for powerful, intuitive agent experiences.
  6. Azure AI Foundry: An in-depth overview of Azure AI Foundry.
  7. Azure OpenAI Services: Explore these services, their architecture, and their role in the broader AI ecosystem.
  8. Using models from DeepSeek, Llama, Hugging Face, and other open-source models via Azure AI Foundry.
  9. Prompt engineering: An in-depth look at creating effective prompts, understanding their importance, and the factors influencing their performance.
  10. Use cases and common architectures: Hands-on labs demonstrating real-world implementations.
  11. How to evaluate use cases and determine ROI.
  12. Azure OpenAI Service embedding models.
  13. Customizing Azure OpenAI Services: Configuration to deployment tailored to specific business needs.
  14. Deep dive into Azure OpenAI Services: A detailed look at popular models like o1, GPT, Ada, and DALL-E, discussing their unique features and ideal use cases.
  15. Using Azure OpenAI Service to access company data.
  16. Azure AI Services overview: Language, Speech, and Vision services and their real-world applications.
  17. Conversational AI: Design, train, and refine AI capable of human-like interactions.
  18. Azure AI Search: Creating advanced search experiences.
  19. Document Intelligence Service: Extracting key-value pairs and table data from documents using machine learning.
  20. Azure AI Agent Service: Feature-rich managed capabilities combining models, data, tools, and services for automating complex business processes.
  21. Semantic Kernel: An open-source SDK for combining AI services (OpenAI, Azure OpenAI, Hugging Face) with programming languages like C# and Python to create advanced AI applications.
  22. Model Context Protocol (MCP)
  23. Responsible AI: Ethics and responsible practices in AI use.
  24. Enterprise-level applications, Custom Copilots, and AI agents: Learn to develop scalable, multimodal applications using Copilot Studio and Azure AI Foundry, emphasizing industry requirements and best practices.
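
As a taste of the hands-on labs, here is a minimal sketch of calling a chat model through the Azure OpenAI service with the official Python SDK; the endpoint, API version, and deployment name are placeholders you would replace with your own.

    import os

    from openai import AzureOpenAI

    # Endpoint, API version, and deployment name are placeholders; substitute
    # the values from your own Azure AI Foundry / Azure OpenAI resource.
    client = AzureOpenAI(
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
        azure_endpoint="https://my-resource.openai.azure.com",
    )

    response = client.chat.completions.create(
        model="gpt-4o-deployment",  # the name of *your* deployment
        messages=[
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "Explain what an AI agent is in one sentence."},
        ],
    )
    print(response.choices[0].message.content)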

By the end of the workshop, you'll have practical experience building next-generation multimodal applications, Custom Copilots, and AI Agents using Copilot Studio and Azure AI Foundry.

Workshop Resources: Access to Copilot Studio, Azure, and Azure OpenAI services (valued at USD 500) will be provided for hands-on labs, allowing you to build enterprise-grade multimodal applications and agents. However, you're encouraged to use your own Copilot Studio and Azure subscriptions if available.

Attendee Workstation Requirements: You must bring your own computer (Windows or Mac) with:

  • Camera, speakers, microphone, and a reliable internet connection. Tablets will not work for this workshop.
  • A modern browser (Microsoft Edge, Google Chrome, Firefox, or Safari).
  • Access to www.azure.com and https://copilotstudio.microsoft.com.
  • Nice to have: the ability to run C# 10 or Python code using Visual Studio 2022, VS Code 1.66+, Visual Studio for Mac, Rider, or a similar IDE.
2-Day Hands-on Online Workshop: Azure AI Foundry and Copilot Studio Bootcamp
Gergely Orosz – host, Chip Huyen – computer scientist @ Stanford University

Supported by Our Partners:

  • Swarmia — The engineering intelligence platform for modern software organizations.
  • Graphite — The AI developer productivity platform.
  • Vanta — Automate compliance and simplify security with Vanta.

On today's episode of The Pragmatic Engineer, I'm joined by Chip Huyen, a computer scientist, author of the freshly published O'Reilly book AI Engineering, and an expert in applied machine learning. Chip has worked as a researcher at Netflix, was a core developer at NVIDIA (building NeMo, NVIDIA's GenAI framework), and co-founded Claypot AI. She also taught Machine Learning at Stanford University.

In this conversation, we dive into the evolving field of AI Engineering and explore key insights from Chip's book, including:

  • How AI Engineering differs from Machine Learning Engineering
  • Why fine-tuning is usually not a tactic you'll want (or need) to use
  • The spectrum of solutions to customer support problems – some not even involving AI!
  • The challenges of LLM evals (evaluations)
  • Why project-based learning is valuable—but even better when paired with structured learning
  • Exciting potential use cases for AI in education and entertainment
  • And more!

Timestamps:

  • (00:00) Intro
  • (01:31) A quick overview of AI Engineering
  • (05:00) How Chip ensured her book stays current amidst the rapid advancements in AI
  • (09:50) A definition of AI Engineering and how it differs from Machine Learning Engineering
  • (16:30) Simple first steps in building AI applications
  • (22:53) An explanation of BM25 (retrieval system)
  • (23:43) The problems associated with fine-tuning
  • (27:55) Simple customer support solutions for rolling out AI thoughtfully
  • (33:44) Chip's thoughts on staying focused on the problem
  • (35:19) The challenge in evaluating AI systems
  • (38:18) Use cases in evaluating AI
  • (41:24) The importance of prioritizing users' needs and experience
  • (46:24) Common mistakes made with Gen AI
  • (52:12) A case for systematic problem solving
  • (53:13) Project-based learning vs. structured learning
  • (58:32) Why AI is not the end of engineering
  • (1:03:11) How AI is helping education and the future use cases we might see
  • (1:07:13) Rapid fire round

The Pragmatic Engineer deepdives relevant for this episode:

  • Applied AI Software Engineering: RAG — https://newsletter.pragmaticengineer.com/p/rag
  • How do AI software engineering agents work? — https://newsletter.pragmaticengineer.com/p/ai-coding-agents
  • AI Tooling for Software Engineers in 2024: Reality Check — https://newsletter.pragmaticengineer.com/p/ai-tooling-2024
  • IDEs with GenAI features that Software Engineers love — https://newsletter.pragmaticengineer.com/p/ide-that-software-engineers-love

See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
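
For context on the BM25 retrieval system mentioned at (22:53), here is a minimal sketch using the rank_bm25 Python package; the corpus and query are invented examples.

    from rank_bm25 import BM25Okapi

    # Invented corpus and query; BM25 ranks documents by lexical relevance,
    # often a strong retrieval baseline before reaching for embeddings.
    corpus = [
        "reset your password from the account settings page",
        "invoices are emailed at the start of each billing cycle",
        "contact support to change the email on your account",
    ]
    bm25 = BM25Okapi([doc.split() for doc in corpus])

    query = "change account email".split()
    print(bm25.get_scores(query))              # one relevance score per document
    print(bm25.get_top_n(query, corpus, n=1))  # best-matching document text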

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

AI/ML GenAI LLM Marketing RAG Cyber Security
The Pragmatic Engineer
Max Beauchemin – Founder & CEO @ Preset, Tobias Macey – host

Summary In this episode of the Data Engineering Podcast the inimitable Max Beauchemin talks about reusability in data pipelines. The conversation explores the "write everything twice" problem, where similar pipelines are built without code reuse, and discusses the challenges of managing different SQL dialects and relational databases. Max also touches on the evolving role of data engineers, drawing parallels with front-end engineering, and suggests that generative AI could facilitate knowledge capture and distribution in data engineering. He encourages the community to share reference implementations and templates to foster collaboration and innovation, and expresses hopes for a future where code reuse becomes more prevalent.
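
A toy sketch of the kind of reuse discussed here (invented example, not Max's code): one parameterized transformation rendered against different sources, instead of two hand-copied pipelines.

    # Invented example: a single parameterized component replaces two
    # near-identical, hand-copied pipeline definitions.
    def weekly_active_query(events_table: str, entity_col: str, ts_col: str) -> str:
        """Render the same weekly-activity transform for any event source."""
        return (
            f"SELECT DATE_TRUNC('week', {ts_col}) AS week,\n"
            f"       COUNT(DISTINCT {entity_col}) AS active_entities\n"
            f"FROM {events_table}\n"
            f"GROUP BY 1"
        )

    # The logic is written once and reused across otherwise duplicated pipelines:
    print(weekly_active_query("app_events", "user_id", "event_ts"))
    print(weekly_active_query("device_pings", "device_id", "pinged_at"))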

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.

Your host is Tobias Macey and today I'm joined again by Max Beauchemin to talk about the challenges of reusability in data pipelines.

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you start by sharing your current thesis on the opportunities and shortcomings of code and component reusability in the data context?
  • What are some ways that you think about what constitutes a "component" in this context?
  • The data ecosystem has arguably grown more varied and nuanced in recent years. At the same time, the number and maturity of tools has grown. What is your view on the current trend in productivity for data teams and practitioners?
  • What do you see as the core impediments to building more reusable and general-purpose solutions in data engineering?
  • How can we balance the actual needs of data consumers against their requests (whether well- or un-informed) to help increase our ability to better design our workflows for reuse?
  • In data engineering there are two broad approaches: code-focused or SQL-focused pipelines. In principle one would think that code-focused environments would have better composability. What are you seeing as the realities in your personal experience and what you hear from other teams?
  • When it comes to SQL dialects, dbt offers the option of Jinja macros, whereas SDF and SQLMesh offer automatic translation. There are also tools like PRQL and Malloy that aim to abstract away the underlying SQL. What are the tradeoffs across those options that help or hinder the portability of transformation logic?
  • Which layers of the data stack/steps in the data journey do you see the greatest opportunity for improving the creation of more broadly usable abstractions/reusable elements?
  • Low/no-code systems for code reuse
  • Impact of LLMs on reusability/composition
  • Impact of background on industry practices (e.g. DBAs, sysadmins, analysts vs. SWE, etc.)
  • Polymorphic data models (e.g. activity schema)
  • What are the most interesting, innovative, or unexpected ways that you have seen teams address composability and reusability of data components?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on data-oriented tools and utilities?
  • What are your hopes and predictions for sharing of code and logic in the future of data engineering?

Contact Info

  • LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links

  • Max's Blog Post
  • Airflow
  • Superset
  • Tableau
  • Looker
  • PowerBI
  • Cohort Analysis
  • NextJS
  • Airbyte (Podcast Episode)
  • Fivetran (Podcast Episode)
  • Segment
  • dbt
  • SQLMesh (Podcast Episode)
  • Spark
  • LAMP Stack
  • PHP
  • Relational Algebra
  • Knowledge Graph
  • Python Marshmallow
  • Data Warehouse Lifecycle Toolkit (affiliate link)
  • Entity Centric Data Modeling Blog Post
  • Amplitude
  • OSACon presentation
  • ol-data-platform — Tobias' team's data platform code

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA.

Activity Schema AI/ML Data Engineering Data Management Data Modelling Datafold dbt GenAI LLM Python RDBMS SQL SQLMesh
Data Engineering Podcast

Join us for an exciting meetup where we dive into the latest innovations in data infrastructure and AI! Discover how industry experts from H&M have empowered data teams with self-serve infrastructure, reusable components, and automation to boost efficiency. Plus, learn from Kyndryl’s Olga Arvidson on overcoming the challenges of scaling GenAI. This is the perfect opportunity to network with fellow data professionals, gain insights, and explore cutting-edge solutions that drive business success. Don’t miss out—sign up now for this free event, with food and drinks provided by Kyndryl!

Agenda:

  • 17:30 - 18:00: Doors open
  • 18:00 - 18:10: Welcome
  • 18:10 - 18:40: Empowering Data Teams: Self-Serve Infrastructure, Reusable Components & Automation
  • 18:40 - 19:10: Break
  • 19:10 - 19:40: Struggling to scale GenAI? You are not alone!
  • 19:40 - 20:30: Networking

Presentations:

Empowering Data Teams: Self-Serve Infrastructure, Reusable Components & Automation
Mohinuddin Salahuddin & Rashidul Islam, H&M

In this session, we will explore how our organization has revolutionized its data landscape by implementing self-serve data infrastructure, automating processes using CI/CD pipelines, and building reusable components. Discover how these innovations have empowered our data teams to work more efficiently and independently, reducing bottlenecks and accelerating development cycles. We’ll delve into the tools and strategies that have enabled us to create a scalable, reliable, and agile data environment, ensuring high-quality data delivery and continuous improvement. Join us to learn how you can leverage these approaches to transform your own data operations and drive business success.

Speaker Bio: Mohinuddin Salahuddin is a tech professional with over 18 years of software development experience and 8 years in data engineering. He has successfully delivered solutions across various industries, blending technical expertise with a strong focus on business needs. Outside of work, he enjoys movies and spending time with his family, which fuels his creativity and passion for technology.

Rashidul Islam is a Product Manager at H&M building the next-generation data platform to leverage data for AI and analytics. His team is building a platform that enables other teams to harness data from different sources and make it AI- and analytics-friendly. He is passionate about making life easier for data engineers and analysts by providing an improved developer experience.

Struggling to scale GenAI? You are not alone!
Olga Arvidson - Customer Partner, Kyndryl

In this session, Olga will give a brief history of why GPT became big, and why there is still a lot that needs to be done to get it adopted (with a bonus).

Speaker Bio: Olga Arvidson is a Customer Partner at Kyndryl, where she excels in driving customer success and fostering strong partnerships. Olga has worked with the biggest hyperscalers (Microsoft and Amazon Web Services) and has been instrumental in helping clients navigate their digital transformation journeys, covering the strategy-to-implementation life cycle across a wide spread of industries. She leads the retail segment in the region and focuses on how data can shape strategy.

About the event:

  • Tickets: Sign-up required. Anyone who is not on the list will not get in. The event is free of charge.
  • Capacity: Space is limited. If you are signed up but unable to attend, please change your RSVP 2 days before the event.
  • Food and drinks: Food and drinks are sponsored by Kyndryl.
  • Questions: Please contact the meetup organizers.

Code of Conduct

The NumFOCUS Code of Conduct applies to this event; please familiarize yourself with it before attending. If you have any questions or concerns regarding the Code of Conduct, please contact the organizers.

Empowering Data Teams & Scaling GenAI: A Meetup for Innovators
Event: Big Data LDN 2024 · 2024-09-19
Guy Adams – CTO & Co-Founder - DataOps.live

Snowflake had a big challenge: How do you enable a team of 1,000 sales engineers and field CTOs to successfully deploy over 100 new data products per week and demonstrate every feature and capability in the Snowflake AI Data Cloud tailored to different customer needs?

In this session, Andrew Helgeson, Manager of Technology Platform Alliances at Snowflake, and Guy Adams, CTO at DataOps.live, will explain how Snowflake builds and deploys hundreds of data products using DataOps.live. Join us for a deep dive into Snowflake's innovative approach to automating complex data product deployment — and to learn how Snowflake Solutions Central revolutionizes solution discovery and deployment to drive customer success.

AI/ML Cloud Computing DataOps Snowflake
Peter Voss – guest @ Aigo, Tobias Macey – host

Summary

Artificial intelligence has dominated the headlines for several months due to the successes of large language models. This has prompted numerous debates about the possibility of, and timeline for, artificial general intelligence (AGI). Peter Voss has dedicated decades of his life to the pursuit of truly intelligent software through the approach of cognitive AI. In this episode he explains his approach to building AI in a more human-like fashion and the emphasis on learning rather than statistical prediction.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!

Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

Your host is Tobias Macey and today I'm interviewing Peter Voss about what is involved in making your AI applications more "human".

Interview

  • Introduction
  • How did you get involved in machine learning?
  • Can you start by unpacking the idea of "human-like" AI? How does that contrast with the conception of "AGI"?
  • The applications and limitations of GPT/LLM models have been dominating the popular conversation around AI. How do you see that impacting the overall ecosystem of ML/AI applications and investment?
  • The fundamental/foundational challenge of every AI use case is sourcing appropriate data. What are the strategies that you have found useful to acquire, evaluate, and prepare data at an appropriate scale to build high quality models?
  • What are the opportunities and limitations of causal modeling techniques for generalized AI models?
  • As AI systems gain more sophistication there is a challenge with establishing and maintaining trust. What are the risks involved in deploying more human-level AI systems and monitoring their reliability?
  • What are the practical/architectural methods necessary to build more cognitive AI systems?
  • How would you characterize the ecosystem of tools/frameworks available for creating, evolving, and maintaining these applications?
  • What are the most interesting, innovative, or unexpected ways that you have seen cognitive AI applied?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on designing/developing cognitive AI systems?
  • When is cognitive AI the wrong choice?
  • What do you have planned for the future of cognitive AI applications at Aigo?

Contact Info

  • LinkedIn
  • Website

Parting Question

From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links

  • Aigo.ai
  • Artificial General Intelligence
  • Cognitive AI
  • Knowledge Graph
  • Causal Modeling
  • Bayesian Statistics
  • Thinking Fast & Slow by Daniel Kahneman (affiliate link)
  • Agent-Based Modeling
  • Reinforcement Learning
  • DARPA 3 Waves of AI presentation
  • Why Don't We Have AGI Yet? whitepaper
  • Concepts Is All You Need whitepaper
  • Helen Keller
  • Stephen Hawking

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0.

AI/ML Analytics Cloud Computing Dagster Data Engineering Data Lake Data Lakehouse Delta Hudi Iceberg LLM Python Cyber Security SQL Trino
Tsavo Knott – Founder / Creator @ Pieces, Tobias Macey – host

Summary Generative AI promises to accelerate the productivity of human collaborators. Currently the primary way of working with these tools is through a conversational prompt, which is often cumbersome and unwieldy. In order to simplify the integration of AI capabilities into developer workflows Tsavo Knott helped create Pieces, a powerful collection of tools that complements the tools that developers already use. In this episode he explains the data collection and preparation process, the collection of model types and sizes that work together to power the experience, and how to incorporate it into your workflow to act as a second brain.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!

Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

Your host is Tobias Macey and today I'm interviewing Tsavo Knott about Pieces, a personal AI toolkit to improve the efficiency of developers.

Interview

  • Introduction
  • How did you get involved in machine learning?
  • Can you describe what Pieces is and the story behind it?
  • The past few months have seen an endless series of personalized AI tools launched. What are the features and focus of Pieces that might encourage someone to use it over the alternatives?
  • Model selections
  • Architecture of the Pieces application
  • Local vs. hybrid vs. online models
  • Model update/delivery process
  • Data preparation/serving for models in the context of the Pieces app
  • Application of AI to developer workflows
  • Types of workflows that people are building with Pieces
  • What are the most interesting, innovative, or unexpected ways that you have seen Pieces used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on Pieces?
  • When is Pieces the wrong choice?
  • What do you have planned for the future of Pieces?

Contact Info

  • LinkedIn

Parting Question

From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links

  • Pieces
  • NPU == Neural Processing Unit
  • Tensor Chip
  • LoRA == Low Rank Adaptation
  • Generative Adversarial Networks
  • Mistral
  • Emacs
  • Vim
  • NeoVim
  • Dart
  • Flutter

AI/ML Analytics Cloud Computing Dagster Data Collection Data Engineering Data Lake Data Lakehouse Delta GenAI Hudi Iceberg Python Cyber Security SQL Trino
Andrew Lee – guest @ Shortwave, Tobias Macey – host

Summary

Generative AI has rapidly transformed everything in the technology sector. When Andrew Lee started work on Shortwave he was focused on making email more productive. When AI started gaining adoption he realized that he had even more potential for a transformative experience. In this episode he shares the technical challenges that he and his team have overcome in integrating AI into their product, as well as the benefits and features that it provides to their customers.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!

Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

Your host is Tobias Macey and today I'm interviewing Andrew Lee about his work on Shortwave, an AI powered email client.

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you describe what Shortwave is and the story behind it?
  • What is the core problem that you are addressing with Shortwave?
  • Email has been a central part of communication and business productivity for decades now. What are the overall themes that continue to be problematic?
  • What are the strengths that email maintains as a protocol and ecosystem?
  • From a product perspective, what are the data challenges that are posed by email?
  • Can you describe how you have architected the Shortwave platform?
  • How have the design and goals of the product changed since you started it?
  • What are the ways that the advent and evolution of language models have influenced your product roadmap?
  • How do you manage the personalization of the AI functionality in your system for each user/team?
  • For users and teams who are using Shortwave, how does it change their workflow and communication patterns?
  • Can you describe how I would use Shortwave for managing the workflow of evaluating, planning, and promoting my podcast episodes?
  • What are the most interesting, innovative, or unexpected ways that you have seen Shortwave used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on Shortwave?
  • When is Shortwave the wrong choice?
  • What do you have planned for the future of Shortwave?

Contact Info

  • LinkedIn
  • Blog

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.

AI/ML Analytics Cloud Computing Dagster Data Engineering Data Lake Data Lakehouse Data Management Delta GenAI Hudi Iceberg Python Cyber Security SQL Trino
Oren Eini – CEO and creator @ RavenDB, Tobias Macey – host

Summary

Databases come in a variety of formats for different use cases. The default association with the term "database" is relational engines, but non-relational engines are also used quite widely. In this episode Oren Eini, CEO and creator of RavenDB, explores the nuances of relational vs. non-relational engines, and the strategies for designing a non-relational database.
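
To ground the relational vs. non-relational distinction, a small illustration in plain Python (not RavenDB's client API): the same order modeled as one self-contained document versus rows spread across normalized tables.

    # Plain-Python illustration (not RavenDB's client API) of the modeling
    # difference the episode explores.

    # Document model: the whole order aggregate lives and travels together.
    order_document = {
        "id": "orders/42",
        "customer": {"name": "Ada", "email": "ada@example.com"},
        "lines": [
            {"sku": "KB-01", "qty": 2, "price": 49.0},
            {"sku": "MS-07", "qty": 1, "price": 25.0},
        ],
    }

    # Relational model: the same facts normalized into joinable tables.
    orders = [(42, "Ada", "ada@example.com")]
    order_lines = [(42, "KB-01", 2, 49.0), (42, "MS-07", 1, 25.0)]

    # With documents, the aggregate is read and written as one unit:
    total = sum(line["qty"] * line["price"] for line in order_document["lines"])
    print(total)  # 123.0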

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

This episode is brought to you by Datafold – a testing automation platform for data engineers that prevents data quality issues from entering every part of your data workflow, from migration to dbt deployment. Datafold has recently launched data replication testing, providing ongoing validation for source-to-target replication. Leverage Datafold's fast cross-database data diffing and Monitoring to test your replication pipelines automatically and continuously. Validate consistency between source and target at any scale, and receive alerts about any discrepancies. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold.

Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!

Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

Your host is Tobias Macey and today I'm interviewing Oren Eini about the work of designing and building a NoSQL database engine.

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you describe what constitutes a NoSQL database?
  • How have the requirements and applications of NoSQL engines changed since they first became popular ~15 years ago?
  • What are the factors that convince teams to use a NoSQL vs. SQL database?
  • NoSQL is a generalized term that encompasses a number of different data models. How does the underlying representation (e.g. document, K/V, graph) change that calculus?
  • How have the evolution in data formats (e.g. N-dimensional vectors, point clouds, etc.) changed the landscape for NoSQL engines?
  • When designing and building a database, what are the initial set of questions that need to be answered?
  • How many "core capabilities" can you reasonably design around before they conflict with each other?
  • How have you approached the evolution of RavenDB as you add new capabilities and mature the project?
  • What are some of the early decisions that had to be unwound to enable new capabilities?
  • If you were to start from scratch today, what database would you build?
  • What are the most interesting, innovative, or unexpected ways that you have seen RavenDB/NoSQL databases used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on RavenDB?

AI/ML Analytics Cloud Computing Dagster Data Engineering Data Lake Data Lakehouse Data Management Data Quality Datafold dbt Delta Hudi Iceberg NoSQL Cyber Security SQL Trino

Summary

Maintaining a single source of truth for your data is the biggest challenge in data engineering. Different roles and tasks in the business need their own ways to access and analyze the data in the organization. In order to enable this use case, while maintaining a single point of access, the semantic layer has evolved as a technological solution to the problem. In this episode Artyom Keydunov, creator of Cube, discusses the evolution and applications of the semantic layer as a component of your data platform, and how Cube provides speed and cost optimization for your data consumers.
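
As a rough sketch of what "a single point of access" looks like in practice, here is a hypothetical query against a Cube deployment's REST API; the host, token, and cube/measure names are placeholders, and the endpoint shape assumes Cube's documented /cubejs-api/v1/load interface.

    import json
    import requests

    # Hypothetical query against a Cube deployment's REST API; host, token, and
    # cube/measure names are placeholders for your own deployment.
    CUBE_URL = "https://my-cube-host/cubejs-api/v1/load"
    query = {
        "measures": ["orders.count"],
        "dimensions": ["orders.status"],
        "timeDimensions": [
            {"dimension": "orders.created_at", "granularity": "month"}
        ],
    }

    resp = requests.get(
        CUBE_URL,
        params={"query": json.dumps(query)},
        headers={"Authorization": "MY_CUBE_API_TOKEN"},  # placeholder token
    )
    for row in resp.json()["data"]:
        print(row)  # one row per status/month with the aggregated count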

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

This episode is brought to you by Datafold – a testing automation platform for data engineers that prevents data quality issues from entering every part of your data workflow, from migration to dbt deployment. Datafold has recently launched data replication testing, providing ongoing validation for source-to-target replication. Leverage Datafold's fast cross-database data diffing and Monitoring to test your replication pipelines automatically and continuously. Validate consistency between source and target at any scale, and receive alerts about any discrepancies. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold.

Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!

Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

Your host is Tobias Macey and today I'm interviewing Artyom Keydunov about the role of the semantic layer in your data platform.

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you start by outlining the technical elements of what it means to have a "semantic layer"?
  • In the past couple of years there was a rapid hype cycle around the "metrics layer" and "headless BI", which has largely faded. Can you give your assessment of the current state of the industry around the adoption/implementation of these concepts?
  • What are the benefits of having a discrete service that offers the business metrics/semantic mappings as opposed to implementing those concepts as part of a more general system? (e.g. dbt, BI, warehouse marts, etc.)
  • At what point does it become necessary/beneficial for a team to adopt such a service?
  • What are the challenges involved in retrofitting a semantic layer into a production data system?
  • Evolution of requirements/usage patterns
  • Technical complexities/performance and cost optimization
  • What are the most interesting, innovative, or unexpected ways that you have seen Cube used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on Cube?

AI/ML Analytics BI Cloud Computing Dagster Data Engineering Data Lake Data Lakehouse Data Management Data Quality Datafold dbt Delta Hudi Iceberg Cyber Security SQL Trino

Join us for the upcoming PyData Amsterdam meetup that we host in collaboration with Adyen.

Schedule

  • 18:00–19:00: Walk in with drinks and food (🍕/🍺)
  • 19:00–19:45: Fraud or no Fraud: sounds simple, right?
  • 19:45–20:00: Short break
  • 20:00–20:45: Building GenAI and ML systems with OSS Metaflow
  • 20:45–21:30: Networking + drinks and bites

[Talk 1]: Fraud or no Fraud: sounds simple, right? by Sophie van den Berg

The surge in online payments has brought a surge in fraudsters looking to exploit the system. To combat this, we're leveraging machine learning (ML) models to identify and block fraudulent transactions. While this may seem like a straightforward supervised learning task, there's a key challenge: how do we confirm if a blocked transaction was truly fraudulent? This talk delves into counterfactual evaluation and other obstacles encountered when building an ML model for fraud detection at Adyen.
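
A toy numeric sketch of the counterfactual-evaluation idea in the abstract (invented numbers, not Adyen's method): let a small random "exploration" slice of high-risk transactions through, then reweight their observed outcomes by the inverse of the probability that they were allowed to pass (inverse propensity scoring).

    import numpy as np

    # Invented numbers illustrating counterfactual evaluation via inverse
    # propensity scoring (IPS): blocked transactions never reveal their label,
    # so a small random slice of "would-block" traffic is allowed through and
    # reweighted by how unlikely it was to pass.
    rng = np.random.default_rng(0)
    n = 100_000
    fraud = rng.random(n) < 0.02                   # hidden ground truth
    score = np.clip(fraud * 0.6 + rng.random(n) * 0.5, 0.0, 1.0)

    pass_prob = np.where(score > 0.5, 0.05, 1.0)   # block high scores, explore 5%
    passed = rng.random(n) < pass_prob             # what we actually observe

    # Unbiased estimate of the overall fraud rate using only observed outcomes:
    ips_estimate = np.sum(fraud[passed] / pass_prob[passed]) / n
    print(f"true fraud rate: {fraud.mean():.4f}")
    print(f"IPS estimate:    {ips_estimate:.4f}")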

[Talk 2]: Building GenAI and ML systems with OSS Metaflow by Hugo Bowne-Anderson

This talk explores a framework for how data scientists can deliver value with Generative AI: How can you embed LLMs and foundation models into your pre-existing software stack? How can you do so using Open Source Python? What changes about the production machine learning stack and what remains the same?

We motivate the concepts through generative AI examples in domains such as text-to-image (Stable Diffusion) and text-to-speech (Whisper) applications. Moreover, we’ll demonstrate how workflow orchestration provides a common scaffolding to ensure that your Generative AI and classical Machine Learning workflows alike are robust and ready to move safely into production systems.
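
A minimal, hypothetical Metaflow flow showing the scaffolding the talk refers to: each @step runs as its own task, and the same structure carries classical ML or generative workloads toward production.

    from metaflow import FlowSpec, step

    # Hypothetical flow (not the speaker's code): each @step runs as its own
    # task, so the same scaffolding carries classical ML or generative workloads.
    class GenAIFlow(FlowSpec):

        @step
        def start(self):
            self.prompts = ["a red fox", "a quiet harbor at dawn"]
            self.next(self.generate)

        @step
        def generate(self):
            # Stand-in for a Stable Diffusion or Whisper invocation.
            self.outputs = [f"image for: {p}" for p in self.prompts]
            self.next(self.end)

        @step
        def end(self):
            print(f"produced {len(self.outputs)} artifacts")

    if __name__ == "__main__":
        GenAIFlow()  # run with: python genai_flow.py run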

This talk is aimed squarely at (data) scientists and ML engineers who want to focus on the science, data, and modeling, but want to be able to access all their infrastructural, platform, and software needs with ease!

Combating online payment fraud & putting LLMs in open-source production systems
Maayan Salom – Founder @ Elementary, Tobias Macey – host

Summary

Working with data is a complicated process, with numerous chances for something to go wrong. Identifying and accounting for those errors is a critical piece of building trust in the organization that your data is accurate and up to date. While there are numerous products available to provide that visibility, they all have different technologies and workflows that they focus on. To bring observability to dbt projects the team at Elementary embedded themselves into the workflow. In this episode Maayan Salom explores the approach that she has taken to bring observability, enhanced testing capabilities, and anomaly detection into every step of the dbt developer experience.
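
To make "anomaly detection" concrete, here is a generic sketch of the kind of volume check such tools automate; this is an illustration of the idea, not Elementary's API.

    import statistics

    # Generic illustration (not Elementary's API) of an automated volume check:
    # flag today's row count when it strays far from the recent history.
    history = [10_230, 10_410, 9_980, 10_150, 10_320, 10_290, 10_180]
    today = 6_450

    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z_score = (today - mean) / stdev

    if abs(z_score) > 3:
        print(f"volume anomaly: {today} rows is {z_score:.1f} std devs from normal")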

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!

This episode is brought to you by Datafold – a testing automation platform for data engineers that prevents data quality issues from entering every part of your data workflow, from migration to dbt deployment. Datafold has recently launched data replication testing, providing ongoing validation for source-to-target replication. Leverage Datafold's fast cross-database data diffing and Monitoring to test your replication pipelines automatically and continuously. Validate consistency between source and target at any scale, and receive alerts about any discrepancies. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold.

Your host is Tobias Macey and today I'm interviewing Maayan Salom about how to incorporate observability into a dbt-oriented workflow and how Elementary can help.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by outlining what elements of observability are most relevant for dbt projects?
What are some of the common ad-hoc/DIY methods that teams develop to acquire those insights? (One such check is sketched after this question list.)

What are the challenges/shortcomings associated with those approaches?

Over the past ~3 years, numerous data observability systems/products have been created.
What are some of the ways that the specifics of dbt workflows are not covered by those generalized tools?

What are the insights that can be more easily generated by embedding into the dbt toolchain and development cycle?

Can you describe what Elementary is and how it is designed to enhance the development and maintenance work in dbt projects?
How is Elementary designed/implemented?

How have the scope and goals of the project changed since you started working on it?
What are the engineering challenges…
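As referenced in the question list above, a typical ad-hoc/DIY check might look like the scheduled script below; the table, column, and threshold are hypothetical, and the point is precisely how little it covers compared to a dedicated observability layer (one table, one rule, no history, no alert routing):

```python
import datetime
import sqlite3  # stand-in for whatever warehouse driver a team actually uses


def check_freshness(conn, table: str, max_lag_hours: int = 24) -> bool:
    """Ad-hoc freshness check: fail if the newest row is too old."""
    (latest,) = conn.execute(f"SELECT MAX(updated_at) FROM {table}").fetchone()
    if latest is None:
        return False
    lag = datetime.datetime.now() - datetime.datetime.fromisoformat(latest)
    return lag <= datetime.timedelta(hours=max_lag_hours)


# Tiny demo with an in-memory table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (updated_at TEXT)")
conn.execute("INSERT INTO orders VALUES (?)", (datetime.datetime.now().isoformat(),))
print(check_freshness(conn, "orders"))  # True
```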

AI/ML Analytics Cloud Computing Dagster Data Engineering Data Lake Data Lakehouse Data Management Data Quality Datafold dbt Delta Hudi Iceberg Cyber Security SQL Trino
Data Engineering Podcast

*** RSVP: https://www.aicamp.ai/event/eventdetails/W2024032710 (Due to limited room capacity, you must pre-register at the link for admission).

Welcome to the AI meetup in London. Join us for deep-dive tech talks on AI, GenAI, LLMs and machine learning, food and drinks, and networking with speakers and fellow developers.

Agenda:
  • 6:00pm~7:00pm: Check-in, Food/drink and Networking
  • 7:00pm~9:00pm: Tech talks and Q&A
  • 9:00pm: Open discussion and Mixer

Tech Talk: Building GenAI and ML systems with OSS Metaflow
Speaker: Hugo Bowne-Anderson (Outerbounds)
Abstract: This talk explores a framework for how data scientists can deliver value with Generative AI: How can you embed LLMs and foundation models into your pre-existing software stack? How can you do so using open-source Python? What changes about the production machine learning stack, and what remains the same? This talk is aimed squarely at (data) scientists and ML engineers who want to focus on the science, data, and modeling, while having all their infrastructure, platform, and software needs met with ease!

Tech Talk: Harmony, an open-source AI tool for psychology research
Speaker: Thomas Wood (Fast Data Science)
Abstract: In this talk, I will discuss AI for social-science research and how to build a research tool with NLP and AI using Harmony, an open-source tool funded by Wellcome.
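Harmony matches questionnaire items across studies; as a rough illustration of the underlying NLP idea (not Harmony's actual API), comparable items can be scored with sentence embeddings:

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative items from two mental-health questionnaires.
gad_items = ["Feeling nervous, anxious or on edge"]
phq_items = ["Feeling down, depressed, or hopeless",
             "Feeling afraid as if something awful might happen"]

model = SentenceTransformer("all-MiniLM-L6-v2")
emb_a = model.encode(gad_items, convert_to_tensor=True)
emb_b = model.encode(phq_items, convert_to_tensor=True)

# Cosine similarity: higher scores suggest two items measure
# similar constructs and are candidates for harmonisation.
scores = util.cos_sim(emb_a, emb_b)
for j, item in enumerate(phq_items):
    print(f"{scores[0][j].item():.2f}  {item}")
```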

Speakers/Topics: Stay tuned as we update the speaker lineup and schedule. If you have a keen interest in speaking to our community, we invite you to submit topics for consideration: Submit Topics

Sponsors: We are actively seeking sponsors to support the AI developer community, whether by offering venue space, providing food, or contributing cash sponsorship. Sponsors not only speak at the meetups and receive prominent recognition, but also gain exposure to our extensive membership base of 10,000+ AI developers in London and 300K+ worldwide.

Community on Slack/Discord

  • Event chat: chat and connect with speakers and attendees
  • Sharing blogs, events, job openings, and project collaborations
  • Join Slack/Discord (link is at the bottom of the page)
AI Meetup: ML and LLMs Infrastructure
Pete Hunt – CEO @ Dagster Labs, Tobias Macey – host

Summary

A core differentiator of Dagster in the ecosystem of data orchestration is its focus on software-defined assets as a means of building declarative workflows. With the launch of Dagster+ as the redesigned commercial companion to the open-source project, they are investing in that capability with a suite of new features. In this episode Pete Hunt, CEO of Dagster Labs, outlines these new capabilities, how they reduce the burden on data teams, and the increased collaboration that they enable across teams and business units.
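For readers unfamiliar with the concept, here is a minimal sketch of software-defined assets using the open-source dagster package; the asset names and logic are illustrative only:

```python
from dagster import Definitions, asset


@asset
def raw_orders() -> list[dict]:
    # In practice this would pull from an API or a warehouse.
    return [{"id": 1, "amount": 42.0}, {"id": 2, "amount": -5.0}]


@asset
def cleaned_orders(raw_orders: list[dict]) -> list[dict]:
    # Declaring raw_orders as a parameter makes the dependency
    # explicit, so lineage is derivable from the asset graph itself.
    return [o for o in raw_orders if o["amount"] >= 0]


defs = Definitions(assets=[raw_orders, cleaned_orders])
```

The design choice worth noting is that dependencies live in function signatures rather than in separate pipeline wiring, which is what makes the workflow declarative.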

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!

Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

Your host is Tobias Macey and today I'm interviewing Pete Hunt about how the launch of Dagster+ will level up your data platform and orchestrate across language platforms.

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what the focus of Dagster+ is and the story behind it?

What problems are you trying to solve with Dagster+?
What are the notable enhancements beyond the Dagster Core project that this updated platform provides?
How is it different from the current Dagster Cloud product?

In the launch announcement you tease new capabilities that would be great to explore in turn:

Make data a team sport, enabling data teams across the organization
Deliver reliable, high quality data the organization can trust
Observe and manage data platform costs
Master the heterogeneous collection of technologies—both traditional and Modern Data Stack

What are the business/product goals that you are focused on improving with the launch of Dagster+?
What are the most interesting, innovative, or unexpected ways that you have seen Dagster used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on the design and launch of Dagster+?
When is Dagster+ the wrong choice?
What do you have planned for the future of Dagster/Dagster Cloud/Dagster+?

Contact Info

Twitter LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If y…

AI/ML Analytics Cloud Computing Dagster Data Engineering Data Lake Data Lakehouse Data Management Delta Hudi Iceberg Modern Data Stack Python Cyber Security SQL Trino
Gleb Mezhanskiy – guest @ Datafold, Tobias Macey – host

Summary

A significant portion of data workflows involve storing and processing information in database engines. Validating that the information is stored and processed correctly can be complex and time-consuming, especially when the source and destination speak different dialects of SQL. In this episode Gleb Mezhanskiy, founder and CEO of Datafold, discusses the different error conditions and solutions that you need to know about to ensure the accuracy of your data.
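As a rough illustration of the simplest form of such validation (an assumption-laden sketch, not Datafold's approach), one can compare cheap aggregates across two connections before diffing anything row by row:

```python
import sqlite3


def compare_aggregates(src, dst, table: str, column: str) -> dict:
    """Compare cheap aggregates instead of pulling full tables out.
    Dialect differences (rounding, NULL handling, float SUMs) are
    exactly where source and destination start to disagree."""
    query = f"SELECT COUNT(*), SUM({column}) FROM {table}"
    src_count, src_sum = src.execute(query).fetchone()
    dst_count, dst_sum = dst.execute(query).fetchone()
    return {"rows_match": src_count == dst_count,
            "sums_match": src_sum == dst_sum}


# Demo: the replica is missing one row.
src, dst = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for conn, rows in ((src, [(1, 10.0), (2, 5.5)]), (dst, [(1, 10.0)])):
    conn.execute("CREATE TABLE payments (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO payments VALUES (?, ?)", rows)
print(compare_aggregates(src, dst, "payments", "amount"))
# {'rows_match': False, 'sums_match': False}
```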

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!

Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

Join us at the top event for the global data community, Data Council Austin. From March 26-28th 2024, we'll play host to hundreds of attendees, 100 top speakers and dozens of startups that are advancing data science, engineering and AI. Data Council attendees are amazing founders, data scientists, lead engineers, CTOs, heads of data, investors and community organizers who are all working together to build the future of data and sharing their insights and learnings through deeply technical talks. As a listener to the Data Engineering Podcast you can get a special discount off regular priced and late bird tickets by using the promo code dataengpod20. Don't miss out on our only event this year! Visit dataengineeringpodcast.com/data-council and use code dataengpod20 to register today!

Your host is Tobias Macey and today I'm welcoming back Gleb Mezhanskiy to talk about how to reconcile data in database environments.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by outlining some of the situations where reconciling data between databases is needed?
What are examples of the error conditions that you are likely to run into when duplicating information between database engines?

When these errors do occur, what are some of the problems that they can cause?

When teams are replicating data between database engines, what are some of the common patterns for managing those flows?

How does that change between continual and one-time replication?

What are some of the steps involved in verifying the integrity of data replication between database engines?
If the source or destination isn't a traditional database engine (e.g. a data lakehouse), how does that change the work involved in verifying the success of the replication?
What are the challenges of validating and reconciling data?

Sheer scale and cost of pulling data out mean the checks have to run in place. Performance: pushing databases to the limit…
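The note above hints at the core constraint; as a hedged sketch (not Datafold's actual algorithm), key-level reconciliation can be framed as comparing per-row fingerprints, ideally computed inside each engine:

```python
import hashlib


def row_fingerprints(conn, table: str, key: str, cols: list) -> dict:
    """Per-row fingerprints for key-level reconciliation. This naive
    version hashes client-side; at real scale the hashing is pushed
    into each engine with SQL so raw rows never leave the database,
    which is the 'have to run in place' constraint noted above."""
    sql = f"SELECT {key}, {', '.join(cols)} FROM {table}"
    return {k: hashlib.sha256(repr(vals).encode()).hexdigest()
            for k, *vals in conn.execute(sql)}


def diff_keys(src_fps: dict, dst_fps: dict) -> set:
    """Keys missing on either side, or present with differing rows."""
    return {k for k in src_fps.keys() | dst_fps.keys()
            if src_fps.get(k) != dst_fps.get(k)}
```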

AI/ML Analytics Cloud Computing Dagster Data Engineering Data Lake Data Lakehouse Data Management Data Science Datafold Delta Hudi Iceberg Cyber Security SQL Trino