talk-data.com

Topic: Data Engineering
Tags: etl, data_pipelines, big_data

Activity Trend: 1127 tagged activities, peaking at 127 per quarter (2020-Q1 to 2026-Q1)

Activities (1127 · newest first)

Summary In this episode of the Data Engineering Podcast, Derek Collison, creator of NATS and CEO of Synadia, talks about the evolution and capabilities of NATS as a multi-paradigm connectivity layer for distributed applications. Derek discusses the challenges and solutions in building distributed systems, and highlights the unique features of NATS that differentiate it from other messaging systems. He delves into the architectural decisions behind NATS, including its ability to handle high-speed global microservices, support for edge computing, and integration with JetStream for data persistence, and explores the role of NATS in modern data management and its use cases in industries like manufacturing and connected vehicles.
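
For readers who want to try the pub/sub and request/reply patterns the episode describes, here is a minimal sketch using the nats-py client against a local NATS server; the subject names and server URL are illustrative assumptions, not anything prescribed in the episode.

```python
# Minimal NATS pub/sub and request/reply sketch (assumes a local NATS server
# and the nats-py client: pip install nats-py). Subject names are made up.
import asyncio
import nats

async def main():
    nc = await nats.connect("nats://localhost:4222")

    # Pub/sub: fan out sensor readings to any interested subscriber.
    async def on_reading(msg):
        print(f"[{msg.subject}] {msg.data.decode()}")

    await nc.subscribe("plant.line1.temperature", cb=on_reading)
    await nc.publish("plant.line1.temperature", b"21.7")

    # Request/reply: a microservice-style endpoint on a subject.
    async def on_request(msg):
        await msg.respond(b"pong")

    await nc.subscribe("svc.ping", cb=on_request)
    reply = await nc.request("svc.ping", b"ping", timeout=1)
    print(reply.data.decode())

    await nc.drain()

asyncio.run(main())
```

The same subject-based addressing extends to JetStream persistence and request/reply microservices, which is the multi-paradigm angle discussed in the episode.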

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
Your host is Tobias Macey and today I'm interviewing Derek Collison about NATS, a multi-paradigm connectivity layer for distributed applications.

Interview
Introduction
How did you get involved in the area of data management?
Can you describe what NATS is and the story behind it?
How have your experiences in past roles (Cloud Foundry, TIBCO messaging systems) informed the core principles of NATS?
What other sources of inspiration have you drawn on in the design and evolution of NATS? (e.g. Kafka, RabbitMQ, etc.)
There are several patterns and abstractions that NATS can support, many of which overlap with other well-regarded technologies. When designing a system or service, what are the heuristics that should be used to determine whether NATS should act as a replacement or addition to those capabilities? (e.g. considerations of scale, speed, ecosystem compatibility, etc.)
There is often a divide in the technologies and architecture used between operational/user-facing applications and data systems. How does the unification of multiple messaging patterns in NATS shift the ways that teams think about the relationship between these use cases?
How does the shared communication layer of NATS with multiple protocol and pattern adapters reduce the need to replicate data and logic across application and data layers?
Can you describe how the core NATS system is architected?
How have the design and goals of NATS evolved since you first started working on it?
In the time since you first began writing NATS (~2012) there have been several evolutionary stages in both application and data implementation patterns. How have those shifts influenced the direction of the NATS project and its ecosystem?
For teams who have an existing architecture, what are some of the patterns for adoption of NATS that allow them to augment or migrate their capabilities?
What are some of the ecosystem investments that you and your team have made to ease the adoption and integration of NATS?
What are the most interesting, innovative, or unexpected ways that you have seen NATS used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on NATS?
When is NATS the wrong choice?
What do you have planned for the future of NATS?

Contact Info
GitHub
LinkedIn

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it!
Email [email protected] with your story.

Links: NATS, NATS JetStream, Synadia, Cloud Foundry, TIBCO, Applied Physics Lab - Johns Hopkins University, Cray Supercomputer, RVCM Certified Messaging, TIBCO ZMS, IBM MQ, JMS == Java Message Service, RabbitMQ, MongoDB, NodeJS, Redis, AMQP == Advanced Message Queueing Protocol, Pub/Sub Pattern, Circuit Breaker Pattern, Zero MQ, Akamai, Fastly, CDN == Content Delivery Network, At Most Once, At Least Once, Exactly Once, AWS Kinesis, Memcached, SQS, Segment, Rudderstack (Podcast Episode), DLQ == Dead Letter Queue, MQTT == Message Queueing Telemetry Transport, NATS Kafka Bridge, 10BaseT Network, Web Assembly, RedPanda (Podcast Episode), Pulsar Functions, mTLS, AuthZ (Authorization), AuthN (Authentication), NATS Auth Callouts, OPA == Open Policy Agent, RAG == Retrieval Augmented Generation (AI Engineering Podcast Episode), Home Assistant (Podcast.init Episode), Tailscale, Ollama, CDC == Change Data Capture, gRPC

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Summary In this episode of the Data Engineering Podcast, Viktor Kessler, co-founder of Vakmo, talks about the architectural patterns in the lake house enabled by a fast and feature-rich Iceberg catalog. Viktor shares his journey from data warehouses to developing the open-source project, Lakekeeper, an Apache Iceberg REST catalog written in Rust that facilitates building lake houses with essential components like storage, compute, and catalog management. He discusses the importance of metadata in making data actionable, the evolution of data catalogs, and the challenges and innovations in the space, including integration with OpenFGA for fine-grained access control and managing data across formats and compute engines.
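
The episode centers on the Iceberg REST catalog specification that Lakekeeper implements. As a rough illustration of how a client talks to any REST catalog, here is a sketch using PyIceberg; the endpoint, warehouse, and table names are placeholder assumptions rather than Lakekeeper-specific values.

```python
# Sketch: reading an Iceberg table through a REST catalog with PyIceberg
# (pip install "pyiceberg[pyarrow]"). The endpoint, warehouse, and table
# identifiers below are placeholders, not Lakekeeper-specific values.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "lake",
    **{
        "type": "rest",
        "uri": "http://localhost:8181/catalog",  # REST catalog endpoint
        "warehouse": "demo",                     # logical warehouse name
    },
)

# Catalogs expose namespaces and tables; compute engines use the same API.
print(catalog.list_namespaces())
table = catalog.load_table("analytics.orders")

# Scan a snapshot into Arrow for local inspection.
print(table.scan().to_arrow())
```

Because the REST protocol is engine-agnostic, the same catalog endpoint can serve Spark, Trino, or DuckDB alongside this Python client.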

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
Your host is Tobias Macey and today I'm interviewing Viktor Kessler about architectural patterns in the lakehouse that are unlocked by a fast and feature-rich Iceberg catalog.

Interview
Introduction
How did you get involved in the area of data management?
Can you describe what LakeKeeper is and the story behind it? What is the core of the problem that you are addressing?
There has been a lot of activity in the catalog space recently. What are the driving forces that have highlighted the need for a better metadata catalog in the data lake/distributed data ecosystem?
How would you characterize the feature sets/problem spaces that different entrants are focused on addressing?
Iceberg as a table format has gained a lot of attention and adoption across the data ecosystem. The REST catalog format has opened the door for numerous implementations. What are the opportunities for innovation and improving user experience in that space?
What is the role of the catalog in managing security and governance? (AuthZ, auditing, etc.)
What are the channels for propagating identity and permissions to compute engines? (how do you avoid head-scratching about permission denied situations)
Can you describe how LakeKeeper is implemented?
How have the design and goals of the project changed since you first started working on it?
For someone who has an existing set of Iceberg tables and catalog, what does the migration process look like?
What new workflows or capabilities does LakeKeeper enable for data teams using Iceberg tables across one or more compute frameworks?
What are the most interesting, innovative, or unexpected ways that you have seen LakeKeeper used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on LakeKeeper?
When is LakeKeeper the wrong choice?
What do you have planned for the future of LakeKeeper?

Contact Info
LinkedIn

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it!
Email [email protected] with your story.

Links: LakeKeeper, SAP, Microsoft Access, Microsoft Excel, Apache Iceberg (Podcast Episode), Iceberg REST Catalog, PyIceberg, Spark, Trino, Dremio, Hive Metastore, Hadoop, NATS, Polars, DuckDB (Podcast Episode), DataFusion, Atlan (Podcast Episode), Open Metadata (Podcast Episode), Apache Atlas, OpenFGA, Hudi (Podcast Episode), Delta Lake (Podcast Episode), Lance Table Format (Podcast Episode), Unity Catalog, Polaris Catalog, Apache Gravitino (Podcast Episode), Keycloak, Open Policy Agent (OPA), Apache Ranger, Apache NiFi

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

How do you go from theory to practice in a company? • How do your approach, posture, and vision evolve with experience? • The questions every junior wonders about but rarely dares to ask • The perspective of a senior, aka Willis Nana, and lessons from his hands-on experience • Concrete cases and field feedback to help you progress faster • Tips and mistakes to avoid early in your career

Data Engineering Design Patterns

Data projects are an intrinsic part of an organization's technical ecosystem, but data engineers in many companies continue to work on problems that others have already solved. This hands-on guide shows you how to provide valuable data by focusing on various aspects of data engineering, including data ingestion, data quality, idempotency, and more. Author Bartosz Konieczny guides you through the process of building reliable end-to-end data engineering projects, from data ingestion to data observability, focusing on data engineering design patterns that solve common business problems in a secure and storage-optimized manner. Each pattern includes a user-facing description of the problem, solutions, and consequences that place the pattern into the context of real-life scenarios. Throughout this journey, you'll use open source data tools and public cloud services to apply each pattern. You'll learn: Challenges data engineers face and their impact on data systems How these challenges relate to data system components Useful applications of data engineering patterns How to identify and fix issues with your current data components Technology-agnostic solutions to new and existing data projects, with open source implementation examples Bartosz Konieczny is a freelance data engineer who's been coding since 2010. He's held various senior hands-on positions that allowed him to work on many data engineering problems in batch and stream processing.
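
The blurb calls out idempotency as one of the recurring problems the book's patterns address. As a generic illustration of the idea (not an excerpt from the book), a common approach is to overwrite a date-keyed partition so that reruns replace rather than duplicate data; the paths and schema below are invented for the example.

```python
# Generic idempotent-load sketch (not from the book): re-running the job for
# the same execution date replaces that day's partition instead of appending.
import pandas as pd
from pathlib import Path

def load_daily_partition(df: pd.DataFrame, base_dir: str, run_date: str) -> Path:
    """Write one day's data to a date-keyed partition, overwriting any prior run."""
    partition = Path(base_dir) / f"event_date={run_date}"
    partition.mkdir(parents=True, exist_ok=True)
    out = partition / "part-000.parquet"
    df.to_parquet(out, index=False)  # full overwrite makes retries safe
    return out

if __name__ == "__main__":
    events = pd.DataFrame({"user_id": [1, 2], "amount": [9.99, 14.50]})
    # Running this twice for 2024-05-01 yields the same partition contents.
    print(load_daily_partition(events, "/tmp/warehouse/orders", "2024-05-01"))
```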

Summary In this episode of the Data Engineering Podcast, Jeremy Edberg, CEO of DBOS, talks about durable execution and its impact on designing and implementing business logic for data systems. Jeremy explains how DBOS's serverless platform and orchestrator provide local resilience and reduce operational overhead, ensuring exactly-once execution in distributed systems through the use of the Transact library. He discusses the importance of version management in long-running workflows and how DBOS simplifies system design by reducing infrastructure needs like queues and CI pipelines, making it beneficial for data pipelines, AI workloads, and agentic AI.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
Your host is Tobias Macey and today I'm interviewing Jeremy Edberg about durable execution and how it influences the design and implementation of business logic.

Interview
Introduction
How did you get involved in the area of data management?
Can you describe what DBOS is and the story behind it?
What is durable execution?
What are some of the notable ways that inclusion of durable execution in an application architecture changes the ways that the rest of the application is implemented? (e.g. error handling, logic flow, etc.)
Many data pipelines involve complex, multi-step workflows. How does DBOS simplify the creation and management of resilient data pipelines? How does durable execution impact the operational complexity of data management systems?
One of the complexities in durable execution is managing code/data changes to workflows while existing executions are still processing. What are some of the useful patterns for addressing that challenge and how does DBOS help?
Can you describe how DBOS is architected?
How have the design and goals of the system changed since you first started working on it?
What are the characteristics of Postgres that make it suitable for the persistence mechanism of DBOS?
What are the guiding principles that you rely on to determine the boundaries between the open source and commercial elements of DBOS?
What are the most interesting, innovative, or unexpected ways that you have seen DBOS used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on DBOS?
When is DBOS the wrong choice?
What do you have planned for the future of DBOS?

Contact Info
LinkedIn

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links: DBOS, Exactly Once Semantics, Temporal, Semaphore, Postgres, DBOS Transact, Python, Typescript, Idempotency Keys, Agentic AI, State Machine, YugabyteDB (Podcast Episode), CockroachDB, Supabase, Neon (Podcast Episode), Airflow

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
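
To make the idea of durable execution concrete, the toy sketch below checkpoints each step's result so that a restarted workflow skips completed steps and resumes where it failed. This illustrates only the concept discussed in the episode; it is not DBOS's Transact API, and the workflow and step names are invented.

```python
# Toy illustration of durable execution: step results are checkpointed to
# SQLite, so re-running a workflow after a crash resumes instead of redoing
# work. This sketches the concept only; it is not the DBOS Transact API.
import json
import sqlite3

conn = sqlite3.connect("checkpoints.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS steps "
    "(workflow_id TEXT, step TEXT, result TEXT, PRIMARY KEY (workflow_id, step))"
)

def durable_step(workflow_id: str, step: str, fn):
    row = conn.execute(
        "SELECT result FROM steps WHERE workflow_id = ? AND step = ?",
        (workflow_id, step),
    ).fetchone()
    if row:                      # already ran: replay the recorded result
        return json.loads(row[0])
    result = fn()                # first run: execute and checkpoint
    conn.execute(
        "INSERT INTO steps VALUES (?, ?, ?)",
        (workflow_id, step, json.dumps(result)),
    )
    conn.commit()
    return result

def pipeline(workflow_id: str):
    raw = durable_step(workflow_id, "extract", lambda: [1, 2, 3])
    clean = durable_step(workflow_id, "transform", lambda: [x * 10 for x in raw])
    return durable_step(workflow_id, "load", lambda: sum(clean))

print(pipeline("run-2024-05-01"))  # safe to re-run; completed steps are skipped
```

Systems like DBOS apply the same principle with Postgres as the checkpoint store and handle versioning, retries, and concurrency that this sketch ignores.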

Routine tasks such as data wrangling and pipeline maintenance often keep data teams from doing higher-value analysis and insights-led decision-making. This session showcases how intelligent data agents in BigQuery can help automate complex data engineering tasks. You’ll learn how to use natural language prompts to streamline data engineering work across ingestion and transformation, such as data cleaning, formatting, and loading results into BigQuery tables, accelerating the time it takes to build and validate data pipelines.
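
The session is about agent-driven automation, but for orientation, the snippet below shows the conventional client-library version of one of the tasks it describes automating: cleaning a small dataset and loading it into a BigQuery table with google-cloud-bigquery. The project, dataset, and table names are placeholders.

```python
# Baseline (non-agentic) version of a task the session describes automating:
# clean a small dataset and load it into a BigQuery table. Requires pandas and
# google-cloud-bigquery; project/dataset/table names are placeholders.
import pandas as pd
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")

raw = pd.DataFrame({"order_id": ["A1", "A2", None], "amount": ["10.5", "7", "3.2"]})
clean = raw.dropna(subset=["order_id"]).assign(amount=lambda d: d["amount"].astype(float))

job = client.load_table_from_dataframe(
    clean,
    "my-gcp-project.sales.orders_clean",
    job_config=bigquery.LoadJobConfig(write_disposition="WRITE_TRUNCATE"),
)
job.result()  # wait for the load job to finish
print(f"Loaded {job.output_rows} rows")
```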

Juhani Vanhatapio lives in Finland at the Arctic Circle and is studying data engineering and machine learning. Juhani shares stories about life in the Arctic, the challenges and fun of guiding tours to see the Northern Lights, his AI Assistant for Northern Lights tourism, and his journey into the tech field during the COVID pandemic. This is definitely a very interesting and left-field conversation you'll enjoy.

In this podcast episode, we talked with Eddy Zulkifly about "From Supply Chain Management to Digital Warehousing and FinOps".

About the Speaker: Eddy Zulkifly is a Staff Data Engineer at Kinaxis, building robust data platforms across Google Cloud, Azure, and AWS. With a decade of experience in data, he actively shares his expertise as a Mentor on ADPList and Teaching Assistant at Uplimit. Previously, he was a Senior Data Engineer at Home Depot, specializing in e-commerce and supply chain analytics. Currently pursuing a Master’s in Analytics at the Georgia Institute of Technology, Eddy is also passionate about open-source data projects and enjoys watching/exploring the analytics behind the Fantasy Premier League.

In this episode, we dive into the world of data engineering and FinOps with Eddy Zulkifly, Staff Data Engineer at Kinaxis. Eddy shares his unconventional career journey—from optimizing physical warehouses with Excel to building digital data platforms in the cloud.

🕒 TIMECODES 0:00 Eddy’s career journey: From supply chain to data engineering 8:18 Tools & learning: Excel, Docker, and transitioning to data engineering 21:57 Physical vs. digital warehousing: Analogies and key differences 31:40 Introduction to FinOps: Cloud cost optimization and vendor negotiations 40:18 Resources for FinOps: Certifications and the FinOps Foundation 45:12 Standardizing cloud cost reporting across AWS/GCP/Azure 50:04 Eddy’s master’s degree and closing thoughts

🔗 CONNECT WITH EDDY Twitter - https://x.com/eddarief Linkedin - https://www.linkedin.com/in/eddyzulkifly/ Github: https://github.com/eyzyly/eyzyly ADPList: https://adplist.org/mentors/eddy-zulkifly

🔗 CONNECT WITH DataTalksClub Join the community - https://datatalks.club/slack.html Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/r?cid=ZjhxaWRqbnEwamhzY3A4ODA5azFlZ2hzNjBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ

Check other upcoming events - https://lu.ma/dtc-events LinkedIn - https://www.linkedin.com/company/datatalks-club/ Twitter - https://twitter.com/DataTalksClub Website - https://datatalks.club/

This is a free preview of a paid episode. To hear more, visit dataengineeringcentral.substack.com

It’s time for another episode of the Data Engineering Central Podcast. In this episode, we cover …
* Rust-based tool called UV to replace pip and poetry etc
* Apache X-Table and the Future of the Lake House
* How is AI going to affect you?
Thanks for being a consumer of Data Engineering Central; your support means a lot. Please share this podcast with your friend…

Wayfair’s Multi-year Data Mesh Journey | Nachiket Mehta and Piyush Tiwari | Shift Left Data Conference 2025

Wayfair’s multi-year Data Mesh journey involved shifting from a monolithic, centralized data model to a decentralized, domain-driven architecture built on microservices. By embracing Data Mesh principles, Wayfair empowered domain teams to take end-to-end ownership of their data.

Key enablers included a data contract management platform to ensure trusted, discoverable data products, and the development of Taxon, an internal ontology and knowledge graph that unified semantics across domains while supporting the company's tech modernization.

Organizationally, Wayfair introduced an Embedded Data Engineering model – embedding data engineers within domain teams – to instill a “Data-as-a-Product” mindset among data producers. This sociotechnical shift ensured that those who create data also own its quality, documentation, and evolution, rather than relying on a centralized BI team. As a result, Wayfair’s data producers are now accountable for well-defined, high-quality data products, and data consumers can more easily discover and trust data through the unified catalog and ontology.

The presentation will highlight how Wayfair has adopted "shift left" (pushing data ownership and quality to the source teams) and is next heading towards "shift right" (focusing on consumer-driven data products and outcomes) to unlock business outcomes. This session will share both technical strategies and business results from Wayfair’s Data Mesh journey.
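
Wayfair's contract platform is internal, but as a generic illustration of the "data contract" idea the talk leans on, a producer can declare the schema it promises and a lightweight check can fail fast when a dataset drifts from it. Everything below is hypothetical and not Wayfair's implementation.

```python
# Hypothetical data-contract check illustrating the idea from the talk; this
# is not Wayfair's platform. A producer declares the schema it promises, and
# a lightweight validation rejects datasets that drift from that contract.
import pandas as pd

ORDERS_CONTRACT = {          # promised column -> expected dtype
    "order_id": "int64",
    "customer_id": "int64",
    "order_total": "float64",
}

def validate_contract(df: pd.DataFrame, contract: dict) -> list[str]:
    errors = []
    for column, dtype in contract.items():
        if column not in df.columns:
            errors.append(f"missing column: {column}")
        elif str(df[column].dtype) != dtype:
            errors.append(f"{column}: expected {dtype}, got {df[column].dtype}")
    return errors

orders = pd.DataFrame({"order_id": [1, 2], "customer_id": [10, 11], "order_total": [99.0, 12.5]})
problems = validate_contract(orders, ORDERS_CONTRACT)
if problems:
    raise ValueError(f"data contract violated: {problems}")
print("contract satisfied")
```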

The role of data and AI engineers is more critical than ever. With organizations collecting massive amounts of data, the challenge lies in building efficient data infrastructures that can support AI systems and deliver actionable insights. But what does it take to become a successful data or AI engineer? How do you navigate the complex landscape of data tools and technologies? And what are the key skills and strategies needed to excel in this field?

Deepak Goyal is a globally recognized authority in Cloud Data Engineering and AI. As the Founder & CEO of Azurelib Academy, he has built a trusted platform for advanced cloud education, empowering over 100,000 professionals and influencing data strategies across Fortune 500 companies. With over 17 years of leadership experience, Deepak has been at the forefront of designing and implementing scalable, real-world data solutions using cutting-edge technologies like Microsoft Azure, Databricks, and Generative AI.

In the episode, Richie and Deepak explore the fundamentals of data engineering, the critical skills needed, the intersection with AI roles, career paths, and essential soft skills. They also discuss the hiring process, interview tips, and the importance of continuous learning in a rapidly evolving field, and much more.

Links Mentioned in the Show:
AzureLib
AzureLib Academy
Connect with Deepak
Get Certified! Azure Fundamentals
Related Episode: Effective Data Engineering with Liya Aizenberg, Director of Data Engineering at Away
Sign up to attend RADAR: Skills Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

Summary In this episode of the Data Engineering Podcast, Roman Gershman, CTO and founder of Dragonfly DB, explores the development and impact of high-speed in-memory databases. Roman shares his experience creating a more efficient alternative to Redis, focusing on performance gains, scalability, and cost efficiency, while addressing limitations in high-throughput and low-latency scenarios. He explains how Dragonfly DB solves operational complexities for users and delves into its technical aspects, including maintaining compatibility with Redis while innovating on memory efficiency. Roman discusses the importance of cost efficiency and operational simplicity in driving adoption and shares insights on the broader ecosystem of in-memory data stores, future directions like SSD tiering and vector search capabilities, and the lessons learned from building a new database engine.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
Your host is Tobias Macey and today I'm interviewing Roman Gershman about building a high-speed in-memory database and the impact of the performance gains on data applications.

Interview
Introduction
How did you get involved in the area of data management?
Can you describe what DragonflyDB is and the story behind it?
What is the core problem/use case that is solved by making a "faster Redis"?
The other major player in the high performance key/value database space is Aerospike. What are the heuristics that an engineer should use to determine whether to use that vs. Dragonfly/Redis?
Common use cases for Redis involve application caches and queueing (e.g. Celery/RQ). What are some of the other applications that you have seen Redis/Dragonfly used for, particularly in data engineering use cases?
There is a piece of tribal wisdom that it takes 10 years for a database to iron out all of the kinks. At the same time, there have been substantial investments in commoditizing the underlying components of database engines. Can you describe how you approached the implementation of DragonflyDB to arrive at a functional and reliable implementation?
What are the architectural elements that contribute to the performance and scalability benefits of Dragonfly?
How have the design and goals of the system changed since you first started working on it?
For teams who migrate from Redis to Dragonfly, beyond the cost savings what are some of the ways that it changes the ways that they think about their overall system design?
What are the most interesting, innovative, or unexpected ways that you have seen Dragonfly used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on DragonflyDB?
When is DragonflyDB the wrong choice?
What do you have planned for the future of DragonflyDB?

Contact Info
GitHub
LinkedIn

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links: DragonflyDB, Redis, Elasticache, ValKey, Aerospike, Laravel, Sidekiq, Celery, Seastar Framework, Shared-Nothing Architecture, io_uring, midi-redis, Dunning-Kruger Effect, Rust

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
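
Because Dragonfly keeps wire compatibility with Redis, existing clients work unchanged. The sketch below shows a cache-aside pattern with the stock redis-py client against a local Dragonfly (or Redis) instance; the host, port, and key names are placeholder assumptions.

```python
# Cache-aside sketch against Dragonfly using the stock redis-py client, which
# works unchanged because Dragonfly speaks the Redis protocol. Host/port and
# key names are placeholders.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_user_profile(user_id: int) -> dict:
    key = f"user:{user_id}:profile"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)             # cache hit
    profile = {"id": user_id, "name": "Ada"}  # pretend this came from the database
    r.set(key, json.dumps(profile), ex=300)   # cache for 5 minutes
    return profile

print(get_user_profile(42))  # first call populates the cache
print(get_user_profile(42))  # second call is served from the in-memory store
```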

Time Series Analysis with Spark

Time Series Analysis with Spark provides a practical introduction to leveraging Apache Spark and Databricks for time series analysis. You'll learn to prepare, model, and deploy robust and scalable time series solutions for real-world applications. From data preparation to advanced generative AI techniques, this guide prepares you to excel in big data analytics. What this Book will help me do Understand the core concepts and architectures of Apache Spark for time series analysis. Learn to clean, organize, and prepare time series data for big data environments. Gain expertise in choosing, building, and training various time series models tailored to specific projects. Master techniques to scale your models in production using Spark and Databricks. Explore the integration of advanced technologies such as generative AI to enhance predictions and derive insights. Author(s) Yoni Ramaswami, a Senior Solutions Architect at Databricks, has extensive experience in data engineering and AI solutions. With a focus on creating innovative big data and AI strategies across industries, Yoni authored this book to empower professionals to efficiently handle time series data. Yoni's approachable style ensures that both foundational concepts and advanced techniques are accessible to readers. Who is it for? This book is ideal for data engineers, machine learning engineers, data scientists, and analysts interested in enhancing their expertise in time series analysis using Apache Spark and Databricks. Whether you're new to time series or looking to refine your skills, you'll find both foundational insights and advanced practices explained clearly. A basic understanding of Spark is helpful but not required.
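
As a small taste of the kind of preparation the book covers, here is an illustrative PySpark sketch (not an excerpt from the book) that orders readings per sensor, adds a lag feature, and computes a rolling average with window functions; the data and column names are invented.

```python
# Illustrative PySpark time series prep (not an excerpt from the book):
# order readings by time per sensor, add a lag feature, and compute a
# 3-row rolling average with window functions. Data and names are made up.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("ts-prep").getOrCreate()

readings = spark.createDataFrame(
    [("s1", "2024-01-01 00:00", 20.1), ("s1", "2024-01-01 01:00", 20.7),
     ("s1", "2024-01-01 02:00", 21.3), ("s2", "2024-01-01 00:00", 18.4)],
    ["sensor_id", "ts", "temperature"],
).withColumn("ts", F.to_timestamp("ts"))

w = Window.partitionBy("sensor_id").orderBy("ts")
rolling = w.rowsBetween(-2, 0)

features = (
    readings
    .withColumn("temp_lag_1", F.lag("temperature", 1).over(w))
    .withColumn("temp_avg_3", F.avg("temperature").over(rolling))
)
features.show(truncate=False)
```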

Summary In this episode of the Data Engineering Podcast, Sean Knapp, CEO of Ascend.io, explores the intersection of AI and data engineering. He discusses the evolution of data engineering and the role of AI in automating processes, alleviating burdens on data engineers, and enabling them to focus on complex tasks and innovation. The conversation covers the challenges and opportunities presented by AI, including the need for intelligent tooling and its potential to streamline data engineering processes. Sean and Tobias also delve into the impact of generative AI on data engineering, highlighting its ability to accelerate development, improve governance, and enhance productivity, while also noting the current limitations and future potential of AI in the field.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
Your host is Tobias Macey and today I'm interviewing Sean Knapp about how Ascend is incorporating AI into their platform to help you keep up with the rapid rate of change.

Interview
Introduction
How did you get involved in the area of data management?
Can you describe what Ascend is and the story behind it?
The last time we spoke was August of 2022. What are the most notable or interesting evolutions in your platform since then?
In that same time "AI" has taken up all of the oxygen in the data ecosystem. How has that impacted the ways that you and your customers think about their priorities?
The introduction of AI as an API has caused many organizations to try and leap-frog their data maturity journey and jump straight to building with advanced capabilities. How is that impacting the pressures and priorities felt by data teams?
At the same time that AI-focused product goals are straining data teams' capacities, AI also has the potential to act as an accelerator to their work. What are the roadblocks/speedbumps that are in the way of that capability?
Many data teams are incorporating AI tools into parts of their workflow, but it can be clunky and cumbersome. How are you thinking about the fundamental changes in how your platform works with AI at its center?
Can you describe the technical architecture that you have evolved toward that allows for AI to drive the experience rather than being a bolt-on?
What are the concrete impacts that these new capabilities have on teams who are using Ascend?
What are the most interesting, innovative, or unexpected ways that you have seen Ascend + AI used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on incorporating AI into the core of Ascend?
When is Ascend the wrong choice?
What do you have planned for the future of AI in Ascend?

Contact Info
LinkedIn

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links: Ascend, Cursor AI Code Editor, Devin, GitHub Copilot, OpenAI DeepResearch, S3 Tables, AWS Glue, AWS Bedrock, Snowpark, Co-Intelligence: Living and Working with AI by Ethan Mollick (affiliate link), OpenAI o3

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Build Bigger With Small AI: Running Small Models Locally

It's finally possible to bring the awesome power of Large Language Models (LLMs) to your laptop. This talk will explore how to run and leverage small, openly available LLMs to power common tasks involving data, including selecting the right models, practical use cases for running small models, and best practices for deploying small models effectively alongside databases.

Bio: Jeffrey Morgan is the founder of Ollama, an open-source tool for getting up and running with large language models. Prior to founding Ollama, Jeffrey founded Kitematic, which was acquired by Docker and evolved into Docker Desktop. He has previously worked at companies including Docker, Twitter, and Google.

➡️ Follow Us LinkedIn: https://www.linkedin.com/company/small-data-sf/ X/Twitter : https://twitter.com/smalldatasf Website: https://www.smalldatasf.com/

Discover how to run large language models (LLMs) locally using Ollama, the easiest way to get started with small AI models on your Mac, Windows, or Linux machine. Unlike massive cloud-based systems, small open source models are only a few gigabytes, allowing them to run incredibly fast on consumer hardware without network latency. This video explains why these local LLMs are not just scaled-down versions of larger models but powerful tools for developers, offering significant advantages in speed, data privacy, and cost-effectiveness by eliminating hidden cloud provider fees and risks.

Learn the most common use case for small models: combining them with your existing factual data to prevent hallucinations. We dive into retrieval augmented generation (RAG), a powerful technique where you augment a model's prompt with information from a local data source. See a practical demo of how to build a vector store from simple text files and connect it to a model like Gemma 2B, enabling you to query your own data using natural language for fast, accurate, and context-aware responses.
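
The demo itself is not reproduced here, but a stripped-down version of the same RAG flow might look like the following, assuming the ollama Python package with an embedding model and a small chat model pulled locally; the model names and docs folder are assumptions, not taken from the video.

```python
# Minimal local RAG sketch in the spirit of the demo described above (assumes
# `pip install ollama` plus `ollama pull nomic-embed-text` and
# `ollama pull gemma2:2b`; model names and the docs folder are assumptions).
from pathlib import Path
import math
import ollama

def embed(text: str) -> list[float]:
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# "Vector store": embed each text file once and keep the vectors in memory.
docs = [(p.read_text(), embed(p.read_text())) for p in Path("docs").glob("*.txt")]

question = "What is our refund policy?"
q_vec = embed(question)
context = max(docs, key=lambda d: cosine(q_vec, d[1]))[0]  # best-matching chunk

answer = ollama.chat(
    model="gemma2:2b",
    messages=[{"role": "user",
               "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}"}],
)
print(answer["message"]["content"])
```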

Explore the next frontier of local AI with small agents and tool calling, a new feature that empowers models to interact with external tools. This guide demonstrates how an LLM can autonomously decide to query a DuckDB database, write the correct SQL, and use the retrieved data to answer your questions. This advanced tutorial shows you how to connect small models directly to your data engineering workflows, moving beyond simple chat to create intelligent, data-driven applications.
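
The tool-calling flow from the video is not reproduced here, but the core loop can be approximated very simply: ask the local model to emit SQL for a question, execute it with DuckDB, and hand back the rows. The schema, model name, and prompts below are assumptions, and real code would sanitize the model's output before running it.

```python
# Simplified stand-in for the tool-calling demo described above: the local
# model writes SQL for a question, DuckDB executes it, and the rows feed the
# final answer. Schema, model name, and prompts are assumptions.
import duckdb
import ollama

con = duckdb.connect()
con.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount DOUBLE)")
con.execute("INSERT INTO orders VALUES (1, 'EU', 120.0), (2, 'US', 80.0), (3, 'EU', 45.5)")

question = "What is the total order amount per region?"
reply = ollama.chat(
    model="gemma2:2b",
    messages=[{
        "role": "user",
        "content": "Table orders(id INTEGER, region TEXT, amount DOUBLE). "
                   f"Write one DuckDB SQL query answering: {question}. Return only SQL.",
    }],
)["message"]["content"]

# Crude cleanup of code fences; production code should validate model output.
sql = reply.strip().strip("`").removeprefix("sql").strip()

rows = con.execute(sql).fetchall()   # DuckDB acts as the model's "tool"
print(sql)
print(rows)
```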

Get started with practical applications for small models today, from building internal help desks to streamlining engineering tasks like code review. This video highlights how small and large models can work together effectively and shows that open source models are rapidly catching up to their cloud-scale counterparts. It's never been a better time for developers and data analysts to harness the power of local AI.

In this podcast episode, we talked with Bartosz Mikulski about Data Intensive AI.

About the Speaker: Bartosz is an AI and data engineer. He specializes in moving AI projects from the good-enough-for-a-demo phase to production by building a testing infrastructure and fixing the issues detected by tests. On top of that, he teaches programmers and non-programmers how to use AI. He contributed one chapter to the book 97 Things Every Data Engineer Should Know, and he was a speaker at several conferences, including Data Natives, Berlin Buzzwords, and Global AI Developer Days. 

In this episode, we discuss Bartosz’s career journey, the importance of testing in data pipelines, and how AI tools like ChatGPT and Cursor are transforming development workflows. From prompt engineering to building Chrome extensions with AI, we dive into practical use cases, tools, and insights for anyone working in data-intensive AI projects. Whether you’re a data engineer, AI enthusiast, or just curious about the future of AI in tech, this episode offers valuable takeaways and real-world experiences.
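
As a tiny example of the kind of pipeline test discussed here, a transformation can be exercised with a hand-built DataFrame and asserted on with pytest; the transformation and expectations below are invented for illustration and are not from the episode.

```python
# Illustrative pytest-style test for a pipeline transformation (the function
# and expectations are invented; they are not from the episode).
import pandas as pd
import pytest

def deduplicate_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Keep the most recent row per order_id."""
    return (df.sort_values("updated_at")
              .drop_duplicates(subset="order_id", keep="last")
              .reset_index(drop=True))

def test_deduplicate_keeps_latest_row():
    raw = pd.DataFrame({
        "order_id": [1, 1, 2],
        "updated_at": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-01"]),
        "status": ["created", "shipped", "created"],
    })
    result = deduplicate_orders(raw)
    assert len(result) == 2
    assert result.loc[result.order_id == 1, "status"].item() == "shipped"

def test_deduplicate_rejects_missing_column():
    with pytest.raises(KeyError):
        deduplicate_orders(pd.DataFrame({"order_id": [1]}))
```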

0:00 Introduction to Bartosz and his background 4:00 Bartosz’s career journey from Java development to AI engineering 9:05 The importance of testing in data engineering 11:19 How to create tests for data pipelines 13:14 Tools and approaches for testing data pipelines 17:10 Choosing Spark for data engineering projects 19:05 The connection between data engineering and AI tools 21:39 Use cases of AI in data engineering and MLOps 25:13 Prompt engineering techniques and best practices 31:45 Prompt compression and caching in AI models 33:35 Thoughts on DeepSeek and open-source AI models 35:54 Using AI for lead classification and LinkedIn automation 41:04 Building Chrome extensions with AI integration 43:51 Comparing Cursor and GitHub Copilot for coding 47:11 Using ChatGPT and Perplexity for AI-assisted tasks 52:09 Hosting static websites and using AI for development 54:27 How blogging helps attract clients and share knowledge 58:15 Using AI to assist with writing and content creation

🔗 CONNECT WITH Bartosz LinkedIn: https://www.linkedin.com/in/mikulskibartosz/ Github: https://github.com/mikulskibartosz Website: https://mikulskibartosz.name/blog/

🔗 CONNECT WITH DataTalksClub Join the community - https://datatalks.club/slack.html Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/r?cid=ZjhxaWRqbnEwamhzY3A4ODA5azFlZ2hzNjBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ Check other upcoming events - https://lu.ma/dtc-events LinkedIn - https://www.linkedin.com/company/datatalks-club/ Twitter - https://twitter.com/DataTalksClub Website - https://datatalks.club/