talk-data.com
Topic: flink (29 tagged talks)
Explore the power of real-time streaming with GenAI using Apache NiFi. Learn how NiFi simplifies data engineering workflows, allowing you to focus on creativity over technical complexities. Tim Spann will guide you through practical examples, showing how NiFi automates everything from ingestion to delivery. Whether you're a seasoned data engineer or new to GenAI, this talk offers valuable insights into optimizing workflows. Join us to unlock the potential of real-time streaming and see how NiFi makes data engineering a breeze for GenAI applications!
Abstract: You've been tasked with implementing a data streaming pipeline for propagating data changes from your operational Postgres database to a search index in OpenSearch. Data views in OpenSearch should be denormalized for fast querying, and of course there should be no noticeable impact on the production database. In this session we'll discuss how to build this data pipeline using two popular open-source projects: Debezium for log-based change data capture (CDC) and Apache Flink for stream processing. Join us for this talk and learn about:
* Setting up change data streams with Debezium
* Efficiently building nested data structures from 1:n joins
* Deployment options: Kafka Connect vs. Flink CDC
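The abstract itself contains no code, but a minimal sketch gives a feel for what such a pipeline can look like when built with Flink's Table API and the Debezium-based postgres-cdc connector. The hostnames, credentials, table and index names, and the COLLECT-based nesting below are illustrative assumptions, not details from the talk.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Sketch of a CDC pipeline: Postgres (Debezium-based 'postgres-cdc' connector)
// -> denormalized view -> OpenSearch. All connection details are placeholders.
public class OrdersToOpenSearch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Change stream over the operational 'orders' table (log-based CDC, no polling).
        tEnv.executeSql(
            "CREATE TABLE orders (" +
            "  order_id BIGINT, customer STRING, PRIMARY KEY (order_id) NOT ENFORCED" +
            ") WITH (" +
            "  'connector' = 'postgres-cdc', 'hostname' = 'pg.internal', 'port' = '5432'," +
            "  'username' = 'flink', 'password' = '***', 'database-name' = 'shop'," +
            "  'schema-name' = 'public', 'table-name' = 'orders', 'slot.name' = 'orders_slot')");

        // Change stream over the child table of the 1:n relationship.
        tEnv.executeSql(
            "CREATE TABLE order_lines (" +
            "  line_id BIGINT, order_id BIGINT, sku STRING," +
            "  PRIMARY KEY (line_id) NOT ENFORCED" +
            ") WITH (" +
            "  'connector' = 'postgres-cdc', 'hostname' = 'pg.internal', 'port' = '5432'," +
            "  'username' = 'flink', 'password' = '***', 'database-name' = 'shop'," +
            "  'schema-name' = 'public', 'table-name' = 'order_lines', 'slot.name' = 'lines_slot')");

        // Denormalized search index: one document per order with its line SKUs nested inside.
        tEnv.executeSql(
            "CREATE TABLE orders_index (" +
            "  order_id BIGINT, customer STRING, line_skus MULTISET<STRING>," +
            "  PRIMARY KEY (order_id) NOT ENFORCED" +
            ") WITH ('connector' = 'opensearch', 'hosts' = 'http://opensearch:9200'," +
            "        'index' = 'orders')");

        // 1:n join folded into a nested structure with the COLLECT aggregate;
        // richer nested rows follow the same pattern.
        tEnv.executeSql(
            "INSERT INTO orders_index " +
            "SELECT o.order_id, o.customer, COLLECT(l.sku) " +
            "FROM orders o JOIN order_lines l ON o.order_id = l.order_id " +
            "GROUP BY o.order_id, o.customer");
    }
}
```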
Whether it's for your retail, e-commerce, or even gaming needs, it has never been easier to push your real-time data into your Data Warehouse. In a live-coding session, using the Aiven platform, we will stand up an open-source data streaming stack in under 30 minutes, complete with log management, monitoring, and connectors to Google services.
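As a rough illustration of the "push your real-time data" step such a demo implies, here is a minimal Java producer writing a retail event to an Aiven-managed Kafka topic over SSL; the broker address, certificate paths, credentials, and topic name are placeholders, not values from the session.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Illustrative producer pushing a retail event into an Aiven-managed Kafka topic.
public class RetailEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-xyz.aivencloud.com:12345"); // placeholder endpoint
        props.put("security.protocol", "SSL");
        props.put("ssl.keystore.type", "PKCS12");
        props.put("ssl.keystore.location", "client.keystore.p12");
        props.put("ssl.keystore.password", "changeit");
        props.put("ssl.truststore.location", "client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String event = "{\"order_id\": 42, \"amount\": 19.90, \"currency\": \"EUR\"}";
            // Downstream, a sink connector (e.g. to a warehouse) would pick this up from the topic.
            producer.send(new ProducerRecord<>("retail-orders", "42", event));
        }
    }
}
```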
In this session, we will explore the stream processing capabilities for Kafka and compare the three popular options: Kafka Streams, ksqlDB, and Apache Flink®. We will dive into the strengths and limitations of each technology, and compare them based on their ease of use, performance, scalability, and flexibility. By the end of the session, attendees will have a better understanding of the different options available for stream processing with Kafka, and which technology might be the best fit for their specific use case. This session is ideal for developers, data engineers, and architects who want to leverage the power of Kafka for real-time data processing.
Bio:
Before Jan Svoboda started his Apache Kafka journey at Confluent, he worked as an Advisory Platform Architect at Pivotal and DevOps Solutions Architect at IBM, among others. Jan joined Confluent in April 2020 as a Solutions Engineer, establishing microservices development as his favourite topic. Jan holds degrees in Management of Information Systems from UNYP and Computer Science from UCF.
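To make the comparison concrete, here is a hedged sketch of the same trivial filter expressed with Kafka Streams and with Flink's DataStream API (in ksqlDB the equivalent would be a single SQL statement); the topic names, threshold, and configuration values are illustrative, not taken from the session.

```java
import java.util.Properties;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

// The same job -- keep only high-value payments -- written twice, to show the
// API-level difference between an embedded library and a cluster-submitted job.
public class FilterComparison {

    // Kafka Streams: a client library that runs inside your own JVM application.
    static KafkaStreams kafkaStreamsVariant() {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, Long> payments = builder.stream("payments");
        payments.filter((key, amount) -> amount != null && amount > 10_000L)
                .to("large-payments");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "payment-filter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass());
        return new KafkaStreams(builder.build(), props);
    }

    // Flink: a job submitted to a cluster (or run locally), not tied to Kafka as its only I/O.
    static void flinkVariant() throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements(12_500L, 80L, 44_000L)   // stand-in for a real Kafka source
           .filter(amount -> amount > 10_000L)
           .print();
        env.execute("payment-filter");
    }

    public static void main(String[] args) throws Exception {
        kafkaStreamsVariant().start();  // runs until the application is stopped
        flinkVariant();
    }
}
```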
In this session, David will demystify the misconceptions around the complexity of Apache Flink, touch on its use cases, and get you up to speed for your stream processing endeavor. All of that, in real-time.
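In that spirit, a deliberately tiny Flink job illustrates how little code a first stream transformation needs; the sample readings and job name below are made up for illustration.

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// A minimal Flink job: environment, source, one transformation, sink, execute.
public class FlinkHelloStreaming {
    public static void main(String[] args) throws Exception {
        // Entry point for defining sources, transformations, and sinks.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("sensor-1:21.5", "sensor-2:19.8", "sensor-1:22.1")
           .map(String::toUpperCase)
           .print();

        // Nothing runs until execute() builds the dataflow graph and submits it.
        env.execute("hello-streaming");
    }
}
```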
Gain a competitive advantage by adopting the best practices of established companies such as FedEx and CarMax, which have successfully transformed how they work across people, processes, technology, internal partnerships, and external networks.