Activities & events

***IMPORTANT: If you RSVP here, you don't need to also RSVP to the London Kafka Group.***

Date and Time: 🗓️ Wednesday 7th May, ⏰ 18:00 – 21:00 🕘

Venue: Snowflake, One Crown Place, London EC2A 4EF, U.K. 5th & 6th floors · London

Schedule:

  • 6:00pm: Doors open
  • 6:00pm – 6:30pm: Food/Drinks and networking
  • 6:30pm - 7:00pm: Mastering real-time anomaly detection
  • 7:00pm - 7:30pm: Iced Kaf-fee: Chilling Kafka Data into Iceberg Tables
  • 7:30pm - 8:00pm: Observing all the things: Apache Kafka® and Apache Flink® with OpenTelemetry
  • 8:30pm - 9:00pm: Additional Q&A and Networking

🎙️ Talk 1: Mastering real-time anomaly detection, Olena Kutsenko, Staff Developer Advocate, Confluent

Abstract: Detecting problems as they happen is essential in today's fast-moving world. This talk shows how to build a simple, powerful system for real-time anomaly detection in live data. We'll use Apache Kafka for streaming data, Apache Flink for processing it in real time, and various models to detect unusual patterns. Whether it's monitoring systems or tracking IoT devices, this solution is flexible and reliable.

We'll start by exploring how Kafka helps collect and manage fast-moving data streams. Then, we'll demonstrate how Flink processes this data in real time and integrates anomaly detection models to uncover events as they occur. We'll dive into the details of how ARIMA and LSTM work, so even if you’re not into mathematics, you can still understand what happens behind the scenes!

This talk is ideal for anyone looking to monitor anomalies in real-time data streams.
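As a rough illustration of the pattern the talk describes, here is a minimal, self-contained sketch of streaming anomaly detection. It uses a rolling z-score as a stand-in for the ARIMA and LSTM models covered in the talk, and an in-memory generator in place of a real Kafka topic and Flink job; the topic of the talk is the full Kafka + Flink pipeline, which this snippet does not attempt to reproduce.

```python
# Minimal sketch: streaming anomaly detection with a rolling z-score.
# In the setup described in the talk, readings would arrive from a Kafka
# topic and this logic would run inside a Flink job; here the stream is
# simulated in memory and a z-score stands in for ARIMA/LSTM models.
from collections import deque
import random


def detect_anomalies(readings, window_size=50, threshold=3.0):
    """Yield (index, value) for readings far from the rolling mean."""
    window = deque(maxlen=window_size)
    for i, value in enumerate(readings):
        if len(window) >= 10:  # wait for enough history
            mean = sum(window) / len(window)
            var = sum((x - mean) ** 2 for x in window) / len(window)
            std = var ** 0.5 or 1e-9
            if abs(value - mean) / std > threshold:
                yield i, value
        window.append(value)


if __name__ == "__main__":
    # Simulated sensor stream: mostly normal values with a few spikes.
    stream = [random.gauss(20.0, 0.5) for _ in range(500)]
    for idx in (120, 340, 480):
        stream[idx] += 10.0  # injected anomalies
    for i, value in detect_anomalies(stream):
        print(f"anomaly at offset {i}: {value:.2f}")
```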

🗣️ Speaker 1: Olena is a Staff Developer Advocate at Confluent and a recognized expert in data streaming and analytics. With two decades of experience in software engineering, she has built mission-critical applications, led high-performing teams, and driven large-scale technology adoption at industry leaders like Nokia, HERE Technologies, AWS, and Aiven.

🎙️ Talk 2: Iced Kaf-fee: Chilling Kafka Data into Iceberg Tables, Danica Fine, Lead Developer Advocate, Open Source at Snowflake

Abstract: Have piping-hot, real-time data in Apache Kafka® but want to chill it down into Apache Iceberg™ tables? Let’s see how we can craft the perfect cup of “Iced Kaf-fee” for you and your needs!

We’ll start by grinding through the motivation for moving data from Kafka topics into Iceberg tables, exploring the benefits that doing so has to offer your analytics workflows. From there, we’ll open up the menu of options available to cool down your streams, including Apache Flink®, Apache Spark™, and Kafka Connect. Each brewing method has its own recipe, so we’ll compare their pros and cons, walk through use cases for each, and highlight when you might prefer a strong Spark roast over a smooth Flink blend—or maybe a Connect cold brew. Plus, we’ll share a sneak peek at future innovations that are percolating in the community to make sinking your Kafka data into Iceberg even easier.

By the end of the session, you’ll have everything you need to whip up the perfect pipeline and serve up your “Iced Kaf-fee” with confidence.
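For a flavour of one of the "brewing methods" the talk compares, here is a rough PySpark Structured Streaming sketch that reads a Kafka topic and appends it to an Iceberg table. The broker address, topic, catalog, and table names are placeholders, and the exact options depend on your Spark and Iceberg versions; treat it as an assumption-laden outline rather than a recipe from the talk.

```python
# Rough sketch (not from the talk): Kafka -> Iceberg with Spark Structured
# Streaming. Broker, topic, catalog, and table names below are placeholders;
# exact configuration depends on your Spark and Iceberg versions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder.appName("kafka-to-iceberg")
    # Assumes an Iceberg catalog named "demo" has been configured
    # (e.g. via spark.sql.catalog.demo settings) before running.
    .getOrCreate()
)

# Read the raw Kafka records as a stream.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "orders")                         # placeholder topic
    .load()
    .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
)

# Append the stream to an existing Iceberg table.
query = (
    events.writeStream.format("iceberg")
    .outputMode("append")
    .option("checkpointLocation", "/tmp/checkpoints/orders")  # placeholder path
    .toTable("demo.db.orders")  # placeholder catalog.db.table
)
query.awaitTermination()
```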

🗣️ Speaker 2: Danica began her career as a software engineer in financial services and pivoted to developer relations, where she focused primarily on open source technologies under the Apache Software Foundation umbrella, such as Apache Kafka and Apache Flink. She now leads the open source advocacy efforts at Snowflake, supporting Apache Iceberg and Apache Polaris (incubating).

🎙️ Talk 3: Observing all the things: Apache Kafka® and Apache Flink® with OpenTelemetry, Mehreen Tahir, Software Engineer, New Relic

🗣️ Speaker 3: Mehreen specializes in machine learning, data science, and artificial intelligence. Mehreen is passionate about observability and the use of telemetry data to improve application performance. She actively contributes to developer communities and has a keen interest in edge analytics and serverless architecture.

*** DISCLAIMER NOTE: We are unable to cater for any attendees under the age of 18. If you would like to speak at or host our next event, please let us know! [email protected] ***

IN PERSON: Apache Kafka® x Apache Iceberg x Apache Flink®

Join us for a range of talks, including Kafka to Apache Iceberg, in London, hosted by Snowflake!


IN PERSON: Apache Kafka to Apache Iceberg examples by Snowflake

Join us for this limited event with the opportunity to learn from two experts. Please note there are only 70 spaces, and entry will be strictly limited to those who register first.

Date and Time: 🗓️ Wednesday 8th May, ⏰ 5:00 PM - 8:30 PM 🕘

Venue: Confluent Europe Ltd, 1 Bedford St, London WC2E 9HG

Attending Brands: OSO, Red Hat, Confluent

Networking: Pizza + Beer at the end

Schedule:

  • 6:00pm: Doors open
  • 6:00pm – 6:30pm: Food, drinks, and networking
  • 6:30pm – 7:00pm: Kate Stanley, Principal Software Engineer, Red Hat
  • 7:00pm – 7:30pm: Danica Fine, Staff Developer Advocate, Confluent
  • 7:30pm – 8:30pm: Additional Q&A and networking

🎙️ Talk 1: Connecting the dots: Administering Kafka Connect

Summary: Kafka Connect lets you easily get data into and out of Kafka with a low-code or even no-code approach. However, with Connect comes the burden of having to manage the Connect cluster. This session walks you through the lifecycle of your Connect cluster and covers the key administrative steps you need to understand. We'll cover everything from deciding what topology to use, to adding and removing workers, to starting and stopping connectors. We will go beyond the basics of getting started and look at how to best manage your cluster in production. This will include some key metrics to monitor, how to restart tasks, and how to manage your running connectors.

Come along to learn the best practices for managing your Connect cluster and achieving smooth running of your pipelines.
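As a small illustration of the kind of administrative operations the session covers, here is a hedged sketch that drives the standard Kafka Connect REST API from Python. The Connect URL and connector name are placeholders; only widely documented endpoints (list connectors, check status, restart, pause/resume) are used.

```python
# Sketch: day-to-day Kafka Connect administration via its REST API.
# The Connect URL and connector name are placeholders for illustration.
import requests

CONNECT_URL = "http://localhost:8083"   # placeholder Connect worker URL
CONNECTOR = "my-sink-connector"         # placeholder connector name

# List the connectors running on the cluster.
print(requests.get(f"{CONNECT_URL}/connectors").json())

# Check the status of one connector and its tasks.
status = requests.get(f"{CONNECT_URL}/connectors/{CONNECTOR}/status").json()
print(status["connector"]["state"], [t["state"] for t in status["tasks"]])

# Restart the connector (and, on newer Connect versions, only its failed tasks).
requests.post(
    f"{CONNECT_URL}/connectors/{CONNECTOR}/restart",
    params={"includeTasks": "true", "onlyFailed": "true"},
)

# Pause and later resume the connector, e.g. around maintenance windows.
requests.put(f"{CONNECT_URL}/connectors/{CONNECTOR}/pause")
requests.put(f"{CONNECT_URL}/connectors/{CONNECTOR}/resume")
```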

🗣️ Speaker: Kate Stanley, Principal Software Engineer, Red Hat

Kate is a software engineer, technical speaker, and Java Champion. She has experience running Apache Kafka on Kubernetes, developing enterprise Kafka applications, and writing connectors for Kafka Connect. Kate currently works as a principal software engineer in the Red Hat Kafka team and contributes to Strimzi. Alongside development, Kate has a passion for presenting and sharing knowledge. She has presented at conferences around the world, authored two LinkedIn Learning courses and has also written a book on Kafka Connect which was published in 2023.

🎙️ Talk 2: Apache Kafka and HA/DR: A Primer

Summary: If you're using Apache Kafka in your company for real-time insights, you know that keeping your data safe and available is critical to your operations. But do you know what your options are for implementing a high-availability, disaster recovery (HA/DR) strategy for your cluster? A good HA/DR strategy will increase the reliability of your mission-critical streaming applications by minimizing data loss and downtime during both day-to-day operations and unexpected disaster scenarios.

We'll begin by exploring how Kafka and high availability go hand in hand. We'll then introduce and dive into a number of HA/DR strategies used across the Kafka ecosystem, such as active-active and active-passive replication and multi-regional stretched clusters. From there, we'll discuss the tools needed to implement and automate many of these strategies. By the end of the session, you'll know how Kafka works to store your data safely and reliably and how to set your cluster up for success with a solid HA/DR plan.
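The talk itself focuses on strategy, but the durability side of "how Kafka works to store your data safely" ultimately shows up as broker and client configuration. The sketch below, using the confluent-kafka Python client, is an assumed illustration of common single-cluster settings (replication factor, min.insync.replicas, acks=all, idempotence); the broker address and topic name are placeholders, and multi-cluster patterns such as active-active replication need tooling like MirrorMaker 2 and are not shown.

```python
# Sketch: durability-oriented settings that underpin a Kafka HA story.
# Broker address and topic name are placeholders; multi-cluster DR tooling
# (e.g. MirrorMaker 2) is out of scope for this snippet.
from confluent_kafka import Producer
from confluent_kafka.admin import AdminClient, NewTopic

BROKERS = "localhost:9092"  # placeholder bootstrap servers

# Create a topic that can survive the loss of a broker: 3 replicas,
# and at least 2 in-sync replicas required to acknowledge a write.
admin = AdminClient({"bootstrap.servers": BROKERS})
topic = NewTopic(
    "payments",                 # placeholder topic name
    num_partitions=6,
    replication_factor=3,
    config={"min.insync.replicas": "2"},
)
for future in admin.create_topics([topic]).values():
    future.result()  # raises if creation failed

# Producer that waits for all in-sync replicas and avoids duplicate writes.
producer = Producer(
    {
        "bootstrap.servers": BROKERS,
        "acks": "all",
        "enable.idempotence": True,
    }
)
producer.produce("payments", key="order-1", value="created")
producer.flush()
```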

🗣️ Speaker: Danica Fine, Staff Developer Advocate, Confluent

Danica Fine is a Staff Developer Advocate at Confluent where she helps others get the most out of their event-driven pipelines. Prior to this role, she served as a software engineer on a streaming infrastructure team at Bloomberg where she predominantly worked on Kafka Streams- and Kafka Connect-based projects.

LIMITED SPACES: Kafka Connect & Kafka HA/DR (70 people max)

Taking care of houseplants can be difficult; in many cases, over-watering and under-watering can have the same symptoms. Remove the guesswork involved in caring for your houseplants while also gaining valuable experience in building a practical, event-driven pipeline in your own home! This session explores the process of building a houseplant monitoring and alerting system using a Raspberry Pi and Apache Kafka. Moisture and temperature readings are captured from sensors in the soil and streamed into Kafka. From there, we use stream processing to transform the data, create a summary view of the current state, and drive real-time push alerts through Telegram.

In this session, we will talk about how to ingest the data, followed by the tools, including ksqlDB and Kafka Connect, that help transform the raw data into useful information. Finally, you'll see how to use Kafka Producers and Consumers to make the entire application more interactive. By the end of this session, you’ll have everything you need to start building practical streaming pipelines in your own home. Roll up your sleeves – let’s get our hands dirty!
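As a taste of the producer/consumer ends of such a pipeline, here is a hedged sketch: a producer publishes simulated moisture readings to a Kafka topic and a consumer fires a Telegram alert when the soil looks too dry. The broker, topic, threshold, and bot credentials are placeholders, and the ksqlDB/Kafka Connect transformations described in the talk are not shown.

```python
# Sketch: the producer and consumer ends of a houseplant-monitoring pipeline.
# Broker, topic, threshold, and Telegram credentials are placeholders;
# the ksqlDB / Kafka Connect transformations from the talk are not shown.
import json
import requests
from confluent_kafka import Consumer, Producer

BROKERS = "localhost:9092"            # placeholder
TOPIC = "houseplant-readings"         # placeholder
BOT_TOKEN = "<telegram-bot-token>"    # placeholder
CHAT_ID = "<telegram-chat-id>"        # placeholder
DRY_THRESHOLD = 30.0                  # assumed moisture percentage


def publish_reading(moisture_pct: float, plant: str = "monstera") -> None:
    """Send one sensor reading (as it might come from a Raspberry Pi)."""
    producer = Producer({"bootstrap.servers": BROKERS})
    producer.produce(TOPIC, json.dumps({"plant": plant, "moisture": moisture_pct}))
    producer.flush()


def alert_on_dry_plants() -> None:
    """Consume readings and push a Telegram message when soil is too dry."""
    consumer = Consumer(
        {"bootstrap.servers": BROKERS, "group.id": "plant-alerts",
         "auto.offset.reset": "earliest"}
    )
    consumer.subscribe([TOPIC])
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        reading = json.loads(msg.value())
        if reading["moisture"] < DRY_THRESHOLD:
            requests.post(
                f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
                json={"chat_id": CHAT_ID,
                      "text": f"{reading['plant']} needs water "
                              f"({reading['moisture']:.0f}% moisture)"},
            )
```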

Talk by: Danica Fine

Databricks DATA + AI Summit 2023

This demo offers a step-by-step guide on leveraging Debezium and Kafka to replicate data from different databases into CrateDB.

Danica Fine, Senior Developer Advocate at Confluent, will join us and introduce you to Kafka. Following this introduction, our CrateDB experts will present Debezium, a distributed platform that turns your existing databases into event streams, and its role in capturing and propagating database changes.

You'll have the opportunity to get detailed instructions from our CrateDB experts for setting up connections for a popular database such as SQL Server. Finally, in this demo you'll learn about the practical implementation of data synchronization between SQL Server and CrateDB without writing any custom code.

What you will learn

  • Setting up Debezium and Kafka: This demo will guide you through the process of setting up Debezium, an open-source CDC platform, and Apache Kafka, a distributed streaming platform.
  • Replicating Data to CrateDB: We will show how to configure a Debezium connector to replicate data from the source database into CrateDB.
  • Practical demonstration of data synchronization between different database systems: Learn how to replicate changes from SQL Server to CrateDB without having to write any custom code.
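To make the "no custom code" idea concrete, here is an illustrative sketch of registering a Debezium SQL Server source connector with Kafka Connect from Python. Hostnames, credentials, and table names are placeholders, and Debezium configuration keys vary between versions, so treat this as an outline of the shape of the configuration rather than something the demo uses verbatim; the sink side into CrateDB (for example a JDBC sink connector) is registered in the same way.

```python
# Illustrative sketch: register a Debezium SQL Server source connector via
# the Kafka Connect REST API. Connection details and table names are
# placeholders, and exact config keys depend on your Debezium version.
import requests

CONNECT_URL = "http://localhost:8083"  # placeholder Kafka Connect URL

connector = {
    "name": "sqlserver-source",
    "config": {
        "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
        "database.hostname": "mssql",           # placeholder
        "database.port": "1433",
        "database.user": "debezium",            # placeholder
        "database.password": "********",        # placeholder
        "database.names": "inventory",          # placeholder database
        "topic.prefix": "shop",                 # prefix for change topics
        "table.include.list": "dbo.customers",  # placeholder table
        # Debezium keeps its schema history in a dedicated Kafka topic.
        "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
        "schema.history.internal.kafka.topic": "schema-history.shop",
    },
}

resp = requests.post(f"{CONNECT_URL}/connectors", json=connector)
resp.raise_for_status()
print("created:", resp.json()["name"])
```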

📅 July 13, 2023 🕓 4:00–4:45 pm CEST | 🕙 10:00–10:45 am EDT | 🕖 7:00–7:45 am PDT

Speakers:

  • Danica Fine, Senior Developer Advocate, Confluent
  • Marija Selakovic, Developer Advocate, CrateDB
  • Hernán Lionel Cianfagna, Senior Solutions Engineer, CrateDB

📢 Make sure to register using this link to join the live demo on July 13: https://hubs.ly/Q01WnrtJ0

How to replicate data from other databases to CrateDB with Debezium and Kafka