talk-data.com


Activities & events

Oded Nahum – Global Head of Cloud & Data Streaming Practice @ Ness, Laurentiu Bita – Sr. Data Streaming Architect @ Ness

In this session, we’ll walk through how Apache Flink was used to enable near real-time operational insights from manufacturing IIoT datasets. The goal: deliver actionable KPIs to production teams with sub-30-second latency, using streaming data pipelines built with Kafka, Flink, and Grafana. We’ll cover the key architectural patterns that made this possible, including handling structured data joins, managing out-of-order events, and integrating with downstream systems like PostgreSQL and Grafana. We’ll also share real-world performance benchmarks, lessons learned from scaling tests, and practical considerations for deploying Flink in a production-grade, low-latency analytics pipeline. The session will also include a live demo.

If you're building Flink-based solutions for time-sensitive operations—whether in manufacturing, IoT, or other domains—this talk will provide proven insights from the field.
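The speakers’ actual pipeline isn’t included in this listing, but a minimal sketch of the architecture described above (a Kafka source with event-time watermarks for out-of-order data, a windowed KPI aggregation, and a JDBC sink feeding PostgreSQL and, from there, Grafana) might look like the following Flink SQL job; the topic, broker, and table names are assumptions.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class IiotKpiPipeline {
    public static void main(String[] args) {
        // Streaming Table API environment
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Hypothetical Kafka topic of machine sensor readings; the WATERMARK clause
        // tolerates events arriving up to 10 seconds out of order.
        tEnv.executeSql(
            "CREATE TABLE sensor_readings (" +
            "  machine_id STRING," +
            "  temperature DOUBLE," +
            "  event_time TIMESTAMP(3)," +
            "  WATERMARK FOR event_time AS event_time - INTERVAL '10' SECOND" +
            ") WITH (" +
            "  'connector' = 'kafka'," +
            "  'topic' = 'iiot.sensor.readings'," +          // assumed topic name
            "  'properties.bootstrap.servers' = 'broker:9092'," + // assumed broker
            "  'scan.startup.mode' = 'latest-offset'," +
            "  'format' = 'json'" +
            ")");

        // KPI table in PostgreSQL, which Grafana can then query directly.
        tEnv.executeSql(
            "CREATE TABLE machine_kpis (" +
            "  machine_id STRING," +
            "  window_end TIMESTAMP(3)," +
            "  avg_temperature DOUBLE," +
            "  PRIMARY KEY (machine_id, window_end) NOT ENFORCED" +
            ") WITH (" +
            "  'connector' = 'jdbc'," +
            "  'url' = 'jdbc:postgresql://db:5432/kpis'," +  // assumed database
            "  'table-name' = 'machine_kpis'" +
            ")");

        // 10-second tumbling windows keep end-to-end latency well under 30 seconds.
        tEnv.executeSql(
            "INSERT INTO machine_kpis " +
            "SELECT machine_id, window_end, AVG(temperature) AS avg_temperature " +
            "FROM TABLE(TUMBLE(TABLE sensor_readings, DESCRIPTOR(event_time), INTERVAL '10' SECOND)) " +
            "GROUP BY machine_id, window_start, window_end");
    }
}
```

The upsert-style primary key is one way to keep retried windows from duplicating rows in the downstream table.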


DISCLAIMER We don't cater to attendees under the age of 18. If you want to host or speak at a meetup, please email [email protected]

flink Kafka Grafana postgresql
IN-PERSON: Apache Flink® Meetup
Sven Erik Knop – Staff Technical Instructor @ Confluent

According to Wikipedia, Infrastructure as Code is the process of managing and provisioning computer data center resources through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. The same idea applies to Kafka resources and reference data, connector plugins, connector configurations, and the stream processing jobs that clean up the data.

In this talk, we are going to discuss use cases based on the Network Rail Data Feeds, the scripts used to spin up the environment and cluster in Confluent Cloud, and the different components required for ingesting and processing the data.

This particular environment is used as a teaching tool for event stream processing with Kafka Streams, ksqlDB, and Flink. Some examples of further processing and visualisation will also be provided.
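The talk itself provisions Confluent Cloud with Terraform, which isn’t shown here. As a loose, hedged illustration of the same idea, keeping topic and reference-data definitions in version-controlled code rather than an interactive console, a minimal Kafka AdminClient sketch (broker address and topic names are hypothetical) could look like this:

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class ProvisionTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Bootstrap servers and credentials would normally come from the same
        // machine-readable configuration that a Terraform setup manages.
        props.put("bootstrap.servers", "broker:9092"); // assumed endpoint

        try (AdminClient admin = AdminClient.create(props)) {
            // The desired topics live in code and version control,
            // not in someone's memory of what they clicked in a console.
            List<NewTopic> topics = List.of(
                new NewTopic("networkrail.movements", 6, (short) 3),  // hypothetical feed topic
                new NewTopic("networkrail.reference", 1, (short) 3)
                        .configs(Map.of("cleanup.policy", "compact")) // compacted reference data
            );
            admin.createTopics(topics).all().get();
        }
    }
}
```

The real setup would hold these definitions in Terraform files so they can be reviewed, versioned, and re-applied like any other code.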

confluent cloud Terraform connectors flink Kafka ksqldb
IN-PERSON: Apache Flink® Meetup

Price - Free

Thank you to Accenture for sponsoring and hosting this event.

Join us for an In-Person User Group Meeting (LDPaC), where you can network, learn, ask questions, and meet other like-minded folks. These events are a really great opportunity to socialise in an informal learning setting.

Remember to tell your friends and the people you work with, and make sure you register as soon as you can. We will need to provide a list of names to Accenture before the event, so, to ensure there are no issues with access on the day, please make sure you have registered.

17:45 - 18:00 Networking 🤝

18:00 - 18:30 Drinks & Pizza 🍕

18:30 - 18:40 Intro 🎙️

18:40 - 19:30 Transforming SQL Authentication: Real-World Scenarios with Azure Managed Identity. - Dieter Gobeyn 🔊

Discover how to enhance the security of your SQL databases by transitioning from traditional password-based authentication to a modern, passwordless approach using Azure Managed Identity. In this session, I will explore real-world scenarios that demonstrate the practical implementation of moving from connection strings to Role-Based Access Control (RBAC) with Managed Identity for Azure applications, highlighting the benefits of eliminating passwords and simplifying access management.
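Dieter’s implementation details aren’t part of this listing; as a hedged sketch of the general passwordless approach (the server and database names are placeholders), acquiring a token for Azure SQL via azure-identity and handing it to the SQL Server JDBC driver looks roughly like this:

```java
import com.azure.core.credential.AccessToken;
import com.azure.core.credential.TokenRequestContext;
import com.azure.identity.DefaultAzureCredential;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.microsoft.sqlserver.jdbc.SQLServerDataSource;

import java.sql.Connection;

public class PasswordlessSqlExample {
    public static void main(String[] args) throws Exception {
        // On Azure this resolves to the app's managed identity; locally it can
        // fall back to developer credentials (az login, IDE sign-in, etc.).
        DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();

        // Request a token scoped to Azure SQL instead of storing a password.
        AccessToken token = credential
                .getToken(new TokenRequestContext()
                        .addScopes("https://database.windows.net/.default"))
                .block();

        SQLServerDataSource ds = new SQLServerDataSource();
        ds.setServerName("myserver.database.windows.net"); // placeholder server
        ds.setDatabaseName("mydb");                        // placeholder database
        ds.setAccessToken(token.getToken());               // no password in the connection string

        try (Connection conn = ds.getConnection()) {
            System.out.println("Connected as: " + conn.getMetaData().getUserName());
        }
    }
}
```

Access is then granted through database roles tied to the identity rather than a stored secret, which is the RBAC side of the transition.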

19:30 - 19:40 10-min Break 🥤

19:40 - 20:30 Flink-Kafka Fusion: Patterns for Building Scalable Stream Processing Applications. - Dunith Dhanushka 🔊

When properly configured, Apache Kafka and Apache Flink provide a solid foundation for building scalable and reliable stream processing applications. However, it is difficult for most data professionals to get this combination right the first time, leading to project delays and costly mistakes. Consequently, having a set of patterns for stream processing applications is essential, just as it is for other software architectures.

This talk will walk you through more than ten stream processing patterns using Kafka and Flink. First, we discuss various computational patterns covering stateless and stateful operations. Then, we look at several state management patterns for state recovery and maintaining accuracy. Finally, we discuss several non-functional patterns, especially related to deployment architecture, which help us achieve high availability, security, and cost reductions for streaming applications. For each pattern, the business motivation, sample code in Flink, and other related patterns will be provided, making it easy for the audience to grasp the essence.

This talk can benefit data practitioners of all levels, including application developers, data engineers, and architects, by providing them with blueprints for stream processing. Knowing these patterns will save time and help you avoid the mistakes that others have made in the past.
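The full pattern catalogue is the talk’s own material; as one small, hedged illustration of a stateful pattern in this spirit (a keyed deduplication step between a Kafka source and a downstream sink, with broker and topic names assumed), a Flink DataStream sketch might look like:

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class DedupPatternJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // State-recovery pattern: periodic checkpoints so keyed state survives failures.
        env.enableCheckpointing(30_000);

        // Hypothetical topic of order events, keyed by the raw message here for brevity.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")           // assumed endpoint
                .setTopics("orders")                          // assumed topic
                .setGroupId("dedup-demo")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "orders-source")
           .keyBy(value -> value)                             // real jobs would key by an ID field
           .process(new KeyedProcessFunction<String, String, String>() {
               private transient ValueState<Boolean> seen;

               @Override
               public void open(Configuration parameters) {
                   seen = getRuntimeContext().getState(
                           new ValueStateDescriptor<>("seen", Boolean.class));
               }

               @Override
               public void processElement(String value, Context ctx, Collector<String> out)
                       throws Exception {
                   // Stateful dedup: emit a record only the first time its key appears.
                   if (seen.value() == null) {
                       seen.update(true);
                       out.collect(value);
                   }
               }
           })
           .print();                                          // stand-in for a Kafka or JDBC sink

        env.execute("dedup-pattern");
    }
}
```

Enabling checkpointing is what makes the keyed state recoverable after a failure, which is the state-recovery concern the abstract calls out.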

Come and join the Leeds Data Community and start learning and networking! All are welcome!

LDPaC In-Person Meetup 12th Sep: Transforming SQL Auth & Flink-Kafka Fusion