talk-data.com
Activities & events
| Title & Speakers | Event |
|---|---|
| Real-Time Manufacturing Insights with Apache Flink and Kafka · 2025-07-03 19:00 · Oded Nahum (Global Head of Cloud & Data Streaming Practice @ Ness), Laurentiu Bita (Sr. Data Streaming Architect @ Ness). In this session, we’ll walk through how Apache Flink was used to enable near real-time operational insights from manufacturing IIoT data sets. The goal: deliver actionable KPIs to production teams with sub-30-second latency, using streaming data pipelines built with Kafka, Flink, and Grafana. We’ll cover the key architectural patterns that made this possible, including handling structured data joins, managing out-of-order events, and integrating with downstream systems like PostgreSQL and Grafana. We’ll also share real-world performance benchmarks, lessons learned from scaling tests, and practical considerations for deploying Flink in a production-grade, low-latency analytics pipeline. The session will also include a live demo. If you're building Flink-based solutions for time-sensitive operations, whether in manufacturing, IoT, or other domains, this talk will provide proven insights from the field. (See the first sketch below the table.) | IN-PERSON: Apache Flink® Meetup |
| Spinning up an Event Streaming Environment from Scratch: Confluent Cloud, Terraform, Connectors and Flink in Action · 2025-07-03 18:30 · Sven Erik Knop (Staff Technical Instructor @ Confluent). According to Wikipedia, Infrastructure as Code is the process of managing and provisioning computer data center resources through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. This also applies to resources and reference data, connector plugins, connector configurations, and the stream processes that clean up the data. In this talk, we are going to discuss the use cases based on the Network Rail Data Feeds, the scripts used to spin up the environment and cluster in Confluent Cloud, as well as the different components required for the ingress and processing of the data. This particular environment is used as a teaching tool for Event Stream Processing with Kafka Streams, ksqlDB, and Flink. Some examples of further processing and visualisation will also be provided. (See the second sketch below the table.) | IN-PERSON: Apache Flink® Meetup |
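
The first abstract centres on event-time handling: KPIs computed from Kafka-sourced IIoT data with sub-30-second latency, out-of-order events managed before results reach PostgreSQL and Grafana. As a rough illustration of that pattern only, and not the speakers' actual pipeline, here is a minimal Flink 1.x DataStream sketch in Java. The topic name, the CSV message layout, the 10-second out-of-orderness bound, the local broker address, and the per-machine count KPI are all assumptions made for the example.

```java
// Minimal sketch (illustrative assumptions throughout): read IIoT readings from
// Kafka, tolerate out-of-order events with bounded watermarks, and emit a
// per-machine reading count every 30 seconds of event time.
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

import java.time.Duration;

public class MachineKpiJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Kafka source for raw sensor readings; assumes each message is a
        // "machineId,timestampMillis,value" CSV string.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setTopics("iiot-sensor-readings")
                .setGroupId("machine-kpi-job")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source,
                        // Events arriving up to 10 s late are still assigned to
                        // the correct event-time window.
                        WatermarkStrategy
                                .<String>forBoundedOutOfOrderness(Duration.ofSeconds(10))
                                .withTimestampAssigner((line, ts) -> Long.parseLong(line.split(",")[1])),
                        "iiot-readings")
                // Count readings per machine in 30-second tumbling windows.
                .map(line -> Tuple2.of(line.split(",")[0], 1L))
                .returns(Types.TUPLE(Types.STRING, Types.LONG))
                .keyBy(t -> t.f0)
                .window(TumblingEventTimeWindows.of(Time.seconds(30)))
                .sum(1)
                // A production pipeline would write the KPI rows to PostgreSQL
                // (e.g. via the Flink JDBC connector) for Grafana to query;
                // print() keeps this sketch self-contained.
                .print();

        env.execute("machine-kpi-sketch");
    }
}
```

The watermark bound is the knob that trades completeness against latency: a tighter bound lets windows close sooner, in line with the sub-30-second target, at the cost of discarding (or side-outputting) readings that arrive later than the bound.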
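
The second abstract treats the whole environment, including the stream processes that clean up the Network Rail feed, as code; the provisioning itself would live in Terraform definition files for Confluent Cloud, which are not reproduced here. As a small, hedged illustration of just the cleanup stage, here is a Kafka Streams sketch in Java. The topic names, broker address, application id, and the cleanup rules (dropping empty messages, trimming whitespace) are placeholders, not details taken from the talk.

```java
// Minimal, self-contained sketch of a "clean up the raw feed" stream process.
// Topic names and cleanup rules are illustrative, not from the talk.
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class FeedCleanupApp {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "networkrail-feed-cleanup");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Read the raw feed, drop null/blank messages, normalise whitespace,
        // and publish to a "clean" topic for downstream processing.
        KStream<String, String> raw = builder.stream("networkrail-raw");
        raw.filter((key, value) -> value != null && !value.isBlank())
           .mapValues(value -> value.trim())
           .to("networkrail-clean");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // Close the topology cleanly on shutdown.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

Keeping the cleanup step as its own small, stateless topology is what makes it easy to tear down and recreate alongside the cluster, topics, and connectors from definition files, which is the point the abstract makes about Infrastructure as Code.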