talk-data.com
Activities & events
Real-Time Data & Apache Flink® — NYC Meetup
2026-01-15 · 23:00

Join Ververica and open-source practitioners in New York for a casual, technical evening dedicated to real-time data processing and Apache Flink®. The meetup is designed for engineers and architects who want to go beyond high-level discussion and explore how streaming systems work in practice. The evening will focus on Apache Flink®'s future direction, real-world operational challenges, and hands-on lessons from running streaming systems in production.

Agenda:
- 18:00–18:30 | Arrival, snacks & drinks
- 18:30–18:55 | Keynote: Ben Gamble — Future Shock: Apache Flink® Five Years From Now
- 18:55–19:20 | Speaker from an open-source-driven company (speaker and topic to be announced closer to the event date)
- 19:20–19:50 | Technical deep dive: Ben Gamble — What No One Tells You About Operating Streaming Systems
- 19:50–20:30 | Networking

Meetup location: WeWork in central New York; the exact address will be shared closer to the event date.
Tech Talk: Simplifying Kafka and Troubleshooting data
2024-05-14 · 21:30 · Event: Kafka

This talk appears on the agenda of "Real-time Data Streaming and Processing (NYC)" below; see that entry for the full abstract and speaker.
Real-time Data Streaming and Processing (NYC)
2024-05-14 · 21:30

Important: RSVP here (due to room capacity and building security, you must pre-register at the link for admission).

Description: Welcome to the AI meetup in New York City, in collaboration with Lenses and Imply. Join us for deep-dive tech talks on AI, GenAI, LLMs, ML, and real-time data streaming and processing, with food/drink and networking with speakers and fellow developers.

Agenda:
- 5:30pm–6:00pm: Check-in, food, and networking
- 6:10pm–8:00pm: Tech talks and Q&A
- 8:00pm–8:30pm: Open discussion, mixer, and closing

Tech Talk: Simplifying Kafka and Troubleshooting data
Speaker: Guillaume Ayme (Lenses.io)
Abstract: In this session, we'll present a real-life example of how we implemented a solution that gives developers complete control over their event-driven application (EDA), resulting in lightning-fast delivery of new streaming services and quick troubleshooting of any issues that arise. From fine-tuning data pipelines, troubleshooting data issues, resetting topic offsets, backing up to S3, and viewing your topic topology, to harnessing the power of real-time data, developers and engineers will gain invaluable insights into how Lenses.io optimizes data workflows and enhances operational efficiency. The presentation will combine technical knowledge, hands-on demonstrations, and networking opportunities. (A sketch of this kind of offset reset follows this entry.)

Tech Talk: Streaming data meets real-time analytics
Speaker: Zeke Dean (Imply)
Abstract: You're generating events. Lots of events. You're using data streaming to reliably move those events between applications. How do you use this streaming data to your best advantage, gaining insights and driving automated decision-making? Imply is the database solution for real-time analytics from the creators of Apache Druid. With high-concurrency, subsecond queries over data of any size, combining true stream ingestion with historical batch data, Imply enables fast queries with aggregations and roll-ups while maintaining access to granular data. In this tech talk, we'll look at the uses of real-time analytics and how they pair with streaming data. (A sample query sketch also follows this entry.)

Venue: Celonis, 1 World Trade Center, 70th Floor, New York, NY 10006

Speakers/Topics: Stay tuned as we update speakers and schedules. If you have a keen interest in speaking to our community, we invite you to submit topics for consideration: Submit Topics.

Sponsors: We are actively seeking sponsors to support our community, whether by offering venue space, providing food/drink, or contributing cash sponsorship. Sponsors will not only speak at the meetups and receive prominent recognition, but also gain exposure to our extensive membership base of 20,000+ AI developers in New York and 350K+ worldwide.

Community on Slack/Discord
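As a concrete illustration of the offset troubleshooting the Lenses.io abstract mentions, here is a minimal sketch of rewinding a consumer group to a point in time with the confluent-kafka Python client. This is generic Kafka tooling rather than Lenses.io's own product, and the broker address, topic name, group id, partition count, and timestamp are all hypothetical placeholders.

```python
from confluent_kafka import Consumer, TopicPartition

# A consumer configured with the (hypothetical) group whose offsets we
# want to rewind. Run this while the group's real consumers are stopped.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "orders-service",
    "enable.auto.commit": False,
})

# For offsets_for_times(), the "offset" field of each TopicPartition
# carries the target timestamp in milliseconds since the epoch.
ts_ms = 1715644800000  # hypothetical point in time to rewind to
partitions = [TopicPartition("orders", p, ts_ms) for p in range(3)]

# The broker returns, per partition, the earliest offset whose record
# timestamp is at or after the requested time.
offsets = consumer.offsets_for_times(partitions, timeout=10.0)

# Committing those offsets makes the group resume from that point.
consumer.commit(offsets=offsets, asynchronous=False)
consumer.close()
```

The same rewind can be done with the stock `kafka-consumer-groups.sh` CLI; tools like Lenses put a UI over exactly this kind of offset surgery.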
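To make the Imply abstract concrete, here is a minimal sketch of the kind of time-bucketed, rolled-up aggregation Druid is built to serve with subsecond latency, issued from Python against Druid's SQL-over-HTTP endpoint. The datasource name `events`, its `region` and `bytes` columns, and the router address are hypothetical placeholders.

```python
import requests

# Per-minute counts and byte totals for the trailing hour over a
# hypothetical "events" datasource; __time is Druid's primary timestamp.
query = """
SELECT
  TIME_FLOOR(__time, 'PT1M') AS minute,
  region,
  COUNT(*) AS event_count,
  SUM(bytes) AS total_bytes
FROM events
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
GROUP BY 1, 2
ORDER BY 1 DESC
"""

# Druid exposes SQL over HTTP on the router (port 8888 by default).
resp = requests.post(
    "http://localhost:8888/druid/v2/sql",
    json={"query": query},
    timeout=30,
)
resp.raise_for_status()
for row in resp.json():  # one JSON object per result row
    print(row)
```

Queries of this shape are what roll-up accelerates: Druid pre-aggregates rows at ingestion time into the time buckets the query groups by, while the raw stream remains queryable for drill-down.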
Easier Stream Processing On Kafka With ksqlDB
2020-03-02 · 18:00 · Event: Data Engineering Podcast
Michael Drogalis — Product Manager for ksqlDB @ Confluent, with host Tobias Macey
Summary: Building applications on top of unbounded event streams is a complex endeavor, requiring careful integration of multiple disparate systems that were engineered in isolation. The ksqlDB project was created to address this state of affairs by building a unified layer on top of the Kafka ecosystem for stream processing. Developers can work with the SQL constructs they are familiar with while automatically getting the durability and reliability that Kafka offers (a sketch of this pattern follows the episode notes). In this episode Michael Drogalis, product manager for ksqlDB at Confluent, explains how the system is implemented, how you can use it for building your own stream processing applications, and how it fits into the lifecycle of your data infrastructure. If you have been struggling with building services on low-level streaming interfaces, give this episode a listen and try it out for yourself.

Announcements:
- Hello and welcome to the Data Engineering Podcast, the show about modern data management.
- When you're ready to build your next pipeline, or want to test out the projects you hear about on the show, you'll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, a 40Gbit public network, fast object storage, and a brand-new managed Kubernetes platform, you've got everything you need to run a fast, reliable, and bullet-proof data platform. And for your machine learning workloads, they've got dedicated CPU and GPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don't forget to thank them for their continued support of this show!
- Are you spending too much time maintaining your data pipeline? Snowplow empowers your business with a real-time event data pipeline running in your own cloud account without the hassle of maintenance. Snowplow takes care of everything from installing your pipeline in a couple of hours to upgrading and autoscaling so you can focus on your exciting data projects. Your team will get the most complete, accurate, and ready-to-use behavioral web and mobile data, delivered into your data warehouse, data lake, and real-time streams. Go to dataengineeringpodcast.com/snowplow today to find out why more than 600,000 websites run Snowplow. Set up a demo and mention you're a listener for a special offer!
- You listen to this show to learn and stay up to date with what's happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don't want to miss out on this year's conference season. We have partnered with organizations such as O'Reilly Media, Corinium Global Intelligence, ODSC, and Data Council. Upcoming events include the Software Architecture Conference in NYC, Strata Data in San Jose, and PyCon US in Pittsburgh. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today.
- Your host is Tobias Macey and today I'm interviewing Michael Drogalis about ksqlDB, the open source streaming database layer for Kafka.

Interview:
- Introduction
- How did you get involved in the area of data management?
- Can you start by describing what ksqlDB is? What are some of the use cases that it is designed for?
- How do the capabilities and design of ksqlDB compare to other solutions for querying streaming data with SQL such as Pulsar SQL, PipelineDB, or Materialize?
- What was the motivation for building a unified project for providing a database interface on the data stored in Kafka?
- How is ksqlDB architected?
- If you were to rebuild the entire platform and its components from scratch today, what would you do differently?
- What is the workflow for an analyst or engineer to design and build an application on top of ksqlDB?
- What dialect of SQL is supported?
- What ki…
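The sketch referenced in the episode summary above: the core ksqlDB pattern is to declare a stream over an existing Kafka topic and then derive a continuously maintained table from it, all in SQL. Below is a minimal example submitted through ksqlDB's REST API from Python; the topic name, columns, and server address are hypothetical placeholders, not anything from the episode itself.

```python
import requests

KSQLDB_URL = "http://localhost:8088"  # hypothetical ksqlDB server

# Plain SQL: a stream over a Kafka topic, plus a materialized view
# that ksqlDB keeps up to date as new events arrive on the stream.
statements = """
CREATE STREAM pageviews (
  user_id VARCHAR,
  page    VARCHAR
) WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='JSON');

CREATE TABLE views_per_user AS
  SELECT user_id, COUNT(*) AS view_count
  FROM pageviews
  GROUP BY user_id
  EMIT CHANGES;
"""

# The /ksql endpoint accepts DDL/DML statements as a JSON payload.
resp = requests.post(
    f"{KSQLDB_URL}/ksql",
    json={"ksql": statements, "streamsProperties": {}},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```

The derived table is itself backed by a Kafka changelog topic, which is how applications built this way inherit Kafka's durability and reliability rather than re-implementing them, the point the episode summary makes about building on low-level streaming interfaces.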