talk-data.com
Activities & events
VIRTUAL: One Does Not Simply Query a Stream
2025-12-15 · 19:00

IMPORTANT: Please RSVP at https://luma.com/7lonmd1t?tk=4gVuhX

"One Does Not Simply Query a Stream" with Viktor Gamov. Viktor Gamov is a Principal Developer Advocate at Confluent, founded by the original creators of Apache Kafka®. With a rich background in implementing and advocating for distributed systems and cloud-native architectures, Viktor excels in open-source technologies. He is passionate about helping architects, developers, and operators craft systems that are not only low-latency and scalable but also highly available.

What to expect: Streaming data with Apache Kafka® has become the backbone of modern-day applications. While streams are ideal for continuous data flow, they lack built-in querying capability. Unlike databases with indexed lookups, Kafka's append-only logs are designed for high-throughput processing, not for on-demand querying, so teams must build additional infrastructure to make streaming data queryable. Traditional methods replicate this data into external stores: relational databases such as PostgreSQL for operational workloads, and object storage such as S3 queried with Flink, Spark, or Trino for analytical use cases. While sometimes useful, these methods deepen the divide between the operational and analytical estates, creating silos, complex ETL pipelines, and problems with schema mismatches, freshness, and failures.

In this session, we'll explore, with live demos, solutions that unify the operational and analytical estates and eliminate data silos. We'll start with stream processing using Kafka Streams, Apache Flink®, and SQL implementations, then cover integrating relational databases with real-time analytics databases such as Apache Pinot® and ClickHouse. Finally, we'll dive into modern approaches like Apache Iceberg® with Tableflow, which simplifies data preparation by representing Kafka topics and their associated schemas as Iceberg or Delta tables in a few clicks. There is no single right answer here; as responsible system builders, we must understand our options and trade-offs to build robust architectures.
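To make the abstract's premise concrete, here is a minimal sketch (not taken from the talk) of one of the approaches the session lists, Kafka Streams: materializing a topic into a local state store and serving point lookups against it via interactive queries. The topic name, store name, key, and broker address are all hypothetical, and a production application would use a state listener and route lookups across instances rather than the simplistic wait used here.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class OrdersLookup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-lookup");      // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed local broker

        StreamsBuilder builder = new StreamsBuilder();
        // Materialize the (hypothetical) "orders" topic into a local key-value store,
        // turning the append-only log into something that supports key lookups.
        builder.table("orders",
                Consumed.with(Serdes.String(), Serdes.String()),
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("orders-store"));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));

        // Wait until the instance is RUNNING before querying (simplistic; a real app
        // would react to state changes and handle rebalances).
        while (streams.state() != KafkaStreams.State.RUNNING) {
            Thread.sleep(100);
        }

        // Interactive query: a point lookup against the materialized store.
        // Only partitions hosted by this instance are visible here.
        ReadOnlyKeyValueStore<String, String> store = streams.store(
                StoreQueryParameters.fromNameAndType("orders-store", QueryableStoreTypes.keyValueStore()));
        System.out.println("order-42 -> " + store.get("order-42"));
    }
}
```

This is the flavor of "query a stream" that Kafka Streams offers out of the box; the session contrasts it with pushing the same data into Flink SQL, Pinot, ClickHouse, or Iceberg tables via Tableflow.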
VIRTUAL: One Does Not Simply Query a Stream
2025-12-03 · 19:00

Same session and abstract as the 2025-12-15 listing above ("One Does Not Simply Query a Stream" with Viktor Gamov). RSVP at https://luma.com/7lonmd1t?tk=4gVuhX
VIRTUAL Apache Flink®️ Meetup w/Viktor Gamov
2025-06-24 · 15:00

Join us for a live, interactive session with Viktor Gamov, co-author of Enterprise Web Development from O'Reilly and Apache Kafka® in Action from Manning, Java Champion, and Principal Developer Advocate at Confluent. More details coming soon! RSVP so you don't miss out on this interactive session. 💥 YouTube live stream link: https://www.youtube.com/watch?v=nP7L8EIa_7s
One Does Not Simply Query a Stream
2025-04-14 · 19:00
Viktor Gamov – Principal Developer Advocate @ Confluent

Streaming data with Apache Kafka® has become the backbone of modern-day applications. While streams are ideal for continuous data flow, they lack built-in querying capability. Unlike databases with indexed lookups, Kafka's append-only logs are designed for high-throughput processing, not for on-demand querying, so teams must build additional infrastructure to make streaming data queryable. Traditional methods replicate this data into external stores: relational databases such as PostgreSQL for operational workloads, and object storage such as S3 queried with Flink, Spark, or Trino for analytical use cases. While sometimes useful, these methods deepen the divide between the operational and analytical estates, creating silos, complex ETL pipelines, and problems with schema mismatches, freshness, and failures.

In this session, we'll explore, with live demos, solutions that unify the operational and analytical estates and eliminate data silos. We'll start with stream processing using Kafka Streams, Apache Flink®, and SQL implementations, then cover integrating relational databases with real-time analytics databases such as Apache Pinot® and ClickHouse. Finally, we'll dive into modern approaches like Apache Iceberg® with Tableflow, which simplifies data preparation by representing Kafka topics and their associated schemas as Iceberg or Delta tables in a few clicks. There is no single right answer here; as responsible system builders, we must understand our options and trade-offs to build robust architectures.
Melting Icebergs: Enabling Analytical Access to Kafka Data through Iceberg Projections
2025-04-14 · 18:15

An organisation's data has traditionally been split between the operational estate, for daily business operations, and the analytical estate, for after-the-fact analysis and reporting. The journey from one side to the other is today a long and torturous one. But does it have to be?

In the modern data stack, Apache Kafka is your de facto standard operational platform and Apache Iceberg has emerged as the champion of table formats to power analytical applications. Can we leverage the best of Iceberg and Kafka to create a powerful solution greater than the sum of its parts?

Yes you can, and we did!

This isn't a typical story of connectors, ELT, and separate data stores. We've developed an advanced projection of Kafka data in an Iceberg-compatible format, allowing direct access from warehouses and analytical tools.

In this talk, we'll cover:
* How we presented Kafka data to Iceberg processors without moving or transforming data upfront, with no hidden ETL!
* Integrating Kafka's ecosystem into Iceberg, leveraging Schema Registry, consumer groups, and more.
* Meeting Iceberg's performance and cost-reduction expectations while sourcing data directly from Kafka.

Expect a technical deep dive into the protocols, formats, and services we used, all while staying true to our core principles:
* Kafka as the single source of truth; no separate stores.
* Analytical processors shouldn't need Kafka-specific adjustments.
* Operational performance must remain uncompromised.
* Kafka's mature ecosystem features, like ACLs and quotas, should be reused, not reinvented.

Join us for a thrilling account of the highs and lows of merging two data giants, and stay tuned for the surprise twist at the end!
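The projection layer described in this talk is Streambased's own, so its internals aren't something we can sketch here. But to make "analytical access through an Iceberg-compatible format" concrete, here is a generic sketch of what the consuming side can look like: Spark's Java API querying a table exposed through an Iceberg REST catalog. The catalog name, endpoint, namespace, and table are hypothetical stand-ins (not Streambased's actual interface), and the Iceberg Spark runtime JAR is assumed to be on the classpath.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class IcebergProjectionQuery {
    public static void main(String[] args) {
        // A Spark session wired to a (hypothetical) Iceberg REST catalog fronting Kafka data.
        SparkSession spark = SparkSession.builder()
                .appName("iceberg-projection-query")
                .config("spark.sql.catalog.kafka_lake", "org.apache.iceberg.spark.SparkCatalog")
                .config("spark.sql.catalog.kafka_lake.type", "rest")
                .config("spark.sql.catalog.kafka_lake.uri", "http://localhost:8181")  // assumed endpoint
                .getOrCreate();

        // The analytical engine sees an ordinary Iceberg table; no Kafka-specific code here.
        Dataset<Row> df = spark.sql(
                "SELECT country, COUNT(*) AS orders FROM kafka_lake.shop.orders GROUP BY country");
        df.show();

        spark.stop();
    }
}
```

The point the abstract is making is that, with the projection approach, a query like this is served from Kafka itself rather than from a copied-out analytical store.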
IN PERSON: Apache Kafka® x Apache Iceberg™ Meetup
2025-04-14 · 16:00

Join us for an Apache Kafka® Meetup on Monday, April 14th from 6:00pm, hosted by Elaia! Elaia is a full-stack tech and deep-tech investor. We partner with ambitious entrepreneurs from inception to leadership, helping them navigate the future and the unknown. For over twenty years, we have combined deep scientific and technological expertise with decades of operational experience to back those building tomorrow. From our offices in Paris, Barcelona and Tel Aviv, we have been active partners with over 100 startups including Criteo, Mirakl, Shift Technology, Aqemia and Alice & Bob.

📍 Venue: Elaia, 21 Rue d'Uzès, 75002 Paris, France

IF YOU RSVP HERE, YOU DO NOT NEED TO RSVP @ Paris Apache Kafka® Meetup group.

🗓 Agenda:

💡 Speaker One: Roman Kolesnev, Principal Software Engineer, Streambased
Talk: Melting Icebergs: Enabling Analytical Access to Kafka Data through Iceberg Projections (abstract above)
Bio: Roman is a Principal Software Engineer at Streambased. His experience includes building business-critical event streaming applications and distributed systems in the financial and technology sectors.

💡 Speaker Two: Viktor Gamov, Principal Developer Advocate, Confluent
Talk: One Does Not Simply Query a Stream (abstract above)
Bio: Viktor Gamov is a Principal Developer Advocate at Confluent, founded by the original creators of Apache Kafka®. With a rich background in implementing and advocating for distributed systems and cloud-native architectures, Viktor excels in open-source technologies. He is passionate about assisting architects, developers, and operators in crafting systems that are not only low in latency and scalable but also highly available. As a Java Champion and an esteemed speaker, Viktor is known for his insightful presentations at top industry events like JavaOne, Devoxx, Kafka Summit, and QCon. His expertise spans distributed systems, real-time data streaming, the JVM, and DevOps. Viktor has co-authored "Enterprise Web Development" from O'Reilly and "Apache Kafka® in Action" from Manning. Follow Viktor on X (@gamussa) to stay updated on his latest thoughts on technology, his gym and food adventures, and insights into open source and developer advocacy.

*** DISCLAIMER: We cannot cater to those under the age of 18. If you would like to speak at or host a future meetup, please reach out to [email protected]
Real-Time Analytics in the Corporate World: How Apache Pinot Powers Industry Leaders
2024-03-28 · 21:30
Viktor Gamov – Principal Developer Advocate @ Confluent

Explore how industry leaders like LinkedIn, Uber Eats, and Stripe are mastering real-time data with Viktor as your guide. Discover how Apache Pinot transforms data into actionable insights instantly. Viktor will showcase Pinot's features, including the Star-Tree Index, and explain why it's a game-changer in data strategy. This session is for everyone, from data geeks to business gurus, eager to uncover the future of tech. Join us and be wowed by the power of real-time analytics with Apache Pinot!
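As a small illustration of what querying Pinot looks like from application code, here is a sketch using Pinot's JDBC client. The connection URL format, table, and columns are assumptions for illustration (point the URL at your own Pinot controller, and have the pinot-jdbc-client artifact on the classpath); aggregation-heavy queries of this shape are the kind of workload the Star-Tree Index mentioned in the abstract is designed to accelerate.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PinotQuery {
    public static void main(String[] args) throws Exception {
        // URL and port are assumptions; adjust to your Pinot controller.
        // The Pinot JDBC driver is assumed to be on the classpath (loaded via ServiceLoader).
        try (Connection conn = DriverManager.getConnection("jdbc:pinot://localhost:9000");
             Statement stmt = conn.createStatement();
             // Hypothetical "orders" table and columns; a grouped aggregation like this
             // is the query shape a star-tree index pre-aggregates.
             ResultSet rs = stmt.executeQuery(
                     "SELECT country, COUNT(*) AS orders FROM orders "
                             + "GROUP BY country ORDER BY orders DESC LIMIT 10")) {
            while (rs.next()) {
                System.out.printf("%s -> %d%n", rs.getString("country"), rs.getLong("orders"));
            }
        }
    }
}
```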
Codeless GenAI Pipelines with Flink, Kafka, NiFi
2024-03-28 · 21:30

Explore the power of real-time streaming with GenAI using Apache NiFi. Learn how NiFi simplifies data engineering workflows, allowing you to focus on creativity over technical complexities. Tim Spann will guide you through practical examples, showcasing NiFi's automation impact from ingestion to delivery. Whether you're a seasoned data engineer or new to GenAI, this talk offers valuable insights into optimizing workflows. Join us to unlock the potential of real-time streaming and witness how NiFi makes data engineering a breeze for GenAI applications!
[Tech Talk] Getting Started with Service Mesh Workshop 102: Traffic Policies
2023-08-31 · 16:00

Join Viktor Gamov, Principal Developer Advocate at Kong, for an advanced service mesh workshop. You'll learn the practical application of key Service Mesh policies, including mTLS, Rate Limiting, Retry, and VirtualOutbound. In this hands-on workshop:
– Viktor will break down the function and significance of mTLS for secure network communication.
– Rate Limiting will be explored, highlighting its importance in preventing overloads and managing resources.
– The Retry policy's role in enhancing reliability and fault tolerance will be examined.
– We'll also delve into the mechanisms of VirtualOutbound policies in controlling traffic flow.
Join us to elevate your understanding and skills in implementing these crucial service mesh policies using Kuma or Kong Mesh.
[Tech Talk] Gateway to the Future: A Brief History of Kubernetes Ingress
2023-07-13 · 16:00

Join Viktor Gamov, a Principal Developer Advocate with Kong, as he navigates the evolution of Kubernetes Ingress. Starting with the basics, he'll explain how Ingress is pivotal in managing external access to services within a Kubernetes cluster through HTTP and HTTPS routes. Viktor will also address its limitations: the challenge of managing multiple Ingress resources, inconsistencies across different controllers, and inherent limitations in handling non-L7 protocols like TCP and UDP.

The talk then shifts gears toward the future: the Gateway API. This next evolution of Kubernetes networking expands beyond HTTP/HTTPS, promising improved traffic routing and the capability to handle diverse protocols, thus addressing Ingress's limitations. Viktor will explain how the Gateway API introduces resources like GatewayClass and Gateway to provide a flexible, structured way of defining traffic routing paths, simplifying traffic management in complex environments. He will also hint at how the Kong Gateway and Ingress Controller can leverage the Gateway API to extend their capabilities, offering a robust and more flexible networking experience.

Whether you're a Kubernetes beginner or a seasoned pro, this talk promises a wealth of insights. Join us to explore Kubernetes Ingress and step into its promising future with the Gateway API.