talk-data.com
People (12 results)
See all 12 →

Activities & events
| Title & Speakers | Event |
|---|---|
|
VIRTUAL: One Does Not Simply Query a Stream
2025-12-15 · 19:00
IMPORTANT: PLEASE RSVP @ https://luma.com/7lonmd1t?tk=4gVuhX *** “One Does Not Simply Query a Stream” with Viktor Gamov. Viktor Gamov is a Principal Developer Advocate at Confluent, founded by the original creators of Apache Kafka®. With a rich background in implementing and advocating for distributed systems and cloud-native architectures, Viktor excels in open-source technologies. He is passionate about helping architects, developers, and operators craft systems that are low-latency, scalable, and highly available. What to expect: Streaming data with Apache Kafka® has become the backbone of modern-day applications. While streams are ideal for continuous data flow, they lack built-in querying capability. Unlike databases with indexed lookups, Kafka's append-only logs are designed for high-throughput processing, not for on-demand querying. This forces teams to build additional infrastructure to enable query capabilities for streaming data. Traditional methods replicate this data into external stores such as relational databases like PostgreSQL for operational workloads and object storage like S3 with Flink, Spark, or Trino for analytical use cases. While sometimes useful, these methods deepen the divide between operational and analytical estates, creating silos, complex ETL pipelines, and issues with schema mismatches, freshness, and failures. In this session, we’ll explore, with live demos, solutions that unify the operational and analytical estates and eliminate data silos. We’ll start with stream processing using Kafka Streams, Apache Flink®, and SQL implementations, then cover integration of relational databases with real-time analytics databases such as Apache Pinot® and ClickHouse. Finally, we’ll dive into modern approaches like Apache Iceberg® with Tableflow, which simplifies data preparation by seamlessly representing Kafka topics and associated schemas as Iceberg or Delta tables in a few clicks.
While there's no single right answer to this problem, as responsible system builders, we must understand our options and their trade-offs to build robust architectures. |
VIRTUAL: One Does Not Simply Query a Stream
|
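The queryability gap this abstract describes has a well-known remedy that Kafka Streams' state stores (and KTables) implement: derive a keyed view from the log. As a minimal, Kafka-free sketch in Rust (all names are illustrative), folding an append-only change stream into a map turns log scans into point lookups:

```rust
use std::collections::HashMap;

/// Fold an append-only change stream into a keyed materialized view.
/// Later events for the same key overwrite earlier ones, mirroring how
/// a Kafka Streams KTable compacts a changelog topic.
fn materialize(events: &[(&str, i64)]) -> HashMap<String, i64> {
    let mut view = HashMap::new();
    for (key, value) in events {
        view.insert(key.to_string(), *value);
    }
    view
}

fn main() {
    // An append-only log: three updates, two keys.
    let log = [("orders:42", 100), ("orders:7", 55), ("orders:42", 120)];
    let view = materialize(&log);
    // The log keeps every event; the view answers "latest value per key".
    assert_eq!(view.get("orders:42"), Some(&120));
    assert_eq!(view.len(), 2);
    println!("{:?}", view);
}
```

A real deployment would keep this view in a fault-tolerant state store and restore it from the changelog after a failure; the toy shows only the core fold.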
|
VIRTUAL: One Does Not Simply Query a Stream
2025-12-03 · 19:00
IMPORTANT: PLEASE RSVP @ https://luma.com/7lonmd1t?tk=4gVuhX *** “One Does Not Simply Query a Stream” with Viktor Gamov. Viktor Gamov is a Principal Developer Advocate at Confluent, founded by the original creators of Apache Kafka®. With a rich background in implementing and advocating for distributed systems and cloud-native architectures, Viktor excels in open-source technologies. He is passionate about helping architects, developers, and operators craft systems that are low-latency, scalable, and highly available. What to expect: Streaming data with Apache Kafka® has become the backbone of modern-day applications. While streams are ideal for continuous data flow, they lack built-in querying capability. Unlike databases with indexed lookups, Kafka's append-only logs are designed for high-throughput processing, not for on-demand querying. This forces teams to build additional infrastructure to enable query capabilities for streaming data. Traditional methods replicate this data into external stores such as relational databases like PostgreSQL for operational workloads and object storage like S3 with Flink, Spark, or Trino for analytical use cases. While sometimes useful, these methods deepen the divide between operational and analytical estates, creating silos, complex ETL pipelines, and issues with schema mismatches, freshness, and failures. In this session, we’ll explore, with live demos, solutions that unify the operational and analytical estates and eliminate data silos. We’ll start with stream processing using Kafka Streams, Apache Flink®, and SQL implementations, then cover integration of relational databases with real-time analytics databases such as Apache Pinot® and ClickHouse. Finally, we’ll dive into modern approaches like Apache Iceberg® with Tableflow, which simplifies data preparation by seamlessly representing Kafka topics and associated schemas as Iceberg or Delta tables in a few clicks.
While there's no single right answer to this problem, as responsible system builders, we must understand our options and their trade-offs to build robust architectures. |
VIRTUAL: One Does Not Simply Query a Stream
|
|
From Crypto Streams to AI-Powered Predictions
2025-12-01 · 18:30
Olena Kutsenko
– Staff Developer Advocate
@ Confluent
In this 2-hour hands-on workshop, you'll build an end-to-end streaming analytics pipeline that captures live cryptocurrency prices, processes them in real-time, and uses AI to forecast the future. Ingest live crypto data into Apache Kafka using Kafka Connect; tame that chaos with Apache Flink's stream processing; freeze streams into queryable Apache Iceberg tables using Tableflow; and forecast price trends with Flink AI. |
Crypto Streams to AI Predictions: Apache Kafka®, Apache Flink® & Apache Iceberg®
|
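The workshop's final step forecasts price trends with Flink AI. As a stand-in for that step (not the workshop's actual code), here is the simplest possible forecaster in Rust, a trailing moving average over hypothetical prices:

```rust
/// Naive forecast: predict the next price as the mean of the last `window`
/// observations. A stand-in for the workshop's Flink AI forecasting step.
fn moving_average_forecast(prices: &[f64], window: usize) -> Option<f64> {
    if window == 0 || prices.len() < window {
        return None;
    }
    let tail = &prices[prices.len() - window..];
    Some(tail.iter().sum::<f64>() / window as f64)
}

fn main() {
    // Illustrative values only, not real market data.
    let btc_usd = [60_000.0, 60_500.0, 61_000.0];
    let forecast = moving_average_forecast(&btc_usd, 3).unwrap();
    assert!((forecast - 60_500.0).abs() < 1e-9);
    println!("next-tick forecast: {}", forecast);
}
```

In the real pipeline this computation would run continuously inside Flink over the Kafka-ingested stream rather than over a fixed slice.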
|
Stop Stitching Dashboards: How Kafka + Iceberg Deliver Seamless Time Travel
2025-11-27 · 18:15
Grafana's magic comes from seeing what's happening right now and instantly comparing it to everything that has happened before. Real-time data lets teams spot anomalies the moment they emerge. Long-term data reveals whether those anomalies are new, seasonal, or the same gremlins haunting your system every quarter. But actually building this capability? That's where everything gets messy. Today's dashboards are cobbled together from two very different worlds: Long-term data living in lakes and warehouses; Real-time streams blasting through Kafka or similar systems. These systems rarely fit together cleanly, which forces dashboard developers to wrestle with: Differing processing concepts - What does SQL even mean on a stream? Inconsistent governance - Tables vs. message schemas, different owners, different rules; Incomplete history - Not everything is kept forever, and you never know what will vanish next; Maintenance drift - As pipelines evolve, your ETL always falls behind. But what if there were no separation at all? Join us for a deep dive into a new, unified approach where real-time and historical data live together in a single, seamless dataset. Imagine dashboards powered by one source of truth that stretches from less than one second ago to five, ten, or even fifteen years into the past, without stitching, syncing, or duct-taping systems together. Using Apache Kafka, Apache Iceberg, and a modern architectural pattern that eliminates the old 'batch vs. stream' divide, we'll show how to: Build Grafana dashboards that just work with consistent semantics end-to-end; Keep every message forever without drowning in storage costs; Query real-time and historical data with the same language, same governance, same everything; Escape the ETL death spiral once and for all. If you've ever wished your dashboards were both lightning-fast and infinitely deep, this talk will show you how close that future really is. |
Grafana & Friends France : édition de novembre chez Elaia
|
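The "one source of truth" the talk promises can be pictured as a single query spanning two physical stores. In this hedged Rust sketch (types and data are invented, not the Kafka or Iceberg APIs), historical rows and the real-time tail are chained and filtered by one time predicate, so the dashboard layer never stitches:

```rust
/// A reading: (timestamp in seconds, value).
type Point = (u64, f64);

/// Query one logical dataset: historical rows (as Iceberg tables would hold)
/// chained with the real-time tail (as a Kafka topic would hold), filtered
/// by a single time predicate - no stitching in the dashboard layer.
fn query_range(historical: &[Point], realtime: &[Point], from: u64, to: u64) -> Vec<Point> {
    historical
        .iter()
        .chain(realtime.iter())
        .copied()
        .filter(|(ts, _)| (from..=to).contains(ts))
        .collect()
}

fn main() {
    let historical = [(100, 1.0), (200, 2.0)];
    let realtime = [(300, 3.0)];
    let rows = query_range(&historical, &realtime, 150, 400);
    // One query crosses the historical/real-time boundary transparently.
    assert_eq!(rows, vec![(200, 2.0), (300, 3.0)]);
}
```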
|
Data Engineering for Beginners
2025-11-11
Chisom Nwokwu
– author
A hands-on technical and industry roadmap for aspiring data engineers. In Data Engineering for Beginners, big data expert Chisom Nwokwu delivers a beginner-friendly handbook for everyone interested in the fundamentals of data engineering. Whether you're interested in starting a rewarding new career as a data analyst, data engineer, or data scientist, or seeking to expand your skillset in an existing engineering role, Nwokwu offers the technical and industry knowledge you need to succeed. The book explains: database fundamentals, including relational and NoSQL databases; data warehouses and data lakes; data pipelines, including batch and stream processing; data quality dimensions; data security principles, including data encryption; data governance principles and frameworks; big data and distributed systems concepts; data engineering on the cloud; and essential skills and tools for data engineering interviews and jobs. Data Engineering for Beginners offers an easy-to-read roadmap to a seemingly complicated and intimidating subject. It addresses the topics most likely to cause a beginning data engineer to stumble, clearly explaining key concepts in an accessible way. You'll also find: a comprehensive glossary of data engineering terms; common and practical career paths in the data engineering industry; and an introduction to key cloud technologies and services you may encounter early in your data engineering career. Perfect for practicing and aspiring data analysts, data scientists, and data engineers, Data Engineering for Beginners is an effective and reliable starting point for learning an in-demand skill. It's a powerful resource for everyone hoping to expand their data engineering skillset and upskill in the big data era. |
O'Reilly Data Engineering Books
|
|
Delta Force: What the Fuss with Fluss in Flink 2.x
2025-10-23 · 19:00
Anton Borisov
– Principal Data Engineer
@ Fresha
The next generation of streaming isn't about faster pipelines, but about smarter connections. DeltaJoin, a new operator in Apache Flink, reimagines stream joins by moving from brute-force state to change-driven computation. Paired with Fluss, Flink's purpose-built storage layer, it enables systems that are real-time, scalable, and cost-efficient. Anton will show how DeltaJoin and Fluss shift streaming architecture from ephemeral flows to durable, queryable state that bridges real-time processing with lakehouse patterns. Drawing on production experience, he'll demonstrate how these innovations reduce join costs, simplify architectures, and unlock new possibilities for real-time analytics. Attendees will leave with a vision of Flink 2.x as the backbone for event-driven systems and modern data platforms. |
Message Tracking, Fluss in Apache Flink 2.x, & Kafka-to-Iceberg Transformation
|
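This is not Flink's actual DeltaJoin operator, but its core idea, reacting to each change by probing the other side's keyed state instead of re-joining full datasets, can be sketched in a few lines of Rust:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy)]
enum Side { Left, Right }

/// Toy change-driven join: each arriving change updates its side's keyed
/// state and probes the other side, emitting joined pairs incrementally -
/// the spirit of a delta join, not Flink's implementation.
fn delta_join(changes: &[(Side, &str, i64)]) -> Vec<(String, i64, i64)> {
    let mut left: HashMap<String, i64> = HashMap::new();
    let mut right: HashMap<String, i64> = HashMap::new();
    let mut out = Vec::new();
    for &(side, key, val) in changes {
        match side {
            Side::Left => {
                left.insert(key.to_string(), val);
                if let Some(&r) = right.get(key) {
                    out.push((key.to_string(), val, r));
                }
            }
            Side::Right => {
                right.insert(key.to_string(), val);
                if let Some(&l) = left.get(key) {
                    out.push((key.to_string(), l, val));
                }
            }
        }
    }
    out
}

fn main() {
    let changes = [
        (Side::Left, "user-1", 10),
        (Side::Right, "user-1", 99), // match: emits (user-1, 10, 99)
        (Side::Right, "user-2", 7),  // no left row yet: emits nothing
    ];
    let joined = delta_join(&changes);
    assert_eq!(joined, vec![("user-1".to_string(), 10, 99)]);
}
```

The cost win the abstract claims comes from exactly this shape: work per change is one state update plus one lookup, instead of state proportional to both full streams.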
|
Delta Force: What the Fuss with Fluss in Flink 2.x
2025-10-23 · 19:00
Anton Borisov
– Principal Data Engineer
@ Fresha
The next generation of streaming isn't about faster pipelines, but about smarter connections. DeltaJoin, a new operator in Apache Flink, reimagines stream joins by moving from brute-force state to change-driven computation. Paired with Fluss, Flink's purpose-built storage layer, it enables systems that are real-time, scalable, and cost-efficient. Anton will show how DeltaJoin and Fluss shift streaming architecture from ephemeral flows to durable, queryable state that bridges real-time processing with lakehouse patterns. Drawing on production experience, he'll demonstrate how these innovations reduce join costs, simplify architectures, and unlock new possibilities for real-time analytics. Attendees will leave with a vision of Flink 2.x as the backbone for event-driven systems and modern data platforms. |
IN PERSON: Tooling for running Apache Kafka in Production
|
|
The Agentic Future: How Streaming is Evolving for AI w/ Tyler Akidau
2025-10-15 · 07:32
Tyler Akidau
– guest
,
Joe Reis
– founder
@ Ternary Data
The world of data is being reset by AI, and the infrastructure needs to evolve with it. I sit down with streaming legend Tyler Akidau to discuss how the principles of stream processing are forming the foundation for the next generation of "agentic AI" systems. Tyler, who was an AI cynic until recently, explains why he's now convinced that AI agents will fundamentally change how businesses operate and what problems we need to solve to deploy them safely. Key topics we explore: From Human Analytics to Agentic Systems: How data architectures built for human analysis must be re-imagined for a world with thousands of AI agents operating at machine speed. Auditing Everything: Why managing AI requires a new level of governance where we must record all data an agent touches, not just metadata, to diagnose its complex and opaque behavior. The End of Windowing's Dominance: Tyler reflects on the influential Dataflow paper he co-authored and explains why he now sees a table-based abstraction as a more powerful and user-friendly model than focusing on windowing. The D&D Alignment of AI: Tyler's brilliant analogy for why enterprises are struggling to adopt AI: we're trying to integrate "chaotic" agents into systems built for "lawful good" employees. A Reset for the Industry: Why the rise of AI feels like the early 2010s of streaming, where the problems are unsolved and everyone is trying to figure out the answers. |
The Joe Reis Show |
|
AWS User Group Berlin Session - October 2025
2025-10-14 · 16:30
Dear Community, Another month with two great speakers gladly joining us. Besides, we kept our promise! The AWS DevTools Hero from Cologne joins us with his remarkable talk presented at Summit Hamburg. This month is also special, because the event is self-hosted. You read it right! We have our own place now and we organise this month's event ourselves. More details below; we look forward to meeting you all in our new home, NLND Berlin! =================================================== 18:30 - Warming up and networking chat 18:45 - Intro by AWS UG Berlin 19:00 - 19:30 - Christian Bonzelet // From Glass-To-Glass: Building Live Streaming Solutions with AWS Elemental Services You want to build the next Twitch? You are sure it should run on AWS but you don’t know how to start? Or maybe you're just curious about how streaming giants deliver smooth video experiences to millions of viewers? This session is for you! Building video streaming solutions used to be a complex journey requiring deep expertise in protocols, codecs, and infrastructure management. But I've got good news: video processing has evolved into a commodity, from complex infrastructure to composable cloud services. Let me guide you through building a professional livestreaming solution on AWS. From raw camera ingest to engaging viewer experience, you will discover how AWS Elemental services can be pieced together like building blocks to create powerful streaming solutions. After this session, you will be all set to turn your streaming ideas into reality. Now, go stream! 19:30 - 20:00 - Lukas Müller // Beyond the Code: How We Build PartyRock at AWS PartyRock enables anyone to build and share AI-powered apps without writing code - but how do you build such a product at AWS? I'll pull back the curtain on PartyRock's architecture, and spoiler: it's less complex than you might expect. We run completely serverless with a surprisingly straightforward stack.
But simple doesn't mean easy - I'll share technical lessons learned that you'll want to consider for your own GenAI services. Then we'll tackle the real challenge: users understand chat interfaces, but PartyRock is something different. How do you explain to non-technical users that they can create AI apps by connecting components instead of having conversations? You'll leave with practical architectural patterns, technical gotchas to avoid, and insights about the often-overlooked challenge of making GenAI genuinely accessible beyond just chat interfaces. 20:00 - 20:20 - Networking break with food, snacks and drinks 20:20 - 21:00 - Common Q&A held by Christian & Lukas =================================================== Very Important: There is no "waitlist" for our regular sessions. However, there are limited seats available. If you want to make sure you can attend: Register yourself with your "full name" here at meetup.com Arrive on time - seats are first come, first served. As soon as there are seats available, you are welcome to join with your registration. In case there are no more seats available, we won't be able to let you join us this time. We thank you very much for your understanding! =================================================== Additional Information This event is wheelchair friendly. Help us spread the word, and invite your friends & colleagues! If you're attending with a wheelchair and need assistance, please mail us: [email protected] for further details. Would you like to host AWS UG Berlin events at your company? Register here Would you like to speak at AWS UG Berlin sessions? Submit your talk here |
AWS User Group Berlin Session - October 2025
|
|
A deep dive into Ethereum and the Reth Ethereum client.
2025-09-30 · 17:30
Blockchain networks are built from multiple clients that communicate via a peer-to-peer (P2P) network. For a long time, the Go Ethereum client (Geth) has been the dominant force in blockchain. However, the Rust Ethereum client (Reth) is a significant challenger, being faster and more compact than the Go equivalent, costing less to run and able to handle breakneck execution speeds. Rust brings modularity and high performance to Ethereum. With Geth, it is necessary to fork the repo, but with Reth, you can add components such as consensus whilst using Reth as a library. |
|
|
Building AI Copilots in Rust with Rig
2025-09-30 · 17:30
AI copilots are reshaping how we interact with software—but what makes them really work under the hood? In this talk, we unpack the design patterns and architectural choices that go into building a reliable copilot, then show how Rust and Rig make it practical to implement. From streaming responses to modular agents, we'll cover the key techniques while demystifying systems that enhance and power many developers' workflows. |
|
|
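Rig's real API is not reproduced here; as a purely illustrative Rust sketch of the "modular agents" idea the talk mentions, one trait lets model calls, post-processing, and streaming layers compose and be swapped independently:

```rust
/// Hypothetical sketch of a "modular agent" - not Rig's actual API.
/// The point: a copilot is a pipeline of small components behind one
/// trait, so tools, retrievers, and models can be swapped independently.
trait Agent {
    fn respond(&self, prompt: &str) -> String;
}

/// One module: trims and echoes input, standing in for a model call.
struct EchoModel;
impl Agent for EchoModel {
    fn respond(&self, prompt: &str) -> String {
        format!("echo: {}", prompt.trim())
    }
}

/// A wrapper module that post-processes another agent's output, the way
/// middleware might redact, format, or stream a response.
struct Uppercase<A: Agent>(A);
impl<A: Agent> Agent for Uppercase<A> {
    fn respond(&self, prompt: &str) -> String {
        self.0.respond(prompt).to_uppercase()
    }
}

fn main() {
    // Composition: the wrapper and the inner "model" share one interface.
    let copilot = Uppercase(EchoModel);
    assert_eq!(copilot.respond("  hi  "), "ECHO: HI");
}
```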
Wingfoil: Stream-Oriented Programming for Latency-Critical Systems
2025-09-30 · 17:30
Wingfoil is a blazingly fast, highly scalable stream processing framework designed for ultra-low-latency use cases such as electronic trading and real-time AI. Embracing stream-oriented programming makes it simple to receive, process, and distribute streaming data. This talk will explore where stream-oriented techniques are most effective and demonstrate how Wingfoil can be leveraged to build robust, high-performance systems. |
|
|
REDscript - building a compiler and language tooling in Rust
2025-09-30 · 17:30
My journey in reverse engineering the Cyberpunk 2077 game engine culminated in the creation of a new programming language and a comprehensive development toolchain, which included a compiler, decompiler, and additional tools. |
|
|
🌈 Gopheria : Go 1.25⚬Démos ⚬ Bavardages ⚬ Breuvages
2025-09-30 · 16:30
Hello Gophers, with September winding down the co-orgs are on vacation or busy with the back-to-school rush, but we figured that shouldn't stop us from chatting over a good drink, a pretext to get together in an informal setting and meet the new gopherettes and gophers. Talk news, OSS projects, wins/frictions, and swap prompts :) Suggested topics
Guest stars
(Our apologies if you're not on the list: ping Fred tonight to let us know about your exploits or superpowers! :>) To reach the co-orgs: either via Meetup messaging here (but sometimes we miss the notifications), OR better: by email at gaufres à golang paris .org. See you soon o/ The Golang Paris team 🌈 |
🌈 Gopheria : Go 1.25⚬Démos ⚬ Bavardages ⚬ Breuvages
|
|
Caroline Morton
– medical doctor, epidemiologist, software engineer, PhD candidate
@ Women in Rust
I don't have a background in functional programming - and I never set out to write it. But somewhere between writing trait-based epidemiological pipelines, composing data transformations, and leaning hard on Result, enums, and pattern matching, I started hearing from others: 'That's pretty functional.' In this talk, I'll explore what it means to write functional-ish Rust as someone solving real-world scientific problems. I'll walk through the patterns I reach for - like chaining iterators, avoiding shared state, and embracing expressive types - and reflect on which functional programming ideas emerge naturally in Rust, even if you're not trying. I'll also share how designing for epidemiologists - most of whom are used to chaining functions in Python (like Pandas) or R - has pushed me toward creating ergonomic Rust APIs with Python and R bindings. These tools aim to feel familiar to scientists while leveraging Rust's power and safety under the hood. This is a talk for functional programmers curious about Rust, and for Rustaceans wondering if they've been functional all along. No formal theory required - just real code, real use cases, and a pragmatic perspective from someone building public health tools in Rust. |
Women in Scala X Rust: Functional Programming in Rust & Streams with Aquascape
|
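The patterns the talk names, enums, Result, pattern matching, and chained iterators, look like this in practice (a hypothetical epidemiology example, not the speaker's code):

```rust
/// Hypothetical disease status, illustrating the "functional-ish" Rust
/// patterns the talk names: enums, Result, and chained iterators.
enum Status { Susceptible, Infected, Recovered }

/// Parse a raw field into a typed status; errors are values, not panics.
fn parse_status(s: &str) -> Result<Status, String> {
    match s {
        "S" => Ok(Status::Susceptible),
        "I" => Ok(Status::Infected),
        "R" => Ok(Status::Recovered),
        other => Err(format!("unknown status: {other}")),
    }
}

/// Count infections in a raw feed by composing transformations instead of
/// mutating shared state; collect() folds the Results, failing fast on the
/// first bad record.
fn count_infected(raw: &[&str]) -> Result<usize, String> {
    let parsed = raw.iter().map(|s| parse_status(s)).collect::<Result<Vec<_>, _>>()?;
    Ok(parsed.iter().filter(|st| matches!(st, Status::Infected)).count())
}

fn main() {
    assert_eq!(count_infected(&["S", "I", "I", "R"]), Ok(2));
    assert!(count_infected(&["S", "X"]).is_err());
}
```

Nothing here requires knowing functional programming theory; the compiler's type system simply rewards this style, which is the talk's thesis.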