talk-data.com
People (598 results)
Companies (3 results)
Activities & events
Data Engineers London: Real Time Data - January 2026
2026-01-22 · 18:00

IMPORTANT: Please RSVP at https://www.meetup.com/data-engineers-london/events/312450363/

Join us at our first event of the year at The Information Lab on the historic Watling Street in the City of London 🙌 We will kick off 2026 by delving into real-time data with our speakers Sam, Nicoleta & Anton. This event is run in collaboration with Confluent.

6pm: Doors Open
6:30pm: Talks Start

🗣️ The Speakers 🗣️

Load-In to Lights-Out: Data Engineering the World's Biggest Tours and Live Events
Sam Malcolm, Head of Architecture & Engineering at Centrus (Sam's LinkedIn)
Sam's session dives into lessons from large-scale live event data systems, handling over 10 billion data points per second for global tours like Beyoncé, Coldplay, and Glastonbury. He connects the extreme demands of real-time analytics and high-performance networking to modern cloud data practices, showing how the same principles of speed, resilience, and precision apply when designing reliable, scalable data platforms today.

Should I Stream or Should I Join: From Regular to Delta Joins in Apache Flink
Nicoleta Lazar, Senior Data Engineer at Fresha & Anton Borisov, Principal Engineer at Fresha (Nicoleta's LinkedIn, Anton's LinkedIn)
Joins in the streaming world are where the fun stops and the tradeoffs start. State that grows forever, latency that spikes unpredictably, watermarks that never quite behave: every Flink developer has war stories about these. In this session, Anton Borisov and Nicoleta Lazar break down the join landscape in Apache Flink (see the sketch after this listing):
→ Regular joins and the state explosion problem
→ Interval joins: when they work, when they don't
→ Temporal joins and the versioned table dance
→ Lookup joins: the escape hatch and its hidden costs
→ Delta joins: the new kid, how Fluss enables them, and why it matters

Talks finish by 8pm and there will be a break between the talks. Afterwards, we may head to a pub to continue chatting. You can sign up by subscribing to this event.

🚨 IMPORTANT: Please bring a valid form of ID. See you all on the 22nd of January 🤩 Happy Networking 🍻

Check out Meetup groups run by Confluent:
By attending this event, you agree to abide by our rules of conduct:

*** If you are interested in speaking at or hosting a meetup, please reach out to [email protected]
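To give a flavor of the tradeoff the Flink talk opens with, here is a minimal PyFlink sketch contrasting a regular join, whose state can grow without bound, with an interval join, whose state is bounded by the time window. The table names, schemas, and the datagen connector are illustrative assumptions, not materials from the talk.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Streaming Table API environment (a minimal sketch; tables are hypothetical).
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE orders (
        order_id STRING,
        order_time TIMESTAMP(3),
        WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND
    ) WITH ('connector' = 'datagen')
""")
t_env.execute_sql("""
    CREATE TABLE payments (
        order_id STRING,
        pay_time TIMESTAMP(3),
        WATERMARK FOR pay_time AS pay_time - INTERVAL '5' SECOND
    ) WITH ('connector' = 'datagen')
""")

# Regular join: Flink must retain ALL rows of both sides in state, because a
# future row on either side may still match -- the "state explosion" problem.
regular = t_env.sql_query("""
    SELECT o.order_id, p.pay_time
    FROM orders o JOIN payments p ON o.order_id = p.order_id
""")

# Interval join: the time predicate bounds state retention; rows outside the
# window can be dropped once watermarks pass.
interval = t_env.sql_query("""
    SELECT o.order_id, p.pay_time
    FROM orders o JOIN payments p
      ON o.order_id = p.order_id
     AND p.pay_time BETWEEN o.order_time AND o.order_time + INTERVAL '15' MINUTE
""")

# Either query could be run with, e.g., interval.execute().print()
```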
Power BI & Fabric Barcelona a Telecos!
2026-01-22 · 17:00

We are back with a new Power BI & Fabric Barcelona event, this time with two parallel tracks to suit different levels: an afternoon designed for learning, seeing real-world cases, discovering new platform capabilities, and closing the day networking with the community.

🕒 Agenda
18:00 – 18:15 | Welcome: opening of the event and welcome from the organizers.

🟦 Beginner Track (Track Iniciación)
Aimed at those starting out with Power BI or wanting to build good habits from the start.
18:20 – 18:50 | First Steps with Power BI – Marcos Majó Boter. An introductory session on what Power BI is, how a basic model is structured, and how to start building useful reports from day one.
18:55 – 19:25 | 7 Deadly Sins in Power BI – Carlos Javier Sosa Marquina. A tour of the most common Power BI mistakes, explained clearly and with practical examples so you can learn to avoid them.
19:30 – 20:00 | After Christmas comes VertiPaq: sorting, compression, and performance – Antonio Jurado. How VertiPaq works under the hood and how sorting and compression affect the performance of Power BI models.

🟪 New Features Track (Track Novedades)
Aimed at technical profiles who want to go deeper into Microsoft Fabric and its advanced capabilities.
18:20 – 18:50 | My first Fabric notebook – Oscar Garzón González. A hands-on introduction to notebooks in Fabric for working with data beyond DAX.
18:55 – 19:25 | Designing and Operationalizing Data Agents with Microsoft Fabric – Samantha Cruz. Design, deployment, and operation of Data Agents in Fabric, focusing on architecture and real-world scenarios.
19:30 – 20:00 | Making Fabric Warehouse shine with dbt – Vincent Goller. How to combine Fabric Warehouse with dbt to build robust, governed, and scalable analytical models.

20:00 – 21:05 | Networking: closing with time to share experiences, chat with the speakers, and connect with other community members.

An event for both those just starting with Power BI and those already working with advanced Fabric scenarios. We look forward to seeing you!
Data, Compute & AI: The Future of Real Time at Scale
2026-01-22 · 07:30

>>> Important note: please register for the event HERE

Join us in Paris for a morning dedicated to the architectures and technologies behind the next generation of high-performance data systems. This meetup brings together experts from AWS, Aerospike, and Adikteev to explore how modern platforms deliver real-time data, scalable AI workloads, and cloud-native resilience. Learn how AWS EC2 Graviton is redefining compute efficiency, why P99 latency determines your applications' real-world performance, and how Adikteev migrated a billion-scale counter infrastructure with zero downtime. You will also see how Aerospike's low-latency, real-time architecture supports AI-driven decisions at global scale. The morning closes with a technical panel, a Q&A session, and networking time with the engineers, architects, and experts shaping the future of real-time systems. Register now and join the conversation about the next generation of real-time data, compute, and AI architectures.

>>> Important note: please register for the event HERE

*** Event Program ***
08:30-09:00 - Drinks and networking
09:00-09:15 - Introduction: Performance, Scalability, and TCO, the new standards of real-time infrastructure. Pierre Berard, Regional Manager Southern Europe, Aerospike
09:15-09:35 - Designing high-performance architectures with AWS EC2 Graviton. Romain Legret, Specialist Solutions Architect - Efficient Compute, AWS. AWS customers launch tens of billions of EC2 instances every year, choosing from an ever-wider range of compute, storage, memory, and network options. This session shows how innovations such as the Nitro System and Graviton processors offload work to hardware, improving performance and security, and how these technologies unlock use cases that were previously out of reach for your workloads.
09:35-10:05 - P99 is lying to you. Nicolas Wlodarczyk, Sales Engineer, Aerospike. Why your application's performance is dictated by its slowest transaction, and how PayPal uses real-time transactions to improve fraud detection while cutting costs. Like PayPal's fraud fight, modern environments increasingly rely on distributed architectures and microservices to run applications and websites. We will walk through real-world cases, such as PayPal and TomTom, to show how to improve tail latency while reducing the total cost of ownership (TCO) of your real-time platform. (A small illustration of why tail latency dominates follows this listing.)
10:05-10:15 - Migrating a billion-scale counter infrastructure with zero downtime. Seiji Fouquet, Senior Site Reliability Engineer, Adikteev; Youcef Sebiat, Data Engineering Team Lead, Adikteev. Learn how Adikteev migrated its high-performance counter database from ScyllaDB to Aerospike with no service interruption. We will cover the technical challenges of moving a workload of 1M reads/s that loads 300 GB of data daily, our migration strategy, and the key lessons from this infrastructure modernization.
10:15-10:35 - Scaling real-time systems: lessons from the field. Seiji Fouquet, Senior Site Reliability Engineer, Adikteev; Youcef Sebiat, Data Engineering Team Lead, Adikteev; moderated by Pierre Berard, Regional Manager Southern Europe, Aerospike. Engineering leaders at Adikteev share hands-on experience operating and scaling real-time data systems in production: architecture choices, performance challenges, and what it really takes to run low-latency platforms at scale.
10:35-10:45 - Q&A
10:45-11:15 - Meet & mingle

>>> Important note: please register for the event HERE
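The "P99 is lying to you" session argues that tail latency, not the average, governs user experience. A small self-contained illustration with made-up latency numbers shows why: when a page fans out to many backend calls, the slow 1% is hit far more often than the raw percentile suggests.

```python
import numpy as np

# Toy latency sample (ms): mostly fast requests, with a heavy tail.
rng = np.random.default_rng(42)
latencies = np.concatenate([
    rng.normal(10, 2, 9_900),    # typical requests
    rng.normal(250, 50, 100),    # the slow 1%
])

p50, p99 = np.percentile(latencies, [50, 99])
print(f"p50 = {p50:.1f} ms, p99 = {p99:.1f} ms")

# A page issuing 50 parallel backend calls sees the tail far more often:
# P(at least one call slower than the single-call p99) = 1 - 0.99**50
print(f"chance a 50-call page hits the tail: {1 - 0.99**50:.0%}")  # ~39%
```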
Jan 14 - Best of NeurIPS
2026-01-14 · 17:00

Welcome to the Best of NeurIPS series, your virtual pass to some of the groundbreaking research, insights, and innovations that defined the conference, live-streamed from the authors to you. Jan 14, 2026, 9 AM Pacific, online. Register for the Zoom!

EgoExOR: An Ego-Exo-Centric Operating Room Dataset for Surgical Activity Understanding
Operating rooms (ORs) demand precise coordination among surgeons, nurses, and equipment in a fast-paced, occlusion-heavy environment, necessitating advanced perception models to enhance safety and efficiency. Existing datasets provide either partial egocentric views or sparse exocentric multi-view context, but do not explore a comprehensive combination of both. We introduce EgoExOR, the first OR dataset and accompanying benchmark to fuse first-person and third-person perspectives. Spanning 94 minutes (84,553 frames at 15 FPS) of two emulated spine procedures, Ultrasound-Guided Needle Insertion and Minimally Invasive Spine Surgery, EgoExOR integrates egocentric data (RGB, gaze, hand tracking, audio) from wearable glasses, exocentric RGB and depth from RGB-D cameras, and ultrasound imagery. Its detailed scene graph annotations, covering 36 entities and 22 relations (568,235 triplets), enable robust modeling of clinical interactions, supporting tasks like action recognition and human-centric perception. We evaluate the surgical scene graph generation performance of two adapted state-of-the-art models and offer a new baseline that explicitly leverages EgoExOR's multimodal and multi-perspective signals. This dataset and benchmark set a new foundation for OR perception, offering a rich, multimodal resource for next-generation clinical perception.
About the Speaker: Ege Özsoy is a final-year PhD student researching multimodal computer vision and vision-language models for surgical scene understanding, focusing on semantic scene graphs, multimodality, and ego-exocentric modeling in operating rooms.

SANSA: Unleashing the Hidden Semantics in SAM2 for Few-Shot Segmentation
Few-shot segmentation requires recognizing novel object categories from only a few annotated examples, demanding both accurate mask generation and strong visual correspondence. While Segment Anything 2 (SAM2) provides powerful prompt-based segmentation and built-in feature matching, its representations are entangled with tracking-specific cues that limit higher-level semantic generalization. We show that SAM2 nonetheless encodes rich latent semantic structure despite its class-agnostic training. To leverage this, we introduce SANSA, a lightweight framework that makes this structure explicit and adapts SAM2 for few-shot segmentation with minimal modifications. SANSA achieves state-of-the-art generalization performance, outperforms generalist in-context methods, supports flexible prompting, and remains significantly faster and smaller than prior approaches.
About the Speaker: Claudia Cuttano is a PhD student in the VANDAL Lab at Politecnico di Torino, currently on a research visit at TU Darmstadt with Prof. Stefan Roth in the Visual Inference Lab. Her work centers on semantic segmentation, particularly multi-modal scene understanding and leveraging foundation models for pixel-level vision tasks.

Nested Learning: The Illusion of Deep Learning Architectures
We present Nested Learning (NL), a new learning paradigm for continual learning that views machine learning models and their training process as a set of nested and/or parallel optimization problems, each with its own context flow, update frequency, and learning algorithm. Based on NL, we design a new architecture, called Hope, that is capable of continual learning and of modifying itself when needed. (A toy illustration of the multi-frequency idea appears in the first sketch after this listing.)
About the Speaker: Ali Behrouz is a Ph.D. student in the Computer Science Department at Cornell University and a research intern at Google Research. His research spans deep learning architectures, continual learning, and neuroscience, and has appeared at NeurIPS, ICML, KDD, WWW, CHIL, and VLDB, among other venues. His work has earned two Best Paper awards, a Best Paper Honorable Mention, a Best Paper Award candidacy, and oral and spotlight presentations.

Are VLM Explanations Faithful? A Counterfactual Testing Approach
VLMs sound convincing, but are their explanations actually true? This talk introduces Explanation-Driven Counterfactual Testing (EDCT), a simple, model-agnostic method that evaluates whether VLM explanations align with the evidence the models truly use. By perturbing the very features a model claims to rely on, EDCT exposes mismatches between stated reasoning and real decision pathways. I will show surprising failure cases across state-of-the-art VLMs and highlight how EDCT can guide more trustworthy explanation methods. (A schematic sketch of this testing loop appears in the second sketch after this listing.)
About the Speaker: Santosh Vasa is a Machine Learning Engineer at Mercedes-Benz R&D North America, working on multimodal perception and VLM safety for autonomous driving. He co-authored the EDCT framework and focuses on explainability, counterfactual testing, and trustworthy AI.
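The Nested Learning abstract describes a model as optimization problems that update at different frequencies. Here is a toy illustration of that one idea, an assumption-laden sketch and not the paper's Hope architecture: one parameter is optimized every step, another only every 100 steps, so each "level" runs at its own frequency on the same data stream.

```python
import numpy as np

# Toy multi-frequency optimization (NOT the paper's method): a fast inner
# parameter updates every step; a slow outer parameter consolidates it
# only every 100 steps, on a streaming regression task y = 3x + noise.
rng = np.random.default_rng(0)
fast_w, slow_w = 0.0, 0.0
lr_fast = 0.1

for step in range(1, 1001):
    x = rng.normal()
    y = 3.0 * x + rng.normal(0.0, 0.1)   # streaming target
    pred = (slow_w + fast_w) * x         # prediction combines both levels
    grad = (pred - y) * x                # gradient of squared error
    fast_w -= lr_fast * grad             # inner problem: updates every step
    if step % 100 == 0:                  # outer problem: updates rarely,
        slow_w += fast_w                 # consolidating the fast weights
        fast_w = 0.0                     # and resetting the inner level

print(f"slow={slow_w:.2f} fast={fast_w:.2f} combined={slow_w + fast_w:.2f}")
```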
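The EDCT abstract describes perturbing the evidence a model claims to rely on and checking whether the answer changes. The sketch below is a schematic rendering of that loop; the "VLM" is a trivial mock and the helper names are hypothetical stand-ins, not the authors' code.

```python
import numpy as np

def mask_regions(image, regions):
    """Occlude the image regions an explanation cites as evidence."""
    out = image.copy()
    for (r0, r1, c0, c1) in regions:
        out[r0:r1, c0:c1] = 0.0
    return out

class MockVLM:
    """Stand-in model: answers from the top-left patch and cites it."""
    def answer_with_explanation(self, image, question):
        bright = image[:8, :8].mean() > 0.5
        return ("yes" if bright else "no"), [(0, 8, 0, 8)]
    def answer(self, image, question):
        return self.answer_with_explanation(image, question)[0]

def edct_flags_unfaithful(vlm, image, question):
    answer, claimed = vlm.answer_with_explanation(image, question)
    new_answer = vlm.answer(mask_regions(image, claimed), question)
    # If removing the evidence the model claims to use leaves the answer
    # unchanged, the explanation is flagged as unfaithful.
    return answer == new_answer

img = np.ones((16, 16))
# Prints False: the mock really uses the patch it cites, so it is faithful.
print(edct_flags_unfaithful(MockVLM(), img, "Is the top-left bright?"))
```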
|
The State Of Marketing Measurement In 2025
2025-12-16 · 17:00
🎙️ Speakers: Thomas Wiecki, Niall Oulton, Tim McWilliams, Carlos Trujillo, Kemble Fletcher, Evan Wimpey \| ⏰ Time: 16:00 UTC / 9:00 AM PT / 12:00 PM ET / 5:00 PM Berlin Marketing measurement is evolving faster than most teams can keep up, and 2025 pushed every model, method, and assumption to its limits. With shifting budgets, new privacy pressures, and a market full of hype disguised as innovation, the real question is: what actually worked? In this session, the PyMC Labs team opens the curtain on what we learned from working hands-on with some of the world’s leading brands, across MMM, CLV, forecasting, causal inference, generative AI, and fully custom Bayesian models. Instead of polished slides or scripted talking points, this roundtable is a guided, honest conversation about what this year revealed, and what 2026 will demand from marketing leaders. Drawing from dozens of real client engagements, model builds, and experiments, you’ll see how our team approached this year’s hardest measurement problems, where the industry is heading, and how to think more clearly about marketing effectiveness in a chaotic environment. You’ll learn:
Join us for a sharp, candid, and practitioner-led discussion that surfaces the lessons, surprises, and strategies shaping smarter marketing decisions, not theory, but what we’ve seen in the trenches. 📜 Outline of Talk / Agenda:
💼 About the speakers: Thomas Wiecki (Founder of PyMC Labs) Dr. Thomas Wiecki is an author of PyMC, the leading platform for statistical data science. To help businesses solve some of their trickiest data science problems, he assembled a world-class team of Bayesian modelers and founded PyMC Labs -- the Bayesian consultancy. He did his PhD at Brown University studying cognitive neuroscience. 🔗 Connect with Thomas: 👉 Linkedin: https://www.linkedin.com/in/twiecki/ 👉 Website: https://www.pymc-labs.com/ https://twiecki.io/ 👉 GitHub: https://github.com/twiecki 👉 Twitter: https://twitter.com/twiecki Niall Oulton (Vice President of Sales - PyMC Labs) Niall Oulton has built a reputation as a leading expert in the field of marketing analytics, with a specialization in Bayesian Marketing Mix Modelling. His career, spanning over a decade, has seen him on both sides of the business landscape - agency and client. His rich background provides him with a unique perspective, making him an expert in understanding and navigating the complexities of both worlds. 🔗 Connect with Niall: 👉 LinkedIn: https://www.linkedin.com/in/nialloulton20/ 👉 Twitter: https://twitter.com/niall20 👉 GitHub: https://github.com/nialloulton 👉 Website: https://1749.io/ Tim McWilliams (Principal Data Scientist - PyMC Labs) With over 7 years of experience in the marketing mix modeling and marketing analytics space, Tim specializes in applying Bayesian modeling techniques to solve complex business challenges and uncover actionable insights. Passionate about bridging advanced statistical methods with real-world marketing strategy, he has worked across diverse industries to optimize media investments and measure impact. 🔗 Connect with Tim: 👉 LinkedIn: https://www.linkedin.com/in/tim-mcwilliams-a4b647b3/ 👉 Github: https://github.com/timbo112711 Kemble Fletcher (Director of Product Development - PyMC Labs) Before joining PyMC Labs, Kemble co-founded SweepLift and co-invented its patent-pending in-stream survey and measurement technology. He later led omnichannel attribution and measurement strategy at Google for its top 300 global clients, influencing $2B in ARR. Prior to that, he drove digital analytics and predictive modeling at OMD for brands like Levi’s, Hilton, and eHarmony. He also advises SaaS and start-up leaders on data architecture, attribution, and growth. At PyMC Labs, Kemble helps organizations solve complex challenges through advanced Bayesian modeling. 🔗 Connect with Kemble: 👉 LinkedIn: https://www.linkedin.com/in/kemblefletcher/ Carlos Trujillo (Principal Data Scientist - PyMC Labs) Carlos is a Marketing Scientist passionate about using data and AI to turn marketing strategy into measurable results. He’s worked with teams across Latin America, Europe, and Africa, including roles at Wise, Bolt, and Omnicom Media Group. As a core member of PyMC Labs, he contributes to open-source projects like PyMC-Marketing, blending statistical rigor with practical marketing insight. 🔗 Connect with Carlos: 👉 LinkedIn: https://www.linkedin.com/in/cetagostini/ 👉 Github: https://github.com/cetagostini 💼 About the Host: Evan Wimpey (Director of Analytics at PyMC Labs) Evan helps clients design Bayesian solutions tailored to their goals, ensuring they understand both the how and why of inference. With master’s degrees in Economics and Analytics, he focuses on delivering clear value throughout projects and brings a unique twist with his background in data comedy. 
🔗 Connect with Evan: 👉 Linkedin: https://www.linkedin.com/in/evan-wimpey/ 👉 GitHub: https://github.com/ewimpey 📖 Code of Conduct: Please note that participants are expected to abide by PyMC's Code of Conduct. 🔗 Connecting with PyMC Labs: 🌐 Website: https://www.pymc-labs.com/ 👥 LinkedIn: https://www.linkedin.com/company/pymc-labs/ 🐦 Twitter: https://twitter.com/pymc_labs 🎥 YouTube: https://www.youtube.com/c/PyMCLabs 🤝 Meetup: https://www.meetup.com/pymc-labs-online-meetup/ 🎮 Discord: https://discord.gg/mTc64cAz |
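For readers unfamiliar with the Bayesian Marketing Mix Modelling the speakers work on, the minimal sketch below shows the kind of model PyMC expresses. The data and variable names are hypothetical illustrations, not material from the talk.

```python
# Minimal Bayesian marketing-mix sketch with PyMC (illustrative only;
# the data and variable names are hypothetical, not from the talk).
import numpy as np
import pymc as pm

rng = np.random.default_rng(42)
spend = rng.uniform(0, 10, size=100)                 # weekly ad spend (synthetic)
sales = 2.0 + 0.7 * spend + rng.normal(0, 1, size=100)

with pm.Model() as mmm:
    intercept = pm.Normal("intercept", mu=0, sigma=5)
    beta = pm.HalfNormal("beta", sigma=2)            # spend effect, kept positive
    sigma = pm.HalfNormal("sigma", sigma=2)
    pm.Normal("sales", mu=intercept + beta * spend, sigma=sigma, observed=sales)
    idata = pm.sample()                              # posterior over the spend effect

# The posterior for `beta` quantifies uncertainty in channel effectiveness,
# which is the core question in Bayesian marketing measurement.
```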
The State Of Marketing Measurement In 2025
|
|
L'IA Café Club Besançon - Édition n°1 ☕️
2025-12-09 · 18:00
L’IA Café Club: Artificial Intelligence comes to Besançon! 📍 For the very first time, L’IA Café Club lands in Besançon for an evening devoted to innovation, discovery, and good company. For this edition, we welcome you to a venue as warm as it is authentic: Le Comptoir Général, a hybrid space made for conversation, right in the heart of the city. 🤖 A 100% AI, 100% accessible evening, organized by Adrien Ramelet. Interested in artificial intelligence? Want to understand how it can actually help you day to day? Or just fancy a good evening meeting people over a drink? Then this event is for you! 🧠 On the menu: 🎙️ 3 short talks (12 min max) to open your mind without wearing you out 🍻 Drinks & networking in a relaxed atmosphere 💬 Open discussion of concrete AI use cases, with approachable speakers. Tonight's speakers: - Adrien Ramelet: "AI news" – a tour of the latest advances you shouldn't miss - Ciprian Melian: "Dicte AI: AI for your meetings and appointments" – how to automate note-taking and work more efficiently - Nicolas Drolo: "Creating a podcast with AI" – from idea to publication, a new way to produce content - Nicolas Grangeot: AI-augmented office productivity 🌟 Why come? It's free and open to everyone. You leave with concrete ideas you can try the very next day. You meet cool, passionate (or simply curious) people. All in a warm, welcoming setting, with zero pressure. 📍 Venue & schedule: 🗓️ Date: Tuesday, 9 December 2025 🕖 Time: 7:00 PM – 10:00 PM 📌 Venue: Le Comptoir Général, 14 rue d'Alsace, Besançon |
L'IA Café Club Besançon - Édition n°1 ☕️
|
|
PyLadies Conference Watch Party!
2025-12-06 · 09:00
We are so excited to invite everyone to join our PyLadies Watch Party — a day dedicated to community, learning, and celebration! Whether you’re a longtime PyLady, a Python enthusiast, or simply curious about the community, you’re warmly welcome to attend. We'll gather to watch inspiring talks, connect with each other, and enjoy snacks, drinks, and great conversations throughout the day. REGISTER HERE: https://luma.com/hjdydqeq Agenda
🌍 About the PyLadies Conference The PyLadies Conference (PyLadiesCon) is an exciting, online, and completely free global event focused on empowerment, learning, and diversity within the Python community! 🎉 PyLadiesCon was created to bring together PyLadies chapters and members from all around the world, eliminating travel barriers and making it possible for everyone to participate. By gathering virtually as one global community, we create space for meaningful discussions, collaboration, the birth of new local groups, and fresh ideas to strengthen and grow our shared network. This conference is designed to uplift and highlight diverse voices. All members and supporters of the PyLadies community are encouraged to share their knowledge—no prior speaking experience required! If you have an insight, a story, or expertise to offer, this is the place to do it. 🧡 About our Host A11 We are a company dedicated to building unicorns, combining deep expertise in technology, growth, and operations to help ambitious businesses scale. We are a hands-on partner for high-growth companies, offering support across Data & Analytics, Engineering, Product & Design, Go-to-Market strategy, Marketing, Brand, and Talent. With a track record of scaling tech organizations and accelerating business performance, we see ourselves as a multidisciplinary growth engine designed to take startups and scale-ups to the next level. 💗 Code of Conduct We are dedicated to providing a safe and welcoming experience for everyone who participates in our events. By attending our event, you agree to the PyLadies Code of Conduct: https://www.pyladies.com/CodeOfConduct/ 📸 Media Consent We may be taking photos of this event for social media posts. If you do not want to be photographed, please let the organizers at the event know. We will make sure to respect your privacy. |
PyLadies Conference Watch Party!
|
|
🏳️🌈🦄 Unicorns In Tech Meetup @ Deutsche Bank's Berlin Technology Centre
2025-11-27 · 17:00
➡️ Ready to connect? Sign up here! Unicorns in Tech are coming to Deutsche Bank! 🦄🏳️🌈 Join Europe's largest LGBTIQ+ tech community and allies for our next Get-Together, hosted at the Deutsche Bank Berlin Technology Centre. It's a welcoming space to connect, share experiences, and celebrate 25 years of #dbPride! |
🏳️🌈🦄 Unicorns In Tech Meetup @ Deutsche Bank's Berlin Technology Centre
|
|
Building Resilient Data Pipelines for Embedded Analytics
2025-11-26 · 19:00
Martha Scheffler
– Data Engineer
@ Qarma
Discover how Qarma built a resilient data platform architecture using blue-green deployment strategies powered by Snowflake zero-copy clones and dbt macros. This talk also dives into orchestrating multiple dbt projects with Kestra—from serial execution for ingestion pipelines to parallel processing for scalable analytics delivery. |
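To make the pattern concrete, here is a minimal sketch of a blue-green swap driven by Snowflake zero-copy clones. The database names, connection details, and swap flow are assumptions for illustration, not Qarma's actual implementation.

```python
# Sketch of a blue-green deployment using Snowflake zero-copy clones.
# Database names and credentials are hypothetical; real secrets would
# come from a secrets manager or environment variables.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # hypothetical connection parameters
    user="my_user",
    password="my_password",
)
cur = conn.cursor()

# 1. Zero-copy clone: create a fresh "green" environment from the live
#    "blue" one. Cloning is metadata-only, so it is fast and storage-cheap.
cur.execute("CREATE OR REPLACE DATABASE analytics_green CLONE analytics_blue")

# 2. Build and validate against the clone here (e.g. run dbt against green).

# 3. If validation passes, promote green atomically.
cur.execute("ALTER DATABASE analytics_blue SWAP WITH analytics_green")
```

The swap in step 3 is a single atomic rename in Snowflake, which is what makes the cutover safe: consumers keep querying `analytics_blue` and simply see the validated build after the swap.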
|
|
Building a Scalable Data Mesh with dbt and Snowflake at Velux
2025-11-26 · 18:30
Thomas Schrum Nicolet
– Platform Engineer
@ Velux
Learn how Velux is building a scalable data mesh from the ground up using dbt to design and model data products in Snowflake. This session explores how dbt serves as the single source of truth, and how the team uses custom and enhanced macros to simplify data engineering workflows and accelerate data delivery. |
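As a rough illustration of driving several domain-owned dbt projects from one controller (the talk's custom macros themselves are Jinja-SQL and not shown here), the sketch below uses dbt's programmatic invocation API. The project layout is hypothetical, not Velux's repository structure.

```python
# Sketch: building per-domain dbt projects programmatically
# (dbt-core >= 1.5). In a data-mesh setup, each domain might own one
# dbt project that publishes governed data products to Snowflake.
from dbt.cli.main import dbtRunner, dbtRunnerResult

runner = dbtRunner()
for project_dir in ["domains/sales", "domains/operations"]:  # assumed layout
    res: dbtRunnerResult = runner.invoke(["build", "--project-dir", project_dir])
    if not res.success:
        raise RuntimeError(f"dbt build failed for {project_dir}")
```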
|
|
Julien Dubois
– Head of Java team, Developer Relations
@ Microsoft
About the talk: AI agents are programs that act autonomously; to do so, they must be able to communicate programmatically with an AI and carry out actions. In this session we will look at: → Structured Outputs: how to force an AI to answer according to a JSON schema, so the result can be mapped onto Java objects → Function Calling: how to define and call Java functions from an AI model → MCP: the new protocol that standardizes how LLMs communicate with different data sources and tools. We will use the code, demos, and documentation I produced while implementing these features in LangChain4j with the brand-new Java SDK from OpenAI. |
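The talk's material is in Java with LangChain4j; purely as a language-neutral illustration of the Structured Outputs idea it covers, here is a minimal sketch using the OpenAI Python SDK. The model name and schema are hypothetical, not taken from the session.

```python
# Illustration of Structured Outputs: forcing the model to answer in a
# fixed JSON schema so the result can be mapped onto typed objects.
# (The talk does this in Java via LangChain4j; same idea, different SDK.)
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

schema = {
    "name": "city_info",                       # hypothetical schema
    "schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "population": {"type": "integer"},
        },
        "required": ["city", "population"],
        "additionalProperties": False,
    },
    "strict": True,                            # reject non-conforming output
}

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Give me basic facts about Paris."}],
    response_format={"type": "json_schema", "json_schema": schema},
)

data = json.loads(resp.choices[0].message.content)  # guaranteed to match schema
print(data["city"], data["population"])
```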
Takimeet #8
|