talk-data.com
Activities & events
| Title & Speakers | Event |
|---|---|
|
Can Humans Flourish in the Age of AI?
2026-01-27 · 17:45
David Watson – Lecturer @ King's College London
David Watson discusses 'Can Humans Flourish in the Age of AI?' (approximately 45 minutes). |
#29 AI Series: King's College London - D. Watson
|
|
HPC @ SURF
2026-01-19 · 15:30
Every 2 months, we'll visit a company or institute that will host the event and provide talks. This time, we’re visiting SURF. You may know SURF as the organization behind the Snellius supercomputer, but its role is much broader. SURF is the IT cooperative for education and research in the Netherlands, offering advanced services for computing, storage, networking, and much more. If you want to get a sense of their work, check out the SURF tag on Tweakers or visit their website at surf.nl. The talks are the following:
For those who visited the previous meetup at Stream HPC, you already know what to expect: deep dives into technical subjects, snacks and drinks, and enough time to socialize. Important: would you like to join the Data Center tour at SURF before the event? Please register below. Registration closes on 14.01.26 at 17:00, and the tour starts at 15:30 sharp. [registration link] |
HPC @ SURF
|
|
Jan 15 - Best of NeurIPS (Day 2)
2026-01-15 · 17:00
Welcome to day two of the Best of NeurIPS series, your virtual pass to some of the groundbreaking research, insights, and innovations that defined this year's conference, live-streamed from the authors to you.
Time and location: Jan 15, 2026, 9:00-11:00 AM Pacific, online. Register for the Zoom!

Why Diffusion Models Don't Memorize: The Role of Implicit Dynamical Regularization in Training
Diffusion models have achieved impressive results across many generative tasks, yet the mechanisms that prevent memorization and enable generalization remain unclear. In this talk, I will focus on how training dynamics shape the transition from generalization to memorization. Our experiments and theory reveal two key timescales: an early one at which high-quality generation emerges and a later one at which memorization begins. Notably, the memorization timescale grows linearly with the size of the training set, while the generalization timescale stays constant, creating an increasingly wide window in which models generalize well. These results highlight an implicit dynamical regularization that helps diffusion models avoid memorization even in highly overparameterized regimes.
About the speaker: Raphaël Urfin is a PhD student at École Normale Supérieure – PSL in Paris, supervised by Giulio Biroli (ENS) and Marc Mézard (Bocconi University). His work applies ideas and tools from statistical physics to better understand diffusion models and their generalization properties.

Open-Insect: Benchmarking Open-Set Recognition of Novel Species in Biodiversity Monitoring
Global biodiversity is declining at an unprecedented rate, yet little is known about most species and how their populations are changing; some 90% of Earth's species are estimated to be completely unknown. Machine learning has recently emerged as a promising tool for long-term, large-scale biodiversity monitoring, including algorithms for fine-grained classification of species from images. However, such algorithms are typically not designed to detect examples from categories unseen during training – the problem of open-set recognition (OSR) – limiting their applicability to highly diverse, poorly studied taxa such as insects. To address this gap, we introduce Open-Insect, a large-scale, fine-grained dataset for evaluating unknown-species detection across geographic regions of varying difficulty. We benchmark 38 OSR algorithms across three categories – post-hoc, training-time regularization, and training with auxiliary data – and find that simple post-hoc approaches remain a strong baseline. We also demonstrate how to leverage auxiliary data to improve species discovery in regions with limited data. Our results provide timely insights to guide the development of computer vision methods for biodiversity monitoring and species discovery.
About the speaker: Yuyan Chen is a PhD student in Computer Science at McGill University and Mila – Quebec AI Institute, supervised by Prof. David Rolnick. Her research focuses on machine learning for biodiversity monitoring.
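As a point of reference for the "post-hoc" OSR category mentioned above, here is a minimal sketch of the classic maximum-softmax-probability baseline. This is our own illustration of the general technique, not code from the Open-Insect benchmark:

```python
# Illustrative only: the maximum-softmax-probability (MSP) baseline,
# a simple post-hoc open-set recognition score. Not Open-Insect code.
import torch
import torch.nn.functional as F

def msp_score(logits: torch.Tensor) -> torch.Tensor:
    """Return an 'unknown-ness' score per example: a low maximum softmax
    probability suggests the input may belong to an unseen category."""
    probs = F.softmax(logits, dim=-1)
    return 1.0 - probs.max(dim=-1).values

# Usage sketch: flag examples whose score exceeds a validation-tuned threshold.
logits = torch.randn(4, 1000)        # fake logits from a closed-set classifier
unknown = msp_score(logits) > 0.5    # 0.5 is a placeholder threshold
```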
GuideFlow3D: Optimization-Guided Rectified Flow for Appearance Transfer
Transferring appearance to 3D assets from different representations of the appearance object – such as images or text – has garnered interest due to its wide range of applications in industries like gaming, augmented reality, and digital content creation. However, state-of-the-art methods still fail when the geometry of the input and appearance objects differs significantly. A straightforward approach is to directly apply a 3D generative model, but we show that this ultimately fails to produce appealing results. Instead, we propose a principled approach inspired by universal guidance. Given a pretrained rectified flow model conditioned on image or text, our training-free method interacts with the sampling process by periodically adding guidance, modeled as a differentiable loss function; we experiment with two types of guidance, including part-aware losses for appearance and self-similarity. Our experiments show that our approach successfully transfers texture and geometric details to the input 3D asset, outperforming baselines both qualitatively and quantitatively. We also show that traditional metrics are unsuitable for evaluating this task because, in the absence of ground-truth data, they cannot focus on local details or compare dissimilar inputs. We therefore evaluate appearance-transfer quality with a GPT-based system that objectively ranks outputs, ensuring robust and human-like assessment, as further confirmed by our user study. Beyond the showcased scenarios, our method is general and could be extended to other types of diffusion models and guidance functions. (A schematic sketch of such a guided sampling loop appears at the end of this listing.)
About the speaker: Sayan Deb Sarkar is a 2nd-year PhD student at Stanford University in the Gradient Spaces Group, advised by Prof. Iro Armeni and part of the Stanford Vision Lab (SVL). His research interests are multimodal 3D scene understanding and interactive editing. This past summer he interned with the Microsoft Spatial AI Lab, hosted by Prof. Marc Pollefeys, working on efficient video understanding in spatial context. Before starting his PhD, he was a CS master's student at ETH Zürich in the Computer Vision and Geometry Group (CVG), working on aligning real-world 3D environments from multi-modal data. He has previously been a Research Intern at Qualcomm XR Labs, a Computer Vision Engineer at Mercedes-Benz R&D, and a Research Engineer at ICG, TU Graz. Website: https://sayands.github.io/

HouseLayout3D: A Benchmark and Baseline Method for 3D Layout Estimation in the Wild
Current 3D layout estimation models are primarily trained on synthetic datasets containing simple single-room or single-floor environments. As a consequence, they cannot natively handle large multi-floor buildings and require scenes to be split into individual floors before processing, which removes the global spatial context that is essential for reasoning about structures such as staircases connecting multiple levels. In this work, we introduce HouseLayout3D, a real-world benchmark designed to support progress toward full building-scale layout estimation, including multiple floors and architecturally intricate spaces. We also present MultiFloor3D, a simple training-free baseline that leverages recent scene-understanding methods and already outperforms existing 3D layout estimation models on both our benchmark and prior datasets, highlighting the need for further research in this direction.
About the speaker: Valentin Bieri is a Machine Learning Engineer and Researcher specializing in the intersection of 3D computer vision and natural language processing. Building on his applied research in SLAM and vision-language models at ETH Zurich, he now develops AI agents for manufacturing at EthonAI.
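To make the GuideFlow3D idea of "periodically adding guidance" during sampling concrete, here is a minimal, hypothetical sketch of a guidance-augmented rectified-flow sampling loop. `velocity_model` and `guidance_loss` are stand-ins of our own naming, not the authors' implementation:

```python
# Illustrative only: guidance-augmented Euler sampling for a rectified flow.
# `velocity_model` and `guidance_loss` are hypothetical stand-ins.
import torch

def guided_sample(velocity_model, guidance_loss, x, steps=50, every=5, lr=0.1):
    """Integrate a rectified flow from noise (t=0) toward data (t=1),
    periodically nudging the sample down the gradient of a differentiable
    guidance loss (e.g. a part-aware appearance or self-similarity term)."""
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        x = x + dt * velocity_model(x, t)      # plain Euler step of the flow
        if i % every == 0:                     # periodic guidance step
            x = x.detach().requires_grad_(True)
            loss = guidance_loss(x)            # must return a scalar
            (grad,) = torch.autograd.grad(loss, x)
            x = (x - lr * grad).detach()       # descend the guidance loss
    return x
```

This shows only the generic universal-guidance pattern; the specific losses used in GuideFlow3D are the part-aware appearance and self-similarity terms named in the abstract. |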
Jan 15 - Best of NeurIPS (Day 2)
|
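As a worked restatement of the central claim in the diffusion talk above (our notation, paraphrasing the abstract; $n$ is the training-set size):

```latex
% tau_gen: time at which high-quality generation emerges (constant in n)
% tau_mem: time at which memorization begins (linear in n)
\[
  \tau_{\mathrm{gen}} = O(1), \qquad \tau_{\mathrm{mem}} \propto n,
\]
% so the window (tau_gen, tau_mem) in which the model generalizes without
% memorizing widens linearly as the training set grows.
```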
|
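A schematic version of the two-timescale claim in the diffusion talk above, in our own notation (the abstract fixes no symbols): write n for the training-set size, \tau_{\mathrm{gen}} for the time at which high-quality generation emerges, and \tau_{\mathrm{mem}} for the onset of memorization. The reported scaling is

\tau_{\mathrm{gen}}(n) = \Theta(1), \qquad \tau_{\mathrm{mem}}(n) = \Theta(n),

so the gap \tau_{\mathrm{mem}}(n) - \tau_{\mathrm{gen}}(n) grows without bound, and stopping training anywhere in the window [\tau_{\mathrm{gen}}, \tau_{\mathrm{mem}}] yields a model that generalizes without memorizing.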
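For readers unfamiliar with the "post-hoc" category that Open-Insect finds to be a strong baseline: below is a minimal sketch of one classic post-hoc score, maximum softmax probability (MSP) thresholding, applied to an arbitrary trained classifier. This illustrates the category only; it is not code from the paper, and the model, threshold, and shapes are placeholders.

import numpy as np

def msp_scores(logits: np.ndarray) -> np.ndarray:
    """Maximum softmax probability per sample; high means 'looks like a known class'."""
    z = logits - logits.max(axis=1, keepdims=True)  # stabilize the softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

def predict_open_set(logits: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Known-class index for confident samples, -1 for suspected novel species."""
    preds = logits.argmax(axis=1)
    preds[msp_scores(logits) < threshold] = -1  # flag as unknown
    return preds

# Toy usage: 3 samples, 4 known classes.
logits = np.array([[4.0, 0.1, 0.2, 0.1],   # confident -> class 0
                   [1.0, 0.9, 1.1, 1.0],   # diffuse   -> flagged novel
                   [0.2, 3.5, 0.1, 0.3]])  # confident -> class 1
print(predict_open_set(logits))  # [ 0 -1  1]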
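The GuideFlow3D abstract above describes steering a pretrained rectified-flow sampler by periodically applying a differentiable guidance loss. A toy, framework-agnostic sketch of that pattern follows; this is not the authors' code, and velocity_model, guidance_loss, and every hyperparameter are placeholders.

import torch

def guided_rectified_flow_sample(velocity_model, guidance_loss, x,
                                 n_steps=50, guide_every=5, guide_lr=0.1):
    """Euler sampling of a rectified flow with periodic gradient guidance.

    velocity_model(x, t): pretrained velocity field (assumed given).
    guidance_loss(x): differentiable scalar, e.g. an appearance or
    self-similarity loss; lower is better (assumed given).
    """
    dt = 1.0 / n_steps
    for step in range(n_steps):
        t = torch.full((x.shape[0],), step * dt, device=x.device)
        with torch.no_grad():
            x = x + dt * velocity_model(x, t)   # ordinary flow step
        if step % guide_every == 0:             # periodic guidance step
            x = x.detach().requires_grad_(True)
            (grad,) = torch.autograd.grad(guidance_loss(x), x)
            x = (x - guide_lr * grad).detach()  # nudge the sample toward lower loss
    return x

Because only the sampling loop changes and the pretrained model is untouched, the approach stays training-free, as the abstract emphasizes.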
Applied AI: Navigating Legacy Systems and Building Agentic Workflows
2026-01-15 · 16:45
Hi everyone! Many of you asked for more practical, real-world AI use-cases, and we listened. For our first meetup of 2026, we're bringing you, together with AI Native Netherlands, two deeply technical stories from the front lines of applied AI. We'll hear how the ANWB navigates the challenges of imperfect data in a legacy organization, and then dive into a practical guide for building production-grade AI agentic workflows with Elastic. A huge thank you to our friends at Elastic for hosting us at their Amsterdam office. Food and drinks will be provided! We’ll cover:
Speakers (Talk #1): Yke Rusticus & David Brummer (ANWB) Yke is a data engineer at ANWB with a background in astronomy and artificial intelligence. In the industry, he learned that AI models and algorithms often do not get past the experimentation phase, leading him to specialise in MLOps to bridge the gap between experimentation and production. As a professional in this field, Yke has developed ML platforms and use cases across different cloud providers, and is passionate about sharing his knowledge through tutorials and training sessions. David is a self-proclaimed “not your typical Data Scientist” who loves analogue photography, vegan food, dogs, and holds an unofficial PhD in thrifting and sourcing second-hand pearls. With a background in growth hacking and experience in the digital marketing trenches of a startup, a scale-up, and a digital agency, he now brings together lean startup thinking, marketing know-how, and sales pitches, blending it all with a passion for creativity and tech at the ANWB. As a bridge between business and data, David focuses on building AI solutions that don’t just work, but actually get used. Talk: How AI is helping you back on the road We learn at school what AI can do when the data is perfect. We learn at conferences what AI can do when the environment is perfect. In this talk, you'll learn what AI can do when neither is perfect. This story is about the process of overcoming these challenges in an organisation that has been around since the invention of the bike. We'll balance the technical aspect of these solutions with the human aspect throughout the talk. Because in the end, it's not actually AI helping you back on the road, it's people. Speaker (Talk #2): Hans Heerooms (Elastic) Hans Heerooms is a Senior Solutions Architect at Elastic. He has worked in various roles, but always with one objective: helping organisations to get the most out of their data with the least amount of effort. His current role at Elastic is all about supporting Elastic’s customers to help them evolve from data-driven decisions to AI-guided workflows. Talk: Building Production-Grade AI Agentic Workflows with Elastic This talk explains and demonstrates how Elastic Agent Builder can help you build and implement agentic workflows. It addresses the complexity of traditional development by integrating all necessary components—LLM orchestration, vector database, tracing, and security—directly into the Elasticsearch Search AI Platform. This talk will show you how to build custom agents, declare and assign tools, and start conversations with your data. (A generic, framework-agnostic sketch of this tool-calling pattern follows this listing.) Agenda: 17:45 — Arrival, food & drinks 18:30 — Talk #1 \| Yke & David (ANWB) 19:15 — Short break 19:30 — Talk #2 \| Hans Heerooms (Elastic) 20:15 — Open conversation, networking & more drinks 21:00 — Wrapping up Please note that the main door will close at 18:00. You will still be able to enter our office, but we might ask you to wait a little bit while we come down to open the door for you. What to bring: Just curiosity and questions. If you're working on MLOps, applied AI, or building agentic workflows, we’d love to hear your thoughts. Who this is for: Data scientists, AI/ML engineers, data engineers, MLOps specialists, SREs, architects, and engineering leaders focused on building and using real-world AI solutions. Where to find us: Elastic's Amsterdam office, Keizersgracht 281, 1016 ED Amsterdam |
Applied AI: Navigating Legacy Systems and Building Agentic Workflows
|
|
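As a companion to the Elastic talk above: the sketch below shows the generic agentic tool-calling loop that platforms like Agent Builder package up (declare tools, assign them to an agent, converse with your data). This is deliberately framework-agnostic and is NOT Elastic Agent Builder's actual API; all names and the CALL protocol are invented for illustration.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]

def search_orders(query: str) -> str:
    # Placeholder: a real tool would query a search index or database.
    return f"3 orders matched '{query}'"

TOOLS = {t.name: t for t in [
    Tool("search_orders", "Full-text search over order documents", search_orders),
]}

def agent_turn(user_msg: str, llm: Callable[[str], str]) -> str:
    """One agent step: the LLM either answers directly or calls a declared tool."""
    prompt = (f"Tools: {[(t.name, t.description) for t in TOOLS.values()]}\n"
              f"User: {user_msg}\n"
              "Reply 'CALL <tool> <arg>' to use a tool, otherwise answer directly.")
    reply = llm(prompt)
    if reply.startswith("CALL "):
        _, tool_name, arg = reply.split(" ", 2)  # toy protocol: tool name, then argument
        observation = TOOLS[tool_name].run(arg)
        return llm(f"Tool {tool_name} returned: {observation}\nAnswer the user.")
    return reply

A production platform layers exactly the pieces the talk lists on top of this loop: LLM orchestration, a vector database for retrieval, tracing, and security.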
SQL Engineering Connection Series - SQL 2025 features & AI-readiness
2025-12-18 · 17:00
|
SQL Engineering Connection Series - SQL 2025 features & AI-readiness
|
|
SQL Server 2025 - Enterprise, Engine, and AI-Ready Features
2025-12-18 · 17:00
SQL Community Engineering Connection Series: Episode 3. All about SQL Server 2025's enterprise-ready new features, plus an architecture overview of its AI-ready features. Speakers: Davide Mauri and Raj Pochiraju This series is all about connection: bringing Microsoft's product experts and the SQL community together to share knowledge, answer questions, and help you stay up to date on the latest in the SQL and data ecosystem. (A hedged sketch of querying the previewed vector features follows this listing.) Join Link: https://teams.microsoft.com/meet/2914607474379?p=x3u2ymC3IvLoLi18ZO Meeting ID: 291 460 747 437 9 Passcode: NB9W7RP3 |
SQL Server 2025 - Enterprise, Engine, and AI-Ready Features
|
|
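One of the advertised AI-ready directions for SQL Server 2025 is native vector support. The sketch below shows what a similarity query from Python might look like under the publicly previewed VECTOR type and VECTOR_DISTANCE function; treat the exact syntax as an assumption to verify against the session and current documentation, and note that the DSN, table, and column names are hypothetical.

import pyodbc

conn = pyodbc.connect("DSN=sql2025;UID=demo;PWD=demo")  # hypothetical DSN
query_embedding = "[0.12, -0.03, 0.88]"  # JSON-style vector literal (toy dimension)

rows = conn.execute(
    """
    SELECT TOP (5) doc_id, title,
           VECTOR_DISTANCE('cosine', embedding, CAST(? AS VECTOR(3))) AS dist
    FROM dbo.documents  -- hypothetical table with a VECTOR(3) column 'embedding'
    ORDER BY dist ASC   -- smaller cosine distance = more similar
    """,
    query_embedding,
).fetchall()

for doc_id, title, dist in rows:
    print(doc_id, title, round(dist, 4))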
This Thursday: Join Us for the SQL Community Engineering Connection!
2025-12-18 · 17:00
Upcoming Session Details Topic: Enterprise Ready Databases - SQL Server 2025 Date: December 18th, 2025 Time: 9:00 AM – 10:00 AM PT Speakers: Raj Pochiraju (Principal PM, Azure Data / SQL) & Davide Mauri (Principal PM, Azure Data / SQL) Description: Discover how SQL Server 2025 delivers enterprise-ready performance with built-in AI capabilities, advanced security, and developer-focused enhancements to simplify and accelerate modern data solutions. Location: ONLINE |
This Thursday: Join Us for the SQL Community Engineering Connection!
|
|
Enterprise Ready Databases - SQL Server 2025
2025-12-18 · 17:00
Discover how SQL Server 2025 delivers enterprise-ready performance with built-in AI capabilities, advanced security, and developer-focused enhancements to simplify and accelerate modern data solutions. Topic: "Enterprise Ready Databases - SQL Server 2025" Date: December 18, 2025 Time: 9:00 AM – 10:00 AM PT Speakers:
This event is part of the SQL Community Engineering Connection Series that we will be publishing in this community. |
Enterprise Ready Databases - SQL Server 2025
|
|
SQL Community: Enterprise Ready Databases - SQL Server 2025
2025-12-18 · 17:00
Discover how SQL Server 2025 delivers enterprise-ready performance with built-in AI capabilities, advanced security, and developer-focused enhancements to simplify and accelerate modern data solutions. Speakers: Raj Pochiraju (Principal PM, Azure Data / SQL) Davide Mauri (Principal PM, Azure Data / SQL) |
SQL Community: Enterprise Ready Databases - SQL Server 2025
|
|
CNM x TMC Christmas Party
2025-12-10 · 17:00
We've teamed up with The Media Collective and sponsors to hold an exclusive Media Tech Christmas Party in West London, right next to a Zone 2 tube station. RSVPs will NOT be opening here, as this one is invitation-only. If you've previously attended Cloud Native Media or are a media/tech professional, please contact Paul Markham or David O'Dwyer for your personal invitation. |
CNM x TMC Christmas Party
|
|
PyData Global 2025
2025-12-09 · 10:30
PyData Global 2025 is our biggest online event of the year, and we’d love for you to join us from December 9–11! 🔗 Browse the full program: https://pydata.org/global2025/schedule This year’s conference brings together thousands of data scientists, analysts, engineers, and open-source enthusiasts for three days of learning and connection across 80+ talks, tutorials, and keynotes. Wherever you are in the world, PyData Global is your opportunity to dive deep into the latest tools, techniques, and ideas shaping data science today. 🌟 What to Expect This Year 🎥 Live From PyData Boston We’re excited to host a dedicated livestream track featuring select sessions broadcast directly from PyData Boston at the Microsoft NERD Center. 🌟 PyData Global Keynotes
🌟 Did you know? ⏪ Automatic Rewatch Is Included Busy during the conference days? No worries. Your registration gives you instant access to replay any session as soon as it ends—so you won’t miss a single talk, tutorial, or keynote. Register your ticket today! https://pydata.org/global2025/tickets |
PyData Global 2025
|