talk-data.com

Activities & events

Jan 15 - Best of NeurIPS (Day 2)

Welcome to day two of the Best of NeurIPS series, your virtual pass to some of the groundbreaking research, insights, and innovations that defined this year’s conference. Live streaming from the authors to you.

Time and Location

Jan 15, 2026, 9:00–11:00 AM Pacific, online. Register for the Zoom!

Why Diffusion Models Don't Memorize: The Role of Implicit Dynamical Regularization in Training

Diffusion models have achieved impressive results across many generative tasks, yet the mechanisms that prevent memorization and enable generalization remain unclear. In this talk, I will focus on how training dynamics shape the transition from generalization to memorization. Our experiments and theory reveal two key timescales: an early time when high-quality generation emerges and a later one when memorization begins. Notably, the memorization timescale grows linearly with the size of the training set, while the generalization timescale stays constant, creating an increasingly wide window where models generalize well. These results highlight an implicit dynamical regularization that helps diffusion models avoid memorization even in highly overparameterized regimes.
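
In symbols (our shorthand for the claim above, not notation taken from the paper), the picture is:

```latex
% Illustrative shorthand for the two training timescales.
% n = training-set size, t = training time.
\[
  \tau_{\mathrm{gen}}(n) = \Theta(1), \qquad
  \tau_{\mathrm{mem}}(n) = \Theta(n)
\]
% Any early-stopping time t with
% \tau_{\mathrm{gen}} \lesssim t \lesssim \tau_{\mathrm{mem}}
% yields a model that generalizes without memorizing, and this
% window widens linearly as n grows.
```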

About the Speaker

Raphaël Urfin is a PhD student at École Normale Supérieure – PSL in Paris, supervised by Giulio Biroli (ENS) and Marc Mézard (Bocconi University). His work focuses on applying ideas and tools of statistical physics to better understand diffusion models and their generalization properties.

Open-Insect: Benchmarking Open-Set Recognition of Novel Species in Biodiversity Monitoring

Global biodiversity is declining at an unprecedented rate, yet little is known about most species and how their populations are changing. Indeed, an estimated 90% of Earth’s species are completely unknown. Machine learning has recently emerged as a promising tool to facilitate long-term, large-scale biodiversity monitoring, including algorithms for fine-grained classification of species from images. However, such algorithms are typically not designed to detect examples from categories unseen during training – the problem of open-set recognition (OSR) – limiting their applicability for highly diverse, poorly studied taxa such as insects. To address this gap, we introduce Open-Insect, a large-scale, fine-grained dataset to evaluate unknown species detection across different geographic regions with varying difficulty. We benchmark 38 OSR algorithms across three categories: post-hoc, training-time regularization, and training with auxiliary data, finding that simple post-hoc approaches remain a strong baseline. We also demonstrate how to leverage auxiliary data to improve species discovery in regions with limited data. Our results provide timely insights to guide the development of computer vision methods for biodiversity monitoring and species discovery.
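
To make the "post-hoc" category concrete, one classic example (our illustration; one of many scores such a benchmark can include) is maximum-softmax-probability thresholding, which flags an image as a possible unknown species whenever an already-trained closed-set classifier is unconfident. A minimal PyTorch sketch, with the classifier and threshold as placeholders:

```python
# Minimal post-hoc open-set recognition sketch: maximum softmax
# probability (MSP) thresholding on a frozen, already-trained
# closed-set classifier. Illustrative only; the classifier and the
# threshold below are placeholders, not values from the paper.
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_novelty_scores(classifier: torch.nn.Module,
                       images: torch.Tensor) -> torch.Tensor:
    """Return a novelty score in [0, 1] per image (higher = more novel)."""
    logits = classifier(images)                  # (B, num_known_species)
    max_prob = F.softmax(logits, dim=-1).amax(dim=-1)
    return 1.0 - max_prob                        # low confidence => likely unseen

# Toy usage with a placeholder classifier over 100 "known" species:
classifier = torch.nn.Sequential(torch.nn.Flatten(),
                                 torch.nn.Linear(3 * 64 * 64, 100))
batch = torch.randn(8, 3, 64, 64)
scores = msp_novelty_scores(classifier, batch)
is_novel = scores > 0.5                          # threshold tuned on validation data
print(is_novel)
```

Scores like this require no retraining, which is what makes post-hoc baselines so cheap to layer on top of an existing species classifier.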

About the Speaker

Yuyan Chen is a PhD student in Computer Science at McGill University and Mila – Quebec AI Institute, supervised by Prof. David Rolnick. Their research focuses on machine learning for biodiversity monitoring.

GuideFlow3D: Optimization-Guided Rectified Flow For Appearance Transfer

Transferring appearance to 3D assets using different representations of the appearance object, such as images or text, has garnered interest due to its wide range of applications in industries like gaming, augmented reality, and digital content creation. However, state-of-the-art methods still fail when the geometry of the input and appearance objects differs significantly. A straightforward approach is to directly apply a 3D generative model, but we show that this ultimately fails to produce appealing results.

Instead, we propose a principled approach inspired by universal guidance. Given a pretrained rectified flow model conditioned on image or text, our training-free method interacts with the sampling process by periodically adding guidance. This guidance can be modeled as a differentiable loss function, and we experiment with two different types of guidance including part-aware losses for appearance and self-similarity. Our experiments show that our approach successfully transfers texture and geometric details to the input 3D asset, outperforming baselines both qualitatively and quantitatively.
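
As a rough sketch of how such sampling-time guidance can work in general (a minimal illustration under our own assumptions, not the authors' implementation): integrate the learned velocity field with Euler steps and, every few steps, take a gradient step on a differentiable guidance loss.

```python
# Sketch of optimization-guided rectified flow sampling: plain Euler
# integration of the velocity field, with a periodic gradient step on
# a differentiable guidance loss. The model, loss, and hyperparameters
# are placeholders, not the paper's implementation.
import torch

def sample_with_guidance(velocity_model, guidance_loss, x,
                         n_steps=50, guide_every=5, guide_lr=0.1):
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = torch.full((x.shape[0],), i * dt)
        with torch.no_grad():
            x = x + dt * velocity_model(x, t)     # Euler step along the flow
        if (i + 1) % guide_every == 0:
            x = x.detach().requires_grad_(True)
            loss = guidance_loss(x)               # e.g. a part-aware appearance
            grad, = torch.autograd.grad(loss, x)  # or self-similarity loss
            x = (x - guide_lr * grad).detach()    # nudge sample toward low loss
    return x

# Toy usage: a dummy velocity field and a loss pulling samples toward 0.
velocity_model = lambda x, t: -x
guidance_loss = lambda x: (x ** 2).mean()
x0 = torch.randn(4, 3, 16, 16)                    # noise init (stand-in for a 3D asset)
out = sample_with_guidance(velocity_model, guidance_loss, x0)
print(out.abs().mean())
```

Because the guidance only touches the sampling loop, any differentiable loss can be swapped in without retraining the flow model, which is what makes the approach training-free.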

We also show that traditional metrics are not suitable for evaluating this task, due to their inability to focus on local details and to compare dissimilar inputs in the absence of ground-truth data. We thus evaluate appearance transfer quality with a GPT-based system that objectively ranks outputs, ensuring robust and human-like assessment, as further confirmed by our user study. Beyond the showcased scenarios, our method is general and could be extended to different types of diffusion models and guidance functions.

About the Speaker

Sayan Deb Sarkar is a second-year PhD student at Stanford University in the Gradient Spaces Group, advised by Prof. Iro Armeni and part of the Stanford Vision Lab (SVL). His research interests are multimodal 3D scene understanding and interactive editing. This past summer, he interned with the Microsoft Spatial AI Lab, hosted by Prof. Marc Pollefeys, working on efficient video understanding in spatial context. Before starting his PhD, he was a CS master's student at ETH Zürich in the Computer Vision and Geometry Group (CVG), working on aligning real-world 3D environments from multimodal data. In the past, he has been a Research Intern at Qualcomm XR Labs, a Computer Vision Engineer at Mercedes-Benz R&D, and a Research Engineer at ICG, TU Graz. Website: https://sayands.github.io/

HouseLayout3D: A Benchmark and Baseline Method for 3D Layout Estimation in the Wild

Current 3D layout estimation models are primarily trained on synthetic datasets containing simple single-room or single-floor environments. As a consequence, they cannot natively handle large multi-floor buildings and require scenes to be split into individual floors before processing, which removes global spatial context that is essential for reasoning about structures such as staircases that connect multiple levels. In this work, we introduce HouseLayout3D, a real-world benchmark designed to support progress toward full building-scale layout estimation, including multiple floors and architecturally intricate spaces. We also present MultiFloor3D, a simple training-free baseline that leverages recent scene understanding methods and already outperforms existing 3D layout estimation models on both our benchmark and prior datasets, highlighting the need for further research in this direction.

About the Speaker

Valentin Bieri is a Machine Learning Engineer and Researcher specializing in the intersection of 3D Computer Vision and Natural Language Processing. Building on his applied research in SLAM and Vision-Language Models at ETH Zurich, he now develops AI agents for manufacturing at EthonAI.

Dec 1 - CryptoMondays Wall Street — Year in Review

The first Monday of every month in NYC is CryptoMondays Wall Street. On December 1st, we’re excited to host special guests for CryptoMondays Wall Street — Year in Review, presented by Vault12, Power Women, NyXXt, Valmar Capital & Solidus Labs.

Looking Forward, Looking Back

📅 December 1 | 6:00–8:30 PM
📍 Sojourner Gallery, 178 Bleecker St, 2nd Floor, NYC

Current Exhibition: In Glistening Water, artist Alena Ahrens invites viewers into a contemplative encounter with color, perception, and the quiet movement of time. Her paintings unfold like slow breaths: each layer of pigment, gradient, and gesture forms a rhythm that is less seen than felt. Drawing from both Color Field painting and the ritualistic processes of material transformation, Ahrens turns color into a vessel of reflection, a surface where memory and awareness meet.

We’re closing out the year with a special edition session: Looking Forward, Looking Back — the Web3 market recap + trend forecast that every founder, investor, and operator needs before the new year.

Special Guests

Wasim Ahmad: Co-Founder of Vault12

Chen Arad: Co-Founder of Solidus Labs

Hosted By

Joe Cox & Sarah Pustilnik, CryptoMondays Wall Street

In Collaboration With Sojourner Gallery, NYC

We will follow our regular format:

6:00–7:00 PM: Hang, mingle, look at art, and network
7:00–7:45 PM: Insights from CMWS guests
7:45–8:30 PM: Network and chill

RSVP via the lu.ma link for access.

Community Partner: Vault12. “If you don’t worry about crypto inheritance, nobody else will.” Vault12 safeguards your legacy, protecting and inheriting all your crypto assets, from Bitcoin to NFTs and digital collectibles. 🎁 Exclusive Gift: One year FREE subscription for Crypto Inheritance! Use code CMNYC25 at checkout. Learn more at Vault12.com

Special Guest: Chen Arad, Co-Founder of Solidus Labs. Chen is passionate about turning complex issues into compelling stories that drive action and value. In his words: “At Solidus, our story is about bridging traditional finance and the new digital economy by combating manipulation and helping digital asset businesses operate with less risk, increased transparency, and stronger credibility. We’re proud to play a role in the even bigger story of blockchain and digital assets as they transform finance, make capital markets more accessible, inject liquidity into formerly illiquid assets, and introduce new levels of efficiency.”

Featured Guest: Wasim Ahmad, Co-Founder of Vault12. Wasim is a serial entrepreneur with over 20 years of executive-level startup experience scaling enterprise B2B2C companies, now running a crypto-security company after a successful private and public offering. He has previously raised over $100M in funding at eight startups with five successful exits, and is always looking at innovations such as AI, blockchain, crypto, and ZK. He currently spearheads Product, Marketing, and Business Development at Vault12, advises funds, and is open to board, co-founder, and CEO opportunities.

CryptoMondays Wall Street ​Host: Joe Cox ​​Head of Business Development at Valmar Capital Joe Cox leads business development at Valmar Capital, a digital assets multi-manager, multi-strategy platform combining emerging crypto talent with institutional-grade infrastructure and cutting-edge technology. As the host of CryptoMondays Wall Street, Joe brings thought leadership, community, and strategy to every conversation.

Special Guest Host: Sarah Pustilnik. Sarah is a seasoned venture capitalist and entrepreneur with an unyielding passion for media, tech, energy, innovation, and new ventures. She has experience on both the business and creative sides of the industry, with a strong history alongside some of the biggest players in media, Wall Street, tech, private equity, family offices, defense, innovation, healthcare, and energy. She is a principal in a single-family office and leads its financial and foundational efforts. A disrupter, traditionalist, and thought leader, she prides herself on relationships with co-workers, colleagues, and those in the community.

About the Series: CryptoMondays Wall Street bridges institutional finance and Web3. Since 2018, CryptoMondays has grown to 150,000+ members across 68+ cities, hosting global IRL meetups that connect investors, founders, and innovators. Thank you to our teams at NyXXt.co

Curating connection, culture & influence for the next generation of family office leaders.

At the crossroads of media, healthcare, energy, AI, fintech, blockchain, technology, and emerging markets, we bring together capital, community, and culture to shape the next era of innovation.

THC Lawyers: Focused, Experienced, and Trusted Legal Counsel. At THC Lawyers, we are dedicated to guiding our clients toward achieving their objectives efficiently and effectively, leveraging our extensive expertise in commercial and financial litigation and capital markets, along with intellectual property matters. Our clients rely on us for cost-efficient resolutions to their legal challenges, trusting our proven track record of delivering strategic solutions.

Remsen Partners connects visionary founders and operators with investors who want to build the extraordinary, focusing on luxury.

Another exclusive production powered by NFT VIP, connecting communities, brands, and ideas across every CryptoMondays stage and beyond.

MedStartr is all about driving innovation further, faster in healthcare. The goal of the myriad conversations we enable is to not only talk about innovation but to do things, to get involved, to create the future of medicine together. To this end, we created MedStartr.com, a crowdfunding platform designed for healthcare startups. It isn’t just for funding but for bringing patients, care team members, partners, pilots, and investors to the table for early-stage companies, non-profits, and people with innovative ideas.

​For partnership or brand integration inquiries, contact [email protected]

​By RSVPing, you agree to be added to our future correspondence and confirm your email is active and in good standing.

CryptoMondays Wall St — Year in Review | Vault12, PowerWomen, NyXXt, Valmar, SolidusLabs

Welcome to the Best of ICCV series, your virtual pass to some of the groundbreaking research, insights, and innovations that defined this year’s conference. Live streaming from the authors to you.

Date, Time and Location

Nov 21, 2025 9 AM Pacific Online. Register for the Zoom!

GECO: Geometrically Consistent Embedding with Lightspeed Inference

Recent advances in feature learning have shown that self-supervised vision foundation models can capture semantic correspondences but often lack awareness of underlying 3D geometry. GECO addresses this gap by producing geometrically coherent features that semantically distinguish parts based on geometry (e.g., left/right eyes, front/back legs). We propose a training framework based on optimal transport, enabling supervision beyond keypoints, even under occlusions and disocclusions. With a lightweight architecture, GECO runs at 30 fps, 98.2% faster than prior methods, while achieving state-of-the-art performance on PFPascal, APK, and CUB, improving PCK by 6.0%, 6.2%, and 4.1%, respectively. Finally, we show that PCK alone is insufficient to capture geometric quality and introduce new metrics and insights for more geometry-aware feature learning.
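
For context, PCK (Percentage of Correct Keypoints) counts a predicted keypoint as correct when it lies within a fraction alpha of the object size of its ground-truth location; a minimal sketch, with array shapes and bounding-box normalization assumed for illustration:

```python
# Minimal PCK sketch; shapes and bounding-box normalization are illustrative
# assumptions, not GECO's exact evaluation code.
import numpy as np

def pck(pred, gt, bbox_size, alpha=0.1):
    """pred, gt: (N, K, 2) keypoints; bbox_size: (N,) max bbox side per image.

    Returns the fraction of keypoints within alpha * bbox_size of ground truth.
    """
    dists = np.linalg.norm(pred - gt, axis=-1)   # (N, K) pixel distances
    thresh = alpha * bbox_size[:, None]          # (N, 1) per-image threshold
    return float((dists <= thresh).mean())

# Toy usage: perfect predictions give PCK = 1.0
kps = np.random.rand(4, 8, 2) * 100
print(pck(kps, kps, np.full(4, 100.0)))
```

Because the threshold scales with object size rather than geometric consistency, two predictions can score identically while differing badly in part-level geometry, which is the gap the talk's new metrics target.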

About the Speaker

Regine Hartwig is a PhD student at the Technical University of Munich.

Proactive Comorbidity Prediction in HIV: Towards Fair and Trustworthy Care

HIV is a chronic infection that weakens the immune system and exposes patients to a high burden of comorbidities. While antiretroviral therapy has improved life expectancy, comorbidities remain a major challenge, and traditional screening protocols often fail to capture subtle risk patterns early enough. To address this, we develop a novel method trained on lab tests and demographic data from 2,200 patients in SE London. The method integrates feature interaction modeling, attention mechanisms, residual fusion and label-specific attention heads, outperforming TabNet, MLPs and classical machine learning models.

Our experiments show that incorporating demographic information improves predictive performance, though demographic recoverability analyses reveal that age and gender can still be inferred from lab data alone, raising fairness concerns. Finally, robustness checks confirm stable feature importance across cross-validation folds, reinforcing the trustworthiness of our approach.
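
As a rough illustration of one ingredient, a label-specific attention head learns one query per comorbidity label and attends over embedded patient features; a minimal PyTorch sketch (dimensions, head count, and names are assumptions, not the paper's architecture):

```python
# Hypothetical sketch of a label-specific attention head for multi-label
# comorbidity prediction; dimensions and head count are illustrative.
import torch
import torch.nn as nn

class LabelAttentionHead(nn.Module):
    def __init__(self, n_labels: int, d_model: int, n_heads: int = 4):
        super().__init__()
        # One learned query vector per comorbidity label.
        self.queries = nn.Parameter(torch.randn(n_labels, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out = nn.Linear(d_model, 1)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, n_features, d_model) embedded lab/demographic features
        q = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        ctx, _ = self.attn(q, tokens, tokens)   # (batch, n_labels, d_model)
        return self.out(ctx).squeeze(-1)        # (batch, n_labels) logits

logits = LabelAttentionHead(n_labels=6, d_model=32)(torch.randn(2, 10, 32))
print(logits.shape)  # torch.Size([2, 6])
```

Giving each label its own query lets different comorbidities attend to different lab markers instead of sharing one pooled representation.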

About the Speaker

Dimitrios Kollias is an Associate Professor in Multimodal AI at Queen Mary University of London, specializing in machine/deep learning, trustworthy AI, computer vision, medical imaging & healthcare, behavior analysis, and HMI. He has published 80+ papers (h-index 39; 6,100+ citations) in top venues (e.g., CVPR, ICCV, ECCV, AAAI, IJCV, ECAI) and holds a patent in behavior analysis (Huawei); his research is widely adopted by academia and industry. He also serves as an AI consultant and advisor to global companies, and has played leading roles in major international AI workshops and competitions.

Toward Trustworthy Embodied Agents: From Individuals to Teams

Modern intelligent embodied agents, such as service robots and autonomous vehicles, interact frequently with humans in dynamic, uncertain environments. They may also collaborate with each other as a team through effective communication to enhance task success, safety, and efficiency. This raises a few significant challenges. First, building reliable agents that safely navigate multi-agent scenarios requires scalable, generalizable prediction of surrounding agents’ behaviors and robust decision-making under environmental uncertainty in out-of-distribution (OOD) scenarios. Second, effective cooperation between agents requires efficient communication and information-fusion strategies and reliable task planning for complex long-horizon tasks.

In this talk, I will introduce a series of our recent work that addresses these challenges to enable safe and trustworthy embodied agents and their application to autonomous driving and service robots. Specifically, I will first demonstrate principled uncertainty quantification techniques and how they enable generalizable prediction and planning in out-of-distribution scenarios. Then, I will talk about effective approaches to enable efficient multi-agent communication and cooperation in centralized and decentralized settings.

About the Speaker

Dr. Jiachen Li is an Assistant Professor in the Department of Electrical and Computer Engineering (ECE) and a cooperating faculty in the Department of Computer Science and Engineering (CSE) at the University of California, Riverside. He is the Director of the Trustworthy Autonomous Systems Laboratory and is affiliated with the Riverside Artificial Intelligence Research Institute (RAISE), the Center for Robotics and Intelligent Systems (CRIS), and the Center for Environmental Research and Technology (CE-CERT).

DRaM-LHM: A Quaternion Framework for Iterative Camera Pose Estimation

We explore a quaternion adjugate matrix-based representation for rotational motion in the Perspective-n-Point (PnP) problem. Leveraging quadratic quaternion terms within a Determinant Ratio Matrix (DRaM) estimation framework, we extend its application to perspective scenarios, providing a robust and efficient initialization for iterative PnP pose estimation. Notably, by solving the orthographic projection least-squares problem, DRaM provides a reliable initialization that enhances the accuracy and stability of iterative PnP solvers. Experiments on synthetic and real data demonstrate its efficiency, accuracy, and robustness, particularly under high noise conditions. Furthermore, our nonminimal formulation ensures numerical stability, making it effective for real-world applications.
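
The initialize-then-refine pattern described above can be sketched with OpenCV, using a closed-form solver as a stand-in for DRaM (which is not implemented here) to seed the iterative PnP refinement; the point data and intrinsics below are synthetic:

```python
# Sketch of initialize-then-refine PnP with OpenCV; EPnP stands in for the
# DRaM initialization, which is not implemented here.
import cv2
import numpy as np

obj_pts = np.random.rand(20, 3).astype(np.float64)           # 3D points
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
rvec_gt = np.array([0.1, -0.2, 0.05])                        # ground-truth pose
tvec_gt = np.array([0.1, 0.0, 3.0])
img_pts, _ = cv2.projectPoints(obj_pts, rvec_gt, tvec_gt, K, None)

# Step 1: closed-form initialization (here EPnP, as a stand-in for DRaM).
ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None,
                              flags=cv2.SOLVEPNP_EPNP)

# Step 2: iterative (Levenberg-Marquardt) refinement seeded with that pose.
ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None, rvec, tvec,
                              useExtrinsicGuess=True,
                              flags=cv2.SOLVEPNP_ITERATIVE)
print(rvec.ravel(), tvec.ravel())  # should recover rvec_gt, tvec_gt
```

A good initialization is exactly what determines whether the iterative solver converges to the true pose under noise, which is the role the talk attributes to DRaM.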

About the Speaker

Chen Lin was a Research Fellow at the Simons Foundation, where she specialized in 3D computer vision and visual(-inertial) SLAM. Her research spans from classical multiview geometry to learning-based pose estimation and scene understanding. Her ICCV 2025 paper introduces a new framework for rotation and pose estimation built on advanced algebraic paradigms.

Nov 21 - Best of ICCV (Day 3)

Topic: Building an AI-Ready Data Stack: Integrating Lakehouse and Catalog for Unified Intelligence

Description: In today’s AI-driven world, data fragmentation is the biggest barrier to building intelligent systems. Join VeloDB and Datastrato for an in-depth session on how modern data architectures are evolving to support unified, AI-ready analytics.

In this session, Rayner Chen (VP of Engineering, VeloDB) will explore how catalogs break down data silos and enable truly unified analytics across lakehouse environments, sharing best practices for integrating structured and streaming data into a single, high-performance stack. Then, Jerry Shao (Co-founder & CTO, Datastrato) will dive into the role of metadata catalogs as context, showing how rich metadata and governance frameworks can power and control the next generation of AI applications.

Whether you’re building large-scale analytics platforms or preparing your organization for AI, this session will give you the architectural insights and practical frameworks to build a cohesive, AI-ready data stack.

  • Breaking Data Silos with Catalogs: How to Build Unified Analytics (Speaker: Rayner Chen, VP of Engineering @ VeloDB)
  • Catalogs as Context: Using metadata to power and govern the next wave of AI development (Speaker: Jerry Shao, Co-founder and CTO @ Datastrato)
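
For a flavor of catalog-mediated lakehouse access, here is a minimal PySpark sketch wiring an Iceberg REST catalog into a session; the catalog name, URI, and table path are placeholders, not VeloDB's or Datastrato's actual setup, and the Iceberg Spark runtime jar is assumed to be on the classpath:

```python
# Minimal sketch: query a lakehouse table through an Iceberg REST catalog.
# Catalog name, URI, and table path are placeholders for illustration.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("catalog-demo")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "rest")
    .config("spark.sql.catalog.lake.uri", "http://catalog-host:8181")
    .getOrCreate()
)

# The same catalog namespace resolves for every engine pointed at it,
# which is what makes cross-engine, unified analytics possible.
spark.sql("SELECT * FROM lake.sales.orders LIMIT 10").show()
```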

ABOUT US

WeCloudData is the leading accredited education institute in North America that focuses on Data Science, Data Engineering, DevOps, Artificial Intelligence, and Business Intelligence.

Developed by industry experts and hiring managers, and highly recognized by our hiring partners, WeCloudData's learning paths have helped many students make successful transitions into data and DevOps roles that fit their backgrounds and passions. WeCloudData provides a different, more practical teaching methodology, so that students not only learn the technical skills but also acquire the soft skills that will make them stand out in a work environment.

WeCloudData has also partnered with many big companies to help them adopt the latest tech in Data, AI, and DevOps. Visit our website for more information: https://weclouddata.com

Building an AI-Ready Data Stack: Integrating Lakehouse & Catalog
Richie – host @ DataCamp, Mo Chen – Data & Analytics Manager @ NatWest Group

The role of data analysts is evolving, not disappearing. With generative AI transforming the industry, many wonder if their analytical skills will soon become obsolete. But how is the relationship between human expertise and AI tools really changing? While AI excels at coding, debugging, and automating repetitive tasks, it struggles with understanding complex business problems and domain-specific challenges. What skills should today's data professionals focus on to remain relevant? How can you leverage AI as a partner rather than viewing it as a replacement? The balance between technical expertise and business acumen has never been more critical in navigating this changing landscape.

Mo Chen is a Data & Analytics Manager with over seven years of experience in financial and banking data. Currently at NatWest Group, Mo leads initiatives that enhance data management, automate reporting, and improve decision-making across the organization. After earning an MSc in Finance & Economics from the University of St Andrews, Mo launched a career in risk and credit portfolio management before transitioning into analytics. Blending economics, finance, and data engineering, Mo is skilled at turning large-scale financial data into actionable insight that supports efficiency and strategic planning. Beyond corporate life, Mo has become a passionate educator and community-builder. On YouTube, Mo hosts a fast-growing channel (185K+ subscribers, with millions of views) where he breaks down complex analytics concepts into bite-sized, actionable lessons.

In the episode, Richie and Mo explore the evolving role of data analysts, the impact of AI on coding and debugging, the importance of domain knowledge for career switchers, effective communication strategies in data analysis, and much more.

Links mentioned in the show: Mo's Website (Build a Data Portfolio Website), Mo's YouTube Channel, Connect with Mo, Get Certified as a Data Analyst, Related Episode: Career Skills for Data Professionals with Wes Kao (Co-Founder of Maven), Rewatch RADAR AI. New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.

AI/ML Analytics Data Engineering Data Management GenAI
DataFramed

Date and Time

Aug 28, 2025 at 10 AM Pacific

Location

Virtual - Register for the Zoom

Exploiting Vulnerabilities In CV Models Through Adversarial Attacks

As AI and computer vision models are leveraged more broadly in society, we should be better prepared for adversarial attacks by bad actors. In this talk, we'll cover some of the common methods for performing adversarial attacks on CV models. Adversarial attacks are deliberate attempts to deceive neural networks into generating incorrect predictions by making subtle alterations to the input data.
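
One of the most common such methods is the Fast Gradient Sign Method (FGSM), which nudges each input pixel in the direction of the loss gradient's sign; a minimal PyTorch sketch, one illustrative attack among those the talk may cover:

```python
# Minimal FGSM sketch: perturb each pixel by eps in the direction that
# increases the model's loss, producing an adversarial example.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # keep pixels in the valid range

# Toy usage with a stand-in classifier.
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
print(fgsm_attack(model, x, y).shape)
```

The perturbation is bounded by eps per pixel, which is why adversarial images often look unchanged to humans while flipping the model's prediction.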

About the Speaker

Elisa Chen is a data scientist at Meta on the Ads AI Infra team with 5+ years of experience in the industry.

EffiDec3D: An Optimized Decoder for High-Performance and Efficient 3D Medical Image Segmentation

Recent 3D deep networks such as SwinUNETR, SwinUNETRv2, and 3D UX-Net have shown promising performance by leveraging self-attention and large-kernel convolutions to capture the volumetric context. However, their substantial computational requirements limit their use in real-time and resource-constrained environments.

In this paper, we propose EffiDec3D, an optimized 3D decoder that employs a channel reduction strategy across all decoder stages and removes the high-resolution layers when their contribution to segmentation quality is minimal. Our optimized EffiDec3D decoder achieves a 96.4% reduction in #Params and a 93.0% reduction in #FLOPs compared to the decoder of original 3D UX-Net. Our extensive experiments on 12 different medical imaging tasks confirm that EffiDec3D not only significantly reduces the computational demands, but also maintains a performance level comparable to original models, thus establishing a new standard for efficient 3D medical image segmentation.
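
To make the channel-reduction idea concrete, here is a hypothetical slim 3D decoder block with a uniformly reduced channel width; this illustrates the strategy, not the paper's exact EffiDec3D architecture:

```python
# Hypothetical slim 3D decoder block illustrating channel reduction;
# not the exact EffiDec3D architecture.
import torch
import torch.nn as nn

class SlimDecoderBlock3D(nn.Module):
    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose3d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv3d(out_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)                       # double the spatial resolution
        x = torch.cat([x, skip], dim=1)      # fuse the encoder skip features
        return self.conv(x)

# Keeping out_ch small at every stage is where the #Params/#FLOPs savings
# come from; dropping near-redundant high-resolution stages saves even more.
block = SlimDecoderBlock3D(in_ch=64, skip_ch=32, out_ch=24)
x, skip = torch.randn(1, 64, 8, 8, 8), torch.randn(1, 32, 16, 16, 16)
print(block(x, skip).shape)  # torch.Size([1, 24, 16, 16, 16])
```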

About the Speaker

Md Mostafijur Rahman is a final-year Ph.D. candidate in Electrical and Computer Engineering at The University of Texas at Austin, advised by Dr. Radu Marculescu, where he builds efficient AI methods for biomedical imaging tasks such as segmentation, synthesis, and diagnosis. By uniting efficient architectures with data-efficient training, his work delivers robust and efficient clinically deployable imaging solutions.

What Makes a Good AV Dataset? Lessons from the Front Lines of Sensor Calibration and Projection

Getting autonomous vehicle data ready for real use, whether for training, simulation, or evaluation, isn’t just about collecting LIDAR and camera frames. It’s about making sure every point lands where it should, in the right frame, at the right time.

In this talk, we’ll break down what it actually takes to go from raw logs to a clean, usable AV dataset. We’ll walk through the practical process of validating transformations, aligning coordinate systems, checking intrinsics and extrinsics, and making sure your projected points actually show up on camera images. Along the way, we’ll share a checklist of common failure points and hard-won debugging tips.
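
The core projection check reduces to transforming LiDAR points into the camera frame with the extrinsics and applying the intrinsics; a minimal NumPy sketch, with matrix names and frame conventions assumed for illustration:

```python
# Minimal LiDAR-to-camera projection check; matrix names and frame
# conventions are illustrative assumptions.
import numpy as np

def project_lidar_to_image(pts_lidar, T_cam_lidar, K, img_w, img_h):
    """pts_lidar: (N, 3); T_cam_lidar: (4, 4) extrinsics; K: (3, 3) intrinsics."""
    pts_h = np.hstack([pts_lidar, np.ones((len(pts_lidar), 1))])   # (N, 4)
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]      # points in the camera frame
    in_front = cam[:, 2] > 0                    # drop points behind the camera
    uv = (K @ cam[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]                 # perspective divide
    in_image = ((uv[:, 0] >= 0) & (uv[:, 0] < img_w) &
                (uv[:, 1] >= 0) & (uv[:, 1] < img_h))
    return uv[in_image]

# Sanity check: points landing far outside the image usually indicate a bad
# extrinsic transform or a frame-convention (axis order) mismatch.
uv = project_lidar_to_image(np.random.rand(100, 3) * [4, 4, 10] + [0, 0, 1],
                            np.eye(4),
                            np.array([[700., 0., 640.],
                                      [0., 700., 360.],
                                      [0., 0., 1.]]),
                            1280, 720)
print(uv.shape)
```

Running exactly this kind of check per sensor pair, and per timestamp, is what catches the common failure points before they poison training or simulation.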

Finally, we’ll show how doing this right unlocks downstream tools like Omniverse Nurec and Cosmos, enabling powerful workflows like digital reconstruction, simulation, and large-scale synthetic data generation.

About the Speaker

Daniel Gural is a seasoned Machine Learning Engineer at Voxel51 with a strong passion for empowering Data Scientists and ML Engineers to unlock the full potential of their data.

Clustering in Computer Vision: From Theory to Applications

In today’s AI landscape, clustering techniques are crucial: they help organize unstructured data into meaningful groups, aiding knowledge discovery, feature analysis, and retrieval-augmented generation. From k-means to DBSCAN and hierarchical approaches like FINCH, selecting the right method is key, balancing scalability, noise sensitivity, and computational demands. This presentation provides an in-depth exploration of the current state of the art in clustering techniques, with a strong focus on their applications within computer vision.
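
As a quick illustration of the trade-offs mentioned above, k-means requires fixing k up front, while DBSCAN trades that for density parameters and an explicit notion of noise; a minimal scikit-learn sketch on placeholder embeddings:

```python
# Minimal clustering sketch on placeholder image embeddings; a real pipeline
# would use features from a vision backbone instead of random vectors.
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

feats = np.random.rand(500, 128)   # stand-in for image embeddings

# k-means: fast and scalable, but k must be chosen in advance.
km_labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(feats)

# DBSCAN: no k required, marks noise as label -1, but is sensitive to eps.
db_labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(feats)

print(len(set(km_labels)), len(set(db_labels)))
```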

About the Speaker

Constantin Seibold leads a research group developing machine learning methods in the diagnostic and interventional radiology department at University Hospital Heidelberg. His research aims to improve the daily lives of both doctors and patients.

Aug 28 - AI, ML and Computer Vision Meetup

Date and Time

Aug 28, 2025 at 10 AM Pacific

Location

Virtual - Register for the Zoom

Exploiting Vulnerabilities In CV Models Through Adversarial Attacks

As AI and computer vision models are leveraged more broadly in society, we should be better prepared for adversarial attacks by bad actors. Adversarial attacks are deliberate attempts to deceive neural networks into generating incorrect predictions by making subtle alterations to the input data. In this talk, we'll cover some of the common methods for performing adversarial attacks on CV models.
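One of the most common methods in this family is the Fast Gradient Sign Method (FGSM). The minimal PyTorch sketch below shows the core idea, with the epsilon value chosen purely for illustration:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method: a classic one-step adversarial attack.

    Nudges every input value by +/- epsilon along the sign of the loss
    gradient, a perturbation that is small per pixel but can flip the
    model's prediction. Assumes inputs are normalized to [0, 1].
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + epsilon * image.grad.sign()  # one signed gradient step
    return adv.clamp(0, 1).detach()            # keep pixels in a valid range
```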

About the Speaker

Elisa Chen is a data scientist at Meta on the Ads AI Infra team with 5+ years of experience in the industry.
