Feb 5 - AI, ML and Computer Vision Meetup

Join our virtual Meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision.

Feb 5, 2026, 9-11 AM Pacific, Online. Register for the Zoom!

Unlocking Visual Anomaly Detection: Navigating Challenges and Pioneering with Vision-Language Models

Visual anomaly detection (VAD) is pivotal for ensuring quality in manufacturing, medical imaging, and safety inspections, yet it continues to face challenges such as data scarcity, domain shifts, and the need for precise localization and reasoning. This seminar explores VAD fundamentals, core challenges, and recent advancements leveraging vision-language models and multimodal large language models (MLLMs). We contrast CLIP-based methods for efficient zero/few-shot detection with MLLM-driven reasoning for explainable, threshold-free outcomes. Drawing from recent studies, we highlight emerging trends, benchmarks, and future directions toward building adaptable, real-world VAD systems. This talk is designed for researchers and practitioners interested in AI-driven inspection and next-generation multimodal approaches.
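The CLIP-based zero-shot approach the abstract contrasts with MLLM reasoning can be illustrated with a minimal sketch: embed the image and two text prompts ("normal" vs. "anomalous"), then take the softmax probability assigned to the anomalous prompt as the score. The embeddings below are random stand-ins for illustration only; a real system would use a pretrained CLIP image/text encoder, and the temperature value is an assumption.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def anomaly_score(image_emb, normal_emb, anomalous_emb, temperature=0.07):
    # Softmax over cosine similarities to the two prompt embeddings;
    # the probability mass on the "anomalous" prompt is the score.
    sims = np.array([cosine(image_emb, normal_emb),
                     cosine(image_emb, anomalous_emb)]) / temperature
    probs = np.exp(sims - sims.max())
    probs /= probs.sum()
    return float(probs[1])  # P("anomalous")

# Toy demo with synthetic embeddings: an image embedding close to the
# "anomalous" prompt embedding should score high.
rng = np.random.default_rng(0)
normal_emb = rng.normal(size=512)
anomalous_emb = rng.normal(size=512)
defect_image = anomalous_emb + 0.1 * rng.normal(size=512)
print(anomaly_score(defect_image, normal_emb, anomalous_emb) > 0.5)  # True
```

Note how this formulation needs no per-product threshold tuning beyond the prompt pair, which is exactly the "threshold-free" property the abstract attributes to language-guided methods.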

About the Speaker

Hossein Kashiani is a fourth-year Ph.D. student at Clemson University. His research focuses on developing generalizable and trustworthy AI systems, with publications in top venues such as CVPR, WACV, ICIP, IJCB, and TBIOM. His work spans diverse applications, including anomaly detection, media forensics, biometrics, healthcare, and visual perception.

Data-Centric Lessons To Improve Speech-Language Pretraining

Spoken Question-Answering (SQA) is a core capability for useful and interactive artificial intelligence systems. Recently, several speech-language models (SpeechLMs) have been released with a specific focus on improving their SQA performance. However, a lack of controlled ablations of pretraining data processing and curation makes it challenging to understand what factors account for performance, despite substantial gains from similar studies in other data modalities. In this work, we address this gap by conducting a data-centric exploration for pretraining SpeechLMs.

We focus on three research questions fundamental to speech-language pretraining data:

  • How to process raw web-crawled audio content for speech-text pretraining;
  • How to construct synthetic pretraining datasets to augment web-crawled data;
  • How to interleave (text, audio) segments into training sequences.
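The third question above — interleaving (text, audio) segments into one training sequence — can be sketched in a few lines. The boundary tokens and segment layout here are illustrative assumptions, not the scheme SpeLangy actually uses:

```python
from typing import List, Tuple

# Hypothetical sentinel tokens marking modality boundaries; real SpeechLMs
# define their own special tokens in the tokenizer vocabulary.
AUDIO_START, AUDIO_END = "<audio>", "</audio>"

def interleave(segments: List[Tuple[str, list]]) -> list:
    """Flatten aligned (text, audio_tokens) pairs into a single training
    sequence, wrapping each audio span in boundary tokens so the model can
    tell the modalities apart."""
    seq = []
    for text, audio_tokens in segments:
        seq.extend(text.split())       # text side (word-level for brevity)
        seq.append(AUDIO_START)
        seq.extend(audio_tokens)       # discrete audio codec tokens
        seq.append(AUDIO_END)
    return seq

print(interleave([("hello world", ["a1", "a2"])]))
# ['hello', 'world', '<audio>', 'a1', 'a2', '</audio>']
```

The design choice being ablated is precisely things like this: segment granularity, ordering (text-first vs. audio-first), and how boundaries are marked.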

We apply the insights from our controlled data-centric ablations to pretrain a 3.8B-parameter SpeechLM, called SpeLangy, that outperforms models that are up to 3x larger by 10.2% absolute performance. We hope our findings highlight the impact of effective data curation for speech-language pretraining and guide future data-centric exploration in SpeechLMs.

About the Speaker

Vishaal Udandarao is a third-year ELLIS PhD student, jointly working with Matthias Bethge at The University of Tuebingen and Samuel Albanie at The University of Cambridge/Google DeepMind. He is also a part of the International Max Planck Research School for Intelligent Systems. He is mainly interested in understanding the generalisation properties of foundation models, both vision-language models (VLMs) and large multi-modal models (LMMs), through the lens of their pre-training and test data distributions. His research is funded by a Google PhD Fellowship in Machine Intelligence.

A Practical Pipeline for Synthetic Data with Nano Banana Pro + FiftyOne

Most computer-vision failures come from the rare cases, the dark corners, odd combinations, and edge conditions we never capture enough in real datasets. In this session, we walk through a practical end-to-end pipeline for generating targeted synthetic data using Google’s Nano Banana Pro and managing it with FiftyOne. We’ll explore how to translate dataset gaps into generation prompts, create thousands of high-quality synthetic images, automatically enrich them with metadata, and bring everything into FiftyOne for inspection, filtering, and validation. By the end, you’ll understand how to build a repeatable synthetic-first workflow that closes real vision gaps and improves model performance on the scenarios that matter most.
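The "translate dataset gaps into generation prompts" step can be sketched with plain Python: enumerate combinations of the underrepresented conditions and render one prompt per combination. The attribute names and prompt template are illustrative assumptions; the actual Nano Banana Pro generation call and FiftyOne ingestion are covered in the session itself.

```python
from itertools import product

def gaps_to_prompts(scene, conditions):
    """Cross every value of each underrepresented condition into one
    generation prompt per combination."""
    keys = list(conditions)
    prompts = []
    for combo in product(*(conditions[k] for k in keys)):
        attrs = ", ".join(f"{k}: {v}" for k, v in zip(keys, combo))
        prompts.append(f"A photorealistic photo of {scene} ({attrs})")
    return prompts

# Example: edge conditions rarely captured in a real driving dataset.
prompts = gaps_to_prompts(
    "a delivery truck at an intersection",
    {"lighting": ["night", "dusk"], "weather": ["heavy rain", "fog"]},
)
print(len(prompts))  # 4
```

Each prompt then drives one (or many) generations, and the condition values can be attached to the resulting samples as metadata for filtering and validation downstream.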

About the Speaker

Adonai Vera is a Machine Learning Engineer & DevRel at Voxel51, with over 7 years of experience building computer vision and machine learning models using TensorFlow, Docker, and OpenCV. He started as a software developer, moved into AI, led teams, and served as CTO. Today, he connects code and community to build open, production-ready AI, making technology simple, accessible, and reliable.

Making Computer Vision Models Faster: An Introduction to TensorRT Optimization

Modern computer vision applications demand real-time performance, yet many deep learning models struggle with high latency during deployment. This talk introduces how TensorRT can significantly accelerate inference by applying optimizations such as layer fusion, precision calibration, and efficient memory management. Attendees will learn the core concepts behind TensorRT, how it integrates into existing CV pipelines, and how to measure and benchmark improvements. Through practical examples and performance comparisons, the session will demonstrate how substantial speedups can be achieved with minimal model-accuracy loss. By the end, participants will understand when and how to apply TensorRT to make their CV models production-ready.
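The "measure and benchmark improvements" part generalizes beyond TensorRT: latency comparisons only mean something with warm-up runs discarded and a robust statistic over many timed runs. A minimal harness, with a placeholder callable standing in for the model forward pass (swap in the framework call and the TensorRT engine execution to compare the two):

```python
import time
import statistics

def benchmark(infer, n_warmup=10, n_runs=100):
    """Median wall-clock latency (ms) of an inference callable.
    Warm-up iterations are discarded so one-time setup cost (engine
    loading, kernel autotuning, cache warming) is not measured."""
    for _ in range(n_warmup):
        infer()
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        infer()
        times.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(times)

# Placeholder workload; a real comparison would time the baseline model
# and the TensorRT-optimized engine with identical inputs.
baseline = lambda: sum(i * i for i in range(10_000))
print(benchmark(baseline) > 0)  # True
```

The median (rather than the mean) is used because GPU timing distributions are typically skewed by occasional scheduling stalls.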

About the Speaker

Tushar Gadhiya is a Technical Lead at Infocusp Innovations, specialising in deep learning, computer vision, graph learning, and agentic AI. His experience spans academic research, as a PhD holder, and industry work, where he has contributed to multiple patents.

Feb 5 - AI, ML and Computer Vision Meetup

Join our virtual Meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision.

Feb 5, 2026 9 - 11 AM Pacific Online. Register for the Zoom!

Unlocking Visual Anomaly Detection: Navigating Challenges and Pioneering with Vision-Language Models

Visual anomaly detection (VAD) is pivotal for ensuring quality in manufacturing, medical imaging, and safety inspections, yet it continues to face challenges such as data scarcity, domain shifts, and the need for precise localization and reasoning. This seminar explores VAD fundamentals, core challenges, and recent advancements leveraging vision-language models and multimodal large language models (MLLMs). We contrast CLIP-based methods for efficient zero/few-shot detection with MLLM-driven reasoning for explainable, threshold-free outcomes. Drawing from recent studies, we highlight emerging trends, benchmarks, and future directions toward building adaptable, real-world VAD systems. This talk is designed for researchers and practitioners interested in AI-driven inspection and next-generation multimodal approaches.

About the Speaker

Hossein Kashiani is a fourth-year Ph.D. student at Clemson University. His research focuses on developing generalizable and trustworthy AI systems, with publications in top venues such as CVPR, WACV, ICIP, IJCB, and TBIOM. His work spans diverse applications, including anomaly detection, media forensics, biometrics, healthcare, and visual perception.

Data-Centric Lessons To Improve Speech-Language Pretraining

Spoken Question-Answering (SQA) is a core capability for useful and interactive artificial intelligence systems. Recently, several speech-language models (SpeechLMs) have been released with a specific focus on improving their SQA performance. However, a lack of controlled ablations of pretraining data processing and curation makes it challenging to understand what factors account for performance, despite substantial gains from similar studies in other data modalities. In this work, we address this gap by conducting a data-centric exploration for pretraining SpeechLMs.

We focus on three research questions fundamental to speech-language pretraining data:

  • How to process raw web-crawled audio content for speech-text pretraining;
  • How to construct synthetic pretraining datasets to augment web-crawled data;
  • How to interleave (text, audio) segments into training sequences.

We apply the insights from our controlled data-centric ablations to pretrain a 3.8B-parameter SpeechLM, called SpeLangy, that outperforms models that are up to 3x larger by 10.2% absolute performance. We hope our findings highlight the impact of effective data curation for speech-language pretraining and guide future data-centric exploration in SpeechLMs.

About the Speaker

Vishaal Udandarao is a third year ELLIS PhD student, jointly working with Matthias Bethge at The University of Tuebingen and Samuel Albanie at The University of Cambridge/Google Deepmind. He is also a part of the International Max Planck Research School for Intelligent Systems. He is mainly interested in understanding the generalisation properties of foundation models, both vision-language models (VLMs) and large multi-modal models (LMMs), through the lens of their pre-training and test data distributions. His research is funded by a Google PhD Fellowship in Machine Intelligence.

A Practical Pipeline for Synthetic Data with Nano Banana Pro + FiftyOne

Most computer-vision failures come from the rare cases, the dark corners, odd combinations, and edge conditions we never capture enough in real datasets. In this session, we walk through a practical end-to-end pipeline for generating targeted synthetic data using Google’s Nano Banana Pro and managing it with FiftyOne. We’ll explore how to translate dataset gaps into generation prompts, create thousands of high-quality synthetic images, automatically enrich them with metadata, and bring everything into FiftyOne for inspection, filtering, and validation. By the end, you’ll understand how to build a repeatable synthetic-first workflow that closes real vision gaps and improves model performance on the scenarios that matter most.

About the Speaker

Adonai Vera - Machine Learning Engineer & DevRel at Voxel51. With over 7 years of experience building computer vision and machine learning models using TensorFlow\, Docker\, and OpenCV. I started as a software developer\, moved into AI\, led teams\, and served as CTO. Today\, I connect code and community to build open\, production-ready AI\, making technology simple\, accessible\, and reliable.

Making Computer Vision Models Faster: An Introduction to TensorRT Optimization

Modern computer vision applications demand real-time performance, yet many deep learning models struggle with high latency during deployment. This talk introduces how TensorRT can significantly accelerate inference by applying optimizations such as layer fusion, precision calibration, and efficient memory management. Attendees will learn the core concepts behind TensorRT, how it integrates into existing CV pipelines, and how to measure and benchmark improvements. Through practical examples and performance comparisons, the session will demonstrate how substantial speedups can be achieved with minimal model-accuracy loss. By the end, participants will understand when and how to apply TensorRT to make their CV models production-ready.

About the Speaker

Tushar Gadhiya is a Technical Lead at Infocusp Innovations, specialising in deep learning, computer vision, graph learning, and agentic AI. My experience spans academic research as a PhD holder and industry work, where I have contributed to multiple patents.

Feb 5 - AI, ML and Computer Vision Meetup

Join our virtual Meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision.

Feb 5, 2026 9 - 11 AM Pacific Online. Register for the Zoom!

Unlocking Visual Anomaly Detection: Navigating Challenges and Pioneering with Vision-Language Models

Visual anomaly detection (VAD) is pivotal for ensuring quality in manufacturing, medical imaging, and safety inspections, yet it continues to face challenges such as data scarcity, domain shifts, and the need for precise localization and reasoning. This seminar explores VAD fundamentals, core challenges, and recent advancements leveraging vision-language models and multimodal large language models (MLLMs). We contrast CLIP-based methods for efficient zero/few-shot detection with MLLM-driven reasoning for explainable, threshold-free outcomes. Drawing from recent studies, we highlight emerging trends, benchmarks, and future directions toward building adaptable, real-world VAD systems. This talk is designed for researchers and practitioners interested in AI-driven inspection and next-generation multimodal approaches.

About the Speaker

Hossein Kashiani is a fourth-year Ph.D. student at Clemson University. His research focuses on developing generalizable and trustworthy AI systems, with publications in top venues such as CVPR, WACV, ICIP, IJCB, and TBIOM. His work spans diverse applications, including anomaly detection, media forensics, biometrics, healthcare, and visual perception.

Data-Centric Lessons To Improve Speech-Language Pretraining

Spoken Question-Answering (SQA) is a core capability for useful and interactive artificial intelligence systems. Recently, several speech-language models (SpeechLMs) have been released with a specific focus on improving their SQA performance. However, a lack of controlled ablations of pretraining data processing and curation makes it challenging to understand what factors account for performance, despite substantial gains from similar studies in other data modalities. In this work, we address this gap by conducting a data-centric exploration for pretraining SpeechLMs.

We focus on three research questions fundamental to speech-language pretraining data:

  • How to process raw web-crawled audio content for speech-text pretraining;
  • How to construct synthetic pretraining datasets to augment web-crawled data;
  • How to interleave (text, audio) segments into training sequences.

We apply the insights from our controlled data-centric ablations to pretrain a 3.8B-parameter SpeechLM, called SpeLangy, that outperforms models that are up to 3x larger by 10.2% absolute performance. We hope our findings highlight the impact of effective data curation for speech-language pretraining and guide future data-centric exploration in SpeechLMs.

About the Speaker

Vishaal Udandarao is a third year ELLIS PhD student, jointly working with Matthias Bethge at The University of Tuebingen and Samuel Albanie at The University of Cambridge/Google Deepmind. He is also a part of the International Max Planck Research School for Intelligent Systems. He is mainly interested in understanding the generalisation properties of foundation models, both vision-language models (VLMs) and large multi-modal models (LMMs), through the lens of their pre-training and test data distributions. His research is funded by a Google PhD Fellowship in Machine Intelligence.

A Practical Pipeline for Synthetic Data with Nano Banana Pro + FiftyOne

Most computer-vision failures come from the rare cases, the dark corners, odd combinations, and edge conditions we never capture enough in real datasets. In this session, we walk through a practical end-to-end pipeline for generating targeted synthetic data using Google’s Nano Banana Pro and managing it with FiftyOne. We’ll explore how to translate dataset gaps into generation prompts, create thousands of high-quality synthetic images, automatically enrich them with metadata, and bring everything into FiftyOne for inspection, filtering, and validation. By the end, you’ll understand how to build a repeatable synthetic-first workflow that closes real vision gaps and improves model performance on the scenarios that matter most.

About the Speaker

Adonai Vera - Machine Learning Engineer & DevRel at Voxel51. With over 7 years of experience building computer vision and machine learning models using TensorFlow\, Docker\, and OpenCV. I started as a software developer\, moved into AI\, led teams\, and served as CTO. Today\, I connect code and community to build open\, production-ready AI\, making technology simple\, accessible\, and reliable.

Making Computer Vision Models Faster: An Introduction to TensorRT Optimization

Modern computer vision applications demand real-time performance, yet many deep learning models struggle with high latency during deployment. This talk introduces how TensorRT can significantly accelerate inference by applying optimizations such as layer fusion, precision calibration, and efficient memory management. Attendees will learn the core concepts behind TensorRT, how it integrates into existing CV pipelines, and how to measure and benchmark improvements. Through practical examples and performance comparisons, the session will demonstrate how substantial speedups can be achieved with minimal model-accuracy loss. By the end, participants will understand when and how to apply TensorRT to make their CV models production-ready.

About the Speaker

Tushar Gadhiya is a Technical Lead at Infocusp Innovations, specialising in deep learning, computer vision, graph learning, and agentic AI. My experience spans academic research as a PhD holder and industry work, where I have contributed to multiple patents.

Feb 5 - AI, ML and Computer Vision Meetup

Join our virtual Meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision.

Feb 5, 2026 9 - 11 AM Pacific Online. Register for the Zoom!

Unlocking Visual Anomaly Detection: Navigating Challenges and Pioneering with Vision-Language Models

Visual anomaly detection (VAD) is pivotal for ensuring quality in manufacturing, medical imaging, and safety inspections, yet it continues to face challenges such as data scarcity, domain shifts, and the need for precise localization and reasoning. This seminar explores VAD fundamentals, core challenges, and recent advancements leveraging vision-language models and multimodal large language models (MLLMs). We contrast CLIP-based methods for efficient zero/few-shot detection with MLLM-driven reasoning for explainable, threshold-free outcomes. Drawing from recent studies, we highlight emerging trends, benchmarks, and future directions toward building adaptable, real-world VAD systems. This talk is designed for researchers and practitioners interested in AI-driven inspection and next-generation multimodal approaches.

About the Speaker

Hossein Kashiani is a fourth-year Ph.D. student at Clemson University. His research focuses on developing generalizable and trustworthy AI systems, with publications in top venues such as CVPR, WACV, ICIP, IJCB, and TBIOM. His work spans diverse applications, including anomaly detection, media forensics, biometrics, healthcare, and visual perception.

Data-Centric Lessons To Improve Speech-Language Pretraining

Spoken Question-Answering (SQA) is a core capability for useful and interactive artificial intelligence systems. Recently, several speech-language models (SpeechLMs) have been released with a specific focus on improving their SQA performance. However, a lack of controlled ablations of pretraining data processing and curation makes it challenging to understand what factors account for performance, despite substantial gains from similar studies in other data modalities. In this work, we address this gap by conducting a data-centric exploration for pretraining SpeechLMs.

We focus on three research questions fundamental to speech-language pretraining data:

  • How to process raw web-crawled audio content for speech-text pretraining;
  • How to construct synthetic pretraining datasets to augment web-crawled data;
  • How to interleave (text, audio) segments into training sequences.

We apply the insights from our controlled data-centric ablations to pretrain a 3.8B-parameter SpeechLM, called SpeLangy, that outperforms models that are up to 3x larger by 10.2% absolute performance. We hope our findings highlight the impact of effective data curation for speech-language pretraining and guide future data-centric exploration in SpeechLMs.

About the Speaker

Vishaal Udandarao is a third year ELLIS PhD student, jointly working with Matthias Bethge at The University of Tuebingen and Samuel Albanie at The University of Cambridge/Google Deepmind. He is also a part of the International Max Planck Research School for Intelligent Systems. He is mainly interested in understanding the generalisation properties of foundation models, both vision-language models (VLMs) and large multi-modal models (LMMs), through the lens of their pre-training and test data distributions. His research is funded by a Google PhD Fellowship in Machine Intelligence.

A Practical Pipeline for Synthetic Data with Nano Banana Pro + FiftyOne

Most computer-vision failures come from the rare cases, the dark corners, odd combinations, and edge conditions we never capture enough in real datasets. In this session, we walk through a practical end-to-end pipeline for generating targeted synthetic data using Google’s Nano Banana Pro and managing it with FiftyOne. We’ll explore how to translate dataset gaps into generation prompts, create thousands of high-quality synthetic images, automatically enrich them with metadata, and bring everything into FiftyOne for inspection, filtering, and validation. By the end, you’ll understand how to build a repeatable synthetic-first workflow that closes real vision gaps and improves model performance on the scenarios that matter most.

About the Speaker

Adonai Vera - Machine Learning Engineer & DevRel at Voxel51. With over 7 years of experience building computer vision and machine learning models using TensorFlow\, Docker\, and OpenCV. I started as a software developer\, moved into AI\, led teams\, and served as CTO. Today\, I connect code and community to build open\, production-ready AI\, making technology simple\, accessible\, and reliable.

Making Computer Vision Models Faster: An Introduction to TensorRT Optimization

Modern computer vision applications demand real-time performance, yet many deep learning models struggle with high latency during deployment. This talk introduces how TensorRT can significantly accelerate inference by applying optimizations such as layer fusion, precision calibration, and efficient memory management. Attendees will learn the core concepts behind TensorRT, how it integrates into existing CV pipelines, and how to measure and benchmark improvements. Through practical examples and performance comparisons, the session will demonstrate how substantial speedups can be achieved with minimal model-accuracy loss. By the end, participants will understand when and how to apply TensorRT to make their CV models production-ready.

About the Speaker

Tushar Gadhiya is a Technical Lead at Infocusp Innovations, specialising in deep learning, computer vision, graph learning, and agentic AI. My experience spans academic research as a PhD holder and industry work, where I have contributed to multiple patents.

Feb 5 - AI, ML and Computer Vision Meetup

Join our virtual Meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision.

Feb 5, 2026 9 - 11 AM Pacific Online. Register for the Zoom!

Unlocking Visual Anomaly Detection: Navigating Challenges and Pioneering with Vision-Language Models

Visual anomaly detection (VAD) is pivotal for ensuring quality in manufacturing, medical imaging, and safety inspections, yet it continues to face challenges such as data scarcity, domain shifts, and the need for precise localization and reasoning. This seminar explores VAD fundamentals, core challenges, and recent advancements leveraging vision-language models and multimodal large language models (MLLMs). We contrast CLIP-based methods for efficient zero/few-shot detection with MLLM-driven reasoning for explainable, threshold-free outcomes. Drawing from recent studies, we highlight emerging trends, benchmarks, and future directions toward building adaptable, real-world VAD systems. This talk is designed for researchers and practitioners interested in AI-driven inspection and next-generation multimodal approaches.

About the Speaker

Hossein Kashiani is a fourth-year Ph.D. student at Clemson University. His research focuses on developing generalizable and trustworthy AI systems, with publications in top venues such as CVPR, WACV, ICIP, IJCB, and TBIOM. His work spans diverse applications, including anomaly detection, media forensics, biometrics, healthcare, and visual perception.

Data-Centric Lessons To Improve Speech-Language Pretraining

Spoken Question-Answering (SQA) is a core capability for useful and interactive artificial intelligence systems. Recently, several speech-language models (SpeechLMs) have been released with a specific focus on improving their SQA performance. However, a lack of controlled ablations of pretraining data processing and curation makes it challenging to understand what factors account for performance, despite substantial gains from similar studies in other data modalities. In this work, we address this gap by conducting a data-centric exploration for pretraining SpeechLMs.

We focus on three research questions fundamental to speech-language pretraining data:

  • How to process raw web-crawled audio content for speech-text pretraining;
  • How to construct synthetic pretraining datasets to augment web-crawled data;
  • How to interleave (text, audio) segments into training sequences.

We apply the insights from our controlled data-centric ablations to pretrain a 3.8B-parameter SpeechLM, called SpeLangy, that outperforms models that are up to 3x larger by 10.2% absolute performance. We hope our findings highlight the impact of effective data curation for speech-language pretraining and guide future data-centric exploration in SpeechLMs.

About the Speaker

Vishaal Udandarao is a third year ELLIS PhD student, jointly working with Matthias Bethge at The University of Tuebingen and Samuel Albanie at The University of Cambridge/Google Deepmind. He is also a part of the International Max Planck Research School for Intelligent Systems. He is mainly interested in understanding the generalisation properties of foundation models, both vision-language models (VLMs) and large multi-modal models (LMMs), through the lens of their pre-training and test data distributions. His research is funded by a Google PhD Fellowship in Machine Intelligence.

A Practical Pipeline for Synthetic Data with Nano Banana Pro + FiftyOne

Most computer-vision failures come from the rare cases, the dark corners, odd combinations, and edge conditions we never capture enough in real datasets. In this session, we walk through a practical end-to-end pipeline for generating targeted synthetic data using Google’s Nano Banana Pro and managing it with FiftyOne. We’ll explore how to translate dataset gaps into generation prompts, create thousands of high-quality synthetic images, automatically enrich them with metadata, and bring everything into FiftyOne for inspection, filtering, and validation. By the end, you’ll understand how to build a repeatable synthetic-first workflow that closes real vision gaps and improves model performance on the scenarios that matter most.

About the Speaker

Adonai Vera - Machine Learning Engineer & DevRel at Voxel51. With over 7 years of experience building computer vision and machine learning models using TensorFlow\, Docker\, and OpenCV. I started as a software developer\, moved into AI\, led teams\, and served as CTO. Today\, I connect code and community to build open\, production-ready AI\, making technology simple\, accessible\, and reliable.

Making Computer Vision Models Faster: An Introduction to TensorRT Optimization

Modern computer vision applications demand real-time performance, yet many deep learning models struggle with high latency during deployment. This talk introduces how TensorRT can significantly accelerate inference by applying optimizations such as layer fusion, precision calibration, and efficient memory management. Attendees will learn the core concepts behind TensorRT, how it integrates into existing CV pipelines, and how to measure and benchmark improvements. Through practical examples and performance comparisons, the session will demonstrate how substantial speedups can be achieved with minimal model-accuracy loss. By the end, participants will understand when and how to apply TensorRT to make their CV models production-ready.

About the Speaker

Tushar Gadhiya is a Technical Lead at Infocusp Innovations, specialising in deep learning, computer vision, graph learning, and agentic AI. My experience spans academic research as a PhD holder and industry work, where I have contributed to multiple patents.

Feb 5 - AI, ML and Computer Vision Meetup

Join our virtual Meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision.

Feb 5, 2026 9 - 11 AM Pacific Online. Register for the Zoom!

Unlocking Visual Anomaly Detection: Navigating Challenges and Pioneering with Vision-Language Models

Visual anomaly detection (VAD) is pivotal for ensuring quality in manufacturing, medical imaging, and safety inspections, yet it continues to face challenges such as data scarcity, domain shifts, and the need for precise localization and reasoning. This seminar explores VAD fundamentals, core challenges, and recent advancements leveraging vision-language models and multimodal large language models (MLLMs). We contrast CLIP-based methods for efficient zero/few-shot detection with MLLM-driven reasoning for explainable, threshold-free outcomes. Drawing from recent studies, we highlight emerging trends, benchmarks, and future directions toward building adaptable, real-world VAD systems. This talk is designed for researchers and practitioners interested in AI-driven inspection and next-generation multimodal approaches.

About the Speaker

Hossein Kashiani is a fourth-year Ph.D. student at Clemson University. His research focuses on developing generalizable and trustworthy AI systems, with publications in top venues such as CVPR, WACV, ICIP, IJCB, and TBIOM. His work spans diverse applications, including anomaly detection, media forensics, biometrics, healthcare, and visual perception.

Data-Centric Lessons To Improve Speech-Language Pretraining

Spoken Question-Answering (SQA) is a core capability for useful and interactive artificial intelligence systems. Recently, several speech-language models (SpeechLMs) have been released with a specific focus on improving their SQA performance. However, a lack of controlled ablations of pretraining data processing and curation makes it challenging to understand what factors account for performance, despite substantial gains from similar studies in other data modalities. In this work, we address this gap by conducting a data-centric exploration for pretraining SpeechLMs.

We focus on three research questions fundamental to speech-language pretraining data:

  • How to process raw web-crawled audio content for speech-text pretraining;
  • How to construct synthetic pretraining datasets to augment web-crawled data;
  • How to interleave (text, audio) segments into training sequences.

We apply the insights from our controlled data-centric ablations to pretrain a 3.8B-parameter SpeechLM, called SpeLangy, that outperforms models that are up to 3x larger by 10.2% absolute performance. We hope our findings highlight the impact of effective data curation for speech-language pretraining and guide future data-centric exploration in SpeechLMs.

About the Speaker

Vishaal Udandarao is a third year ELLIS PhD student, jointly working with Matthias Bethge at The University of Tuebingen and Samuel Albanie at The University of Cambridge/Google Deepmind. He is also a part of the International Max Planck Research School for Intelligent Systems. He is mainly interested in understanding the generalisation properties of foundation models, both vision-language models (VLMs) and large multi-modal models (LMMs), through the lens of their pre-training and test data distributions. His research is funded by a Google PhD Fellowship in Machine Intelligence.

A Practical Pipeline for Synthetic Data with Nano Banana Pro + FiftyOne

Most computer-vision failures come from the rare cases, the dark corners, odd combinations, and edge conditions we never capture enough in real datasets. In this session, we walk through a practical end-to-end pipeline for generating targeted synthetic data using Google’s Nano Banana Pro and managing it with FiftyOne. We’ll explore how to translate dataset gaps into generation prompts, create thousands of high-quality synthetic images, automatically enrich them with metadata, and bring everything into FiftyOne for inspection, filtering, and validation. By the end, you’ll understand how to build a repeatable synthetic-first workflow that closes real vision gaps and improves model performance on the scenarios that matter most.
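The first step described above — translating dataset gaps into generation prompts — can be sketched as a simple prompt grid. This is a minimal illustration under assumed inputs; the gap categories, template, and function name are hypothetical, not part of Voxel51's or Google's actual tooling.

```python
# Hypothetical sketch of the "gaps -> prompts" step: expand a list of
# under-represented subjects and conditions into one generation prompt
# per combination. Template and categories are illustrative assumptions.
from itertools import product

def gaps_to_prompts(subjects, conditions, template="a photo of {subject}, {condition}"):
    """Build one generation prompt per (subject, condition) combination."""
    return [template.format(subject=s, condition=c) for s, c in product(subjects, conditions)]

prompts = gaps_to_prompts(["a delivery truck"], ["at night", "in heavy fog"])
# → ['a photo of a delivery truck, at night', 'a photo of a delivery truck, in heavy fog']
```

Each generated image could then be wrapped in a FiftyOne sample (e.g. `fo.Sample(filepath=...)` with the prompt stored as a field) so the synthetic batch can be inspected and filtered alongside real data.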

About the Speaker

Adonai Vera is a Machine Learning Engineer & DevRel at Voxel51, with over 7 years of experience building computer vision and machine learning models using TensorFlow, Docker, and OpenCV. He started as a software developer, moved into AI, led teams, and served as CTO. Today, he connects code and community to build open, production-ready AI, making technology simple, accessible, and reliable.

Making Computer Vision Models Faster: An Introduction to TensorRT Optimization

Modern computer vision applications demand real-time performance, yet many deep learning models struggle with high latency during deployment. This talk introduces how TensorRT can significantly accelerate inference by applying optimizations such as layer fusion, precision calibration, and efficient memory management. Attendees will learn the core concepts behind TensorRT, how it integrates into existing CV pipelines, and how to measure and benchmark improvements. Through practical examples and performance comparisons, the session will demonstrate how substantial speedups can be achieved with minimal model-accuracy loss. By the end, participants will understand when and how to apply TensorRT to make their CV models production-ready.
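Measuring and benchmarking improvements, as mentioned above, usually comes down to a careful latency harness. Below is a minimal, framework-agnostic sketch of such a harness; the warmup and percentile choices are common benchmarking practice, not taken from any specific TensorRT tutorial, and `fn` stands in for whatever inference callable (baseline or TensorRT-optimized engine) is being compared.

```python
# Minimal latency-benchmark harness of the kind used to quantify
# inference speedups. Warmup iterations absorb lazy initialization and
# cache effects; we report mean and p95 latency in milliseconds.
import time

def benchmark(fn, warmup=10, iters=100):
    """Return (mean_ms, p95_ms) latency of calling fn()."""
    for _ in range(warmup):
        fn()  # warm caches / trigger lazy initialization
    times = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        times.append((time.perf_counter() - start) * 1e3)
    times.sort()
    mean_ms = sum(times) / len(times)
    p95_ms = times[int(0.95 * len(times)) - 1]
    return mean_ms, p95_ms
```

Running the same harness on the original model and the optimized engine, with identical inputs and batch sizes, is what makes a reported speedup credible.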

About the Speaker

Tushar Gadhiya is a Technical Lead at Infocusp Innovations, specialising in deep learning, computer vision, graph learning, and agentic AI. His experience spans academic research as a PhD holder and industry work, where he has contributed to multiple patents.

Feb 5 - AI, ML and Computer Vision Meetup

Welcome to the Best of ICCV series, your virtual pass to some of the groundbreaking research, insights, and innovations that defined this year’s conference. Live streaming from the authors to you.

Date, Time and Location

Nov 21, 2025, 9 AM Pacific, Online. Register for the Zoom!

GECO: Geometrically Consistent Embedding with Lightspeed Inference

Recent advances in feature learning have shown that self-supervised vision foundation models can capture semantic correspondences but often lack awareness of underlying 3D geometry. GECO addresses this gap by producing geometrically coherent features that semantically distinguish parts based on geometry (e.g., left/right eyes, front/back legs). We propose a training framework based on optimal transport, enabling supervision beyond keypoints, even under occlusions and disocclusions. With a lightweight architecture, GECO runs at 30 fps, 98.2% faster than prior methods, while achieving state-of-the-art performance on PF-Pascal, APK, and CUB, improving PCK by 6.0%, 6.2%, and 4.1%, respectively. Finally, we show that PCK alone is insufficient to capture geometric quality and introduce new metrics and insights for more geometry-aware feature learning.
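For context on the metric the abstract critiques, PCK (Percentage of Correct Keypoints) counts a predicted keypoint as correct when it lies within a fraction alpha of the larger bounding-box dimension from the ground truth. The sketch below is a standard textbook formulation; the alpha=0.1 default is a common convention, not necessarily the paper's setting.

```python
# Standard PCK computation: a keypoint is "correct" if its Euclidean
# distance to the ground truth is at most alpha * max(h, w) of the
# object's bounding box. alpha=0.1 is a common convention (PCK@0.1).
import math

def pck(pred, gt, bbox_hw, alpha=0.1):
    """Fraction of predicted keypoints within alpha * max(h, w) of ground truth."""
    h, w = bbox_hw
    threshold = alpha * max(h, w)
    correct = sum(
        1 for (px, py), (gx, gy) in zip(pred, gt)
        if math.hypot(px - gx, py - gy) <= threshold
    )
    return correct / len(gt)

# Two of the three predictions fall within the 10-pixel threshold:
score = pck([(10, 10), (52, 50), (90, 95)], [(10, 12), (50, 50), (60, 60)], bbox_hw=(100, 100))
# → 0.666...
```

Note what the metric ignores: a left eye matched to a right eye can still score as "correct" if the two happen to be close in image space — precisely the geometric ambiguity the talk's new metrics aim to expose.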

About the Speaker

Regine Hartwig is a PhD student at the Technical University of Munich.

Proactive Comorbidity Prediction in HIV: Towards Fair and Trustworthy Care

HIV is a chronic infection that weakens the immune system and exposes patients to a high burden of comorbidities. While antiretroviral therapy has improved life expectancy, comorbidities remain a major challenge, and traditional screening protocols often fail to capture subtle risk patterns early enough. To address this, we develop a novel method trained on lab tests and demographic data from 2,200 patients in SE London. The method integrates feature interaction modeling, attention mechanisms, residual fusion and label-specific attention heads, outperforming TabNet, MLPs and classical machine learning models.

Our experiments show that incorporating demographic information improves predictive performance, though demographic recoverability analyses reveal that age and gender can still be inferred from lab data alone, raising fairness concerns. Finally, robustness checks confirm stable feature importance across cross-validation folds, reinforcing the trustworthiness of our approach.

About the Speaker

Dimitrios Kollias is an Associate Professor in Multimodal AI at Queen Mary University of London, specializing in machine/deep learning, trustworthy AI, computer vision, medical imaging & healthcare, behavior analysis, and HMI. He has published 80+ papers (h-index 39; 6,100+ citations) in top venues (e.g., CVPR, ICCV, ECCV, AAAI, IJCV, ECAI), holds a patent in behavior analysis (Huawei), and his research is widely adopted by academia and industry. He also serves as an AI consultant and advisor to global companies and has played leading roles in major international AI workshops and competitions.

Toward Trustworthy Embodied Agents: From Individuals to Teams

Modern intelligent embodied agents, such as service robots and autonomous vehicles, interact frequently with humans in dynamic, uncertain environments. They may also collaborate with each other as a team through effective communication to enhance task success, safety, and efficiency. This brings a few significant challenges. First, building reliable agents that safely navigate multi-agent scenarios requires scalable and generalizable prediction of surrounding agents’ behaviors and robust decision making under environmental uncertainty in out-of-distribution (OOD) scenarios. Second, effective cooperation between agents requires efficient communication and information fusion strategies and reliable task planning for complex long-horizon tasks.

In this talk, I will introduce a series of our recent work that addresses these challenges to enable safe and trustworthy embodied agents and their application to autonomous driving and service robots. Specifically, I will first demonstrate principled uncertainty quantification techniques and how they enable generalizable prediction and planning in out-of-distribution scenarios. Then, I will talk about effective approaches to enable efficient multi-agent communication and cooperation in centralized and decentralized settings.

About the Speaker

Dr. Jiachen Li is an Assistant Professor in the Department of Electrical and Computer Engineering (ECE) and a cooperating faculty in the Department of Computer Science and Engineering (CSE) at the University of California, Riverside. He is the Director of the Trustworthy Autonomous Systems Laboratory and is affiliated with the Riverside Artificial Intelligence Research Institute (RAISE), the Center for Robotics and Intelligent Systems (CRIS), and the Center for Environmental Research and Technology (CE-CERT).

DRaM-LHM: A Quaternion Framework for Iterative Camera Pose Estimation

We explore a quaternion adjugate matrix-based representation for rotational motion in the Perspective-n-Point (PnP) problem. Leveraging quadratic quaternion terms within a Determinant Ratio Matrix (DRaM) estimation framework, we extend its application to perspective scenarios, providing a robust and efficient initialization for iterative PnP pose estimation. Notably, by solving the orthographic projection least-squares problem, DRaM provides a reliable initialization that enhances the accuracy and stability of iterative PnP solvers. Experiments on synthetic and real data demonstrate its efficiency, accuracy, and robustness, particularly under high noise conditions. Furthermore, our nonminimal formulation ensures numerical stability, making it effective for real-world applications.
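As background for the quaternion machinery the abstract builds on, the standard conversion from a unit quaternion (w, x, y, z) to a 3x3 rotation matrix is sketched below. This is the textbook formula, not the paper's DRaM construction itself.

```python
# Standard unit-quaternion -> rotation-matrix conversion. The input is
# assumed normalized (w^2 + x^2 + y^2 + z^2 = 1); rows are returned
# in row-major order.

def quat_to_rotmat(w, x, y, z):
    """Rotation matrix of a unit quaternion (w, x, y, z), row-major."""
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

# The identity quaternion maps to the identity rotation:
R = quat_to_rotmat(1.0, 0.0, 0.0, 0.0)
```

Because each matrix entry is quadratic in the quaternion components, linear estimation methods naturally work with these quadratic terms — the kind of structure the Determinant Ratio Matrix formulation exploits.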

About the Speaker

Chen Lin was a Research Fellow at the Simons Foundation, where she specialized in 3D computer vision and visual(-inertial) SLAM. Her research spans from classical multiview geometry to learning-based pose estimation and scene understanding. Her ICCV 2025 paper introduces a new framework for rotation and pose estimation built on advanced algebraic paradigms.

Nov 21 - Best of ICCV (Day 3)

Welcome to the Best of ICCV series, your virtual pass to some of the groundbreaking research, insights, and innovations that defined this year’s conference. Live streaming from the authors to you.

Date, Time and Location

Nov 21, 2025 9 AM Pacific Online. Register for the Zoom!

GECO: Geometrically Consistent Embedding with Lightspeed Inference

Recent advances in feature learning have shown that self-supervised vision foundation models can capture semantic correspondences but often lack awareness of underlying 3D geometry. GECO addresses this gap by producing geometrically coherent features that semantically distinguish parts based on geometry (e.g., left/right eyes, front/back legs). We propose a training framework based on optimal transport, enabling supervision beyond keypoints, even under occlusions and disocclusions. With a lightweight architecture, GECO runs at 30 fps, 98.2% faster than prior methods, while achieving state-of-the-art performance on PFPascal, APK, and CUB, improving PCK by 6.0%, 6.2%, and 4.1%, respectively. Finally, we show that PCK alone is insufficient to capture geometric quality and introduce new metrics and insights for more geometry-aware feature learning

About the Speaker

Regine Hartwig is a PHD Graduate Student at the Technical University of Munich

Proactive Comorbidity Prediction in HIV: Towards Fair and Trustworthy Care

HIV is a chronic infection that weakens the immune system and exposes patients to a high burden of comorbidities. While antiretroviral therapy has improved life expectancy, comorbidities remain a major challenge, and traditional screening protocols often fail to capture subtle risk patterns early enough. To address this, we develop a novel method trained on lab tests and demographic data from 2,200 patients in SE London. The method integrates feature interaction modeling, attention mechanisms, residual fusion and label-specific attention heads, outperforming TabNet, MLPs and classical machine learning models.

Our experiments show that incorporating demographic information improves predictive performance, though demographic recoverability analyses reveal that age and gender can still be inferred from lab data alone, raising fairness concerns. Finally, robustness checks confirm stable feature importance across cross-validation folds, reinforcing the trustworthiness of our approach.

About the Speaker

Dimitrios Kollias is an Associate Professor in Multimodal AI at Queen Mary University of London, specializing in machine/deep learning, trustworthy AI, computer vision, medical imaging & healthcare, behavior analysis, HMI. I have published 80+ papers (h-index 39; 6100+ citations) in top venues (e.g., CVPR, ICCV, ECCV, AAAI, IJCV, ECAI), invented a patent in behavior analysis (Huawei) and my research is widely adopted by academia and industry. I also serve as AI consultant and advisor to global companies, and have played leading roles in major international AI workshops and competitions.

Toward Trustworthy Embodied Agents: From Individuals to Teams

Modern intelligent embodied agents, such as service robots and autonomous vehicles, interact frequently with humans in dynamic, uncertain environments. They may also collaborate with each other as a team through effective communication to enhance task success, safety, and efficiency. These brings a few significant challenges. First, building reliable agents that safely navigate multi-agent scenarios requires scalable and generalizable prediction of surrounding agents’ behaviors and robust decision making under environmental uncertainty in out-of-distribution (OOD) scenarios. Second, effective cooperation between agents requires efficient communication and information fusion strategies and reliable task planning for complex long-horizon tasks.

In this talk, I will introduce a series of our recent work that addresses these challenges to enable safe and trustworthy embodied agents and their application to autonomous driving and service robots. Specifically, I will first demonstrate principled uncertainty quantification techniques and how they enable generalizable prediction and planning in out-of-distribution scenarios. Then, I will talk about effective approaches to enable efficient multi-agent communication and cooperation in centralized and decentralized settings.

About the Speaker

Dr. Jiachen Li is an Assistant Professor in the Department of Electrical and Computer Engineering (ECE) and a cooperating faculty in the Department of Computer Science and Engineering (CSE) at the University of California, Riverside. He is the Director of the Trustworthy Autonomous Systems Laboratory and is affiliated with the Riverside Artificial Intelligence Research Institute (RAISE), the Center for Robotics and Intelligent Systems (CRIS), and the Center for Environmental Research and Technology (CE-CERT).

DRaM-LHM: A Quaternion Framework for Iterative Camera Pose Estimation

We explore a quaternion adjugate matrix-based representation for rotational motion in the Perspective-n-Point (PnP) problem. Leveraging quadratic quaternion terms within a Determinant Ratio Matrix (DRaM) estimation framework, we extend its application to perspective scenarios, providing a robust and efficient initialization for iterative PnP pose estimation. Notably, by solving the orthographic projection least-squares problem, DRaM provides a reliable initialization that enhances the accuracy and stability of iterative PnP solvers. Experiments on synthetic and real data demonstrate its efficiency, accuracy, and robustness, particularly under high noise conditions. Furthermore, our nonminimal formulation ensures numerical stability, making it effective for real-world applications.

About the Speaker

Chen Lin was a Research Fellow at the Simons Foundation, where she specialized in 3D computer vision and visual(-inertial) SLAM. Her research spans from classical multiview geometry to learning-based pose estimation and scene understanding. Her ICCV 2025 paper introduces a new framework for rotation and pose estimation built on advanced algebraic paradigms.

Nov 21 - Best of ICCV (Day 3)

Welcome to the Best of ICCV series, your virtual pass to some of the groundbreaking research, insights, and innovations that defined this year’s conference. Live streaming from the authors to you.

Date, Time and Location

Nov 21, 2025 9 AM Pacific Online. Register for the Zoom!

GECO: Geometrically Consistent Embedding with Lightspeed Inference

Recent advances in feature learning have shown that self-supervised vision foundation models can capture semantic correspondences but often lack awareness of underlying 3D geometry. GECO addresses this gap by producing geometrically coherent features that semantically distinguish parts based on geometry (e.g., left/right eyes, front/back legs). We propose a training framework based on optimal transport, enabling supervision beyond keypoints, even under occlusions and disocclusions. With a lightweight architecture, GECO runs at 30 fps, 98.2% faster than prior methods, while achieving state-of-the-art performance on PFPascal, APK, and CUB, improving PCK by 6.0%, 6.2%, and 4.1%, respectively. Finally, we show that PCK alone is insufficient to capture geometric quality and introduce new metrics and insights for more geometry-aware feature learning

About the Speaker

Regine Hartwig is a PHD Graduate Student at the Technical University of Munich

Proactive Comorbidity Prediction in HIV: Towards Fair and Trustworthy Care

HIV is a chronic infection that weakens the immune system and exposes patients to a high burden of comorbidities. While antiretroviral therapy has improved life expectancy, comorbidities remain a major challenge, and traditional screening protocols often fail to capture subtle risk patterns early enough. To address this, we develop a novel method trained on lab tests and demographic data from 2,200 patients in SE London. The method integrates feature interaction modeling, attention mechanisms, residual fusion and label-specific attention heads, outperforming TabNet, MLPs and classical machine learning models.

Our experiments show that incorporating demographic information improves predictive performance, though demographic recoverability analyses reveal that age and gender can still be inferred from lab data alone, raising fairness concerns. Finally, robustness checks confirm stable feature importance across cross-validation folds, reinforcing the trustworthiness of our approach.

About the Speaker

Dimitrios Kollias is an Associate Professor in Multimodal AI at Queen Mary University of London, specializing in machine/deep learning, trustworthy AI, computer vision, medical imaging & healthcare, behavior analysis, HMI. I have published 80+ papers (h-index 39; 6100+ citations) in top venues (e.g., CVPR, ICCV, ECCV, AAAI, IJCV, ECAI), invented a patent in behavior analysis (Huawei) and my research is widely adopted by academia and industry. I also serve as AI consultant and advisor to global companies, and have played leading roles in major international AI workshops and competitions.

Toward Trustworthy Embodied Agents: From Individuals to Teams

Modern intelligent embodied agents, such as service robots and autonomous vehicles, interact frequently with humans in dynamic, uncertain environments. They may also collaborate with each other as a team through effective communication to enhance task success, safety, and efficiency. These brings a few significant challenges. First, building reliable agents that safely navigate multi-agent scenarios requires scalable and generalizable prediction of surrounding agents’ behaviors and robust decision making under environmental uncertainty in out-of-distribution (OOD) scenarios. Second, effective cooperation between agents requires efficient communication and information fusion strategies and reliable task planning for complex long-horizon tasks.

Nov 21 - Best of ICCV (Day 3)

Welcome to the Best of ICCV series, your virtual pass to some of the groundbreaking research, insights, and innovations that defined this year’s conference. Live streaming from the authors to you.

Date, Time and Location

Nov 21, 2025 9 AM Pacific Online. Register for the Zoom!

GECO: Geometrically Consistent Embedding with Lightspeed Inference

Recent advances in feature learning have shown that self-supervised vision foundation models can capture semantic correspondences but often lack awareness of underlying 3D geometry. GECO addresses this gap by producing geometrically coherent features that semantically distinguish parts based on geometry (e.g., left/right eyes, front/back legs). We propose a training framework based on optimal transport, enabling supervision beyond keypoints, even under occlusions and disocclusions. With a lightweight architecture, GECO runs at 30 fps, 98.2% faster than prior methods, while achieving state-of-the-art performance on PFPascal, APK, and CUB, improving PCK by 6.0%, 6.2%, and 4.1%, respectively. Finally, we show that PCK alone is insufficient to capture geometric quality and introduce new metrics and insights for more geometry-aware feature learning.

About the Speaker

Regine Hartwig is a Ph.D. student at the Technical University of Munich.
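As a rough illustration of the optimal-transport machinery such a training framework relies on (not the authors' implementation; the cost matrix, uniform marginals, and regularization strength below are placeholder choices), entropic OT between two sets of patch descriptors can be computed with Sinkhorn iterations:

```python
import numpy as np

def sinkhorn(cost, eps=0.05, n_iters=200):
    """Entropic optimal transport with uniform marginals.

    cost: (n, m) pairwise cost matrix, e.g. 1 - cosine similarity
    between patch descriptors of two images.
    Returns a transport plan whose rows/columns sum to the marginals.
    """
    n, m = cost.shape
    a = np.ones(n) / n               # uniform source marginal
    b = np.ones(m) / m               # uniform target marginal
    K = np.exp(-cost / eps)          # Gibbs kernel
    u, v = np.ones(n), np.ones(m)    # Sinkhorn scaling vectors
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)

# Toy example: softly match 4 source patches to 5 target patches.
rng = np.random.default_rng(0)
cost = rng.random((4, 5))
plan = sinkhorn(cost)
print(plan.sum(axis=1))  # row sums ~ 1/4 (uniform source marginal)
```

The resulting soft matching is defined for every patch, which is what allows supervision to extend beyond sparse keypoint annotations.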

Proactive Comorbidity Prediction in HIV: Towards Fair and Trustworthy Care

HIV is a chronic infection that weakens the immune system and exposes patients to a high burden of comorbidities. While antiretroviral therapy has improved life expectancy, comorbidities remain a major challenge, and traditional screening protocols often fail to capture subtle risk patterns early enough. To address this, we develop a novel method trained on lab tests and demographic data from 2,200 patients in SE London. The method integrates feature interaction modeling, attention mechanisms, residual fusion, and label-specific attention heads, outperforming TabNet, MLPs, and classical machine learning models.

Our experiments show that incorporating demographic information improves predictive performance, though demographic recoverability analyses reveal that age and gender can still be inferred from lab data alone, raising fairness concerns. Finally, robustness checks confirm stable feature importance across cross-validation folds, reinforcing the trustworthiness of our approach.

About the Speaker

Dimitrios Kollias is an Associate Professor in Multimodal AI at Queen Mary University of London, specializing in machine/deep learning, trustworthy AI, computer vision, medical imaging and healthcare, behavior analysis, and human-machine interaction. He has published 80+ papers (h-index 39; 6,100+ citations) in top venues (e.g., CVPR, ICCV, ECCV, AAAI, IJCV, ECAI), holds a patent in behavior analysis (Huawei), and his research is widely adopted by academia and industry. He also serves as an AI consultant and advisor to global companies and has played leading roles in major international AI workshops and competitions.
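A label-specific attention head of the kind the abstract describes can be sketched in a few lines (a toy re-implementation with random stand-in weights, not the authors' model; `queries` and `readout` stand for learned parameters):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def label_specific_heads(x, queries, readout):
    """One attention head per comorbidity label over tabular features.

    x:       (n_features, d)  embeddings of lab tests / demographics
    queries: (n_labels, d)    a learned query vector per label
    readout: (n_labels, d)    a per-label linear readout
    """
    attn = softmax(queries @ x.T)           # each label attends to features
    context = attn @ x                      # label-specific feature summary
    logits = (context * readout).sum(-1)    # one logit per label
    return 1.0 / (1.0 + np.exp(-logits)), attn

rng = np.random.default_rng(0)
x = rng.normal(size=(12, 8))                # 12 tabular features, dim 8
probs, attn = label_specific_heads(
    x, rng.normal(size=(3, 8)), rng.normal(size=(3, 8)))
print(probs.shape)                          # one probability per comorbidity
```

Because each label owns its attention weights, the model can report which lab tests drove each comorbidity prediction, which is what supports the feature-importance robustness checks mentioned above.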

Toward Trustworthy Embodied Agents: From Individuals to Teams

Modern intelligent embodied agents, such as service robots and autonomous vehicles, interact frequently with humans in dynamic, uncertain environments. They may also collaborate with each other as a team through effective communication to enhance task success, safety, and efficiency. These settings raise significant challenges. First, building reliable agents that safely navigate multi-agent scenarios requires scalable and generalizable prediction of surrounding agents’ behaviors and robust decision-making under environmental uncertainty in out-of-distribution (OOD) scenarios. Second, effective cooperation between agents requires efficient communication and information-fusion strategies and reliable task planning for complex long-horizon tasks.

In this talk, I will introduce a series of our recent works that address these challenges to enable safe and trustworthy embodied agents, with applications to autonomous driving and service robots. Specifically, I will first demonstrate principled uncertainty quantification techniques and how they enable generalizable prediction and planning in out-of-distribution scenarios. Then, I will discuss effective approaches for efficient multi-agent communication and cooperation in centralized and decentralized settings.

About the Speaker

Dr. Jiachen Li is an Assistant Professor in the Department of Electrical and Computer Engineering (ECE) and a cooperating faculty in the Department of Computer Science and Engineering (CSE) at the University of California, Riverside. He is the Director of the Trustworthy Autonomous Systems Laboratory and is affiliated with the Riverside Artificial Intelligence Research Institute (RAISE), the Center for Robotics and Intelligent Systems (CRIS), and the Center for Environmental Research and Technology (CE-CERT).

DRaM-LHM: A Quaternion Framework for Iterative Camera Pose Estimation

We explore a quaternion adjugate matrix-based representation for rotational motion in the Perspective-n-Point (PnP) problem. Leveraging quadratic quaternion terms within a Determinant Ratio Matrix (DRaM) estimation framework, we extend its application to perspective scenarios, providing a robust and efficient initialization for iterative PnP pose estimation. Notably, by solving the orthographic projection least-squares problem, DRaM provides a reliable initialization that enhances the accuracy and stability of iterative PnP solvers. Experiments on synthetic and real data demonstrate its efficiency, accuracy, and robustness, particularly under high noise conditions. Furthermore, our nonminimal formulation ensures numerical stability, making it effective for real-world applications.

About the Speaker

Chen Lin was a Research Fellow at the Simons Foundation, where she specialized in 3D computer vision and visual(-inertial) SLAM. Her research spans from classical multiview geometry to learning-based pose estimation and scene understanding. Her ICCV 2025 paper introduces a new framework for rotation and pose estimation built on advanced algebraic paradigms.
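The quaternion representation underlying this line of work can be shown with a minimal sketch: the entries of the rotation matrix are quadratic in the quaternion components, which is the property the DRaM adjugate-matrix formulation exploits (this is only the basic representation, not the DRaM solver itself):

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion q = (w, x, y, z).

    Every entry is a quadratic form in the quaternion components.
    """
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# A 90-degree rotation about the z-axis.
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
R = quat_to_rot(q)
print(R @ np.array([1.0, 0.0, 0.0]))  # x-axis maps to approximately the y-axis
```

Because the parametrization is quadratic rather than trigonometric, least-squares estimators over these terms stay smooth and numerically well-behaved, which is what makes them attractive as initializers for iterative PnP.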

Nov 19 - Best of ICCV (Day 1)

Welcome to the Best of ICCV series, your virtual pass to some of the groundbreaking research, insights, and innovations that defined this year’s conference. Live streaming from the authors to you.

Date, Time and Location

Nov 19, 2025 9 AM Pacific Online. Register for the Zoom!

AnimalClue: Recognizing Animals by their Traces

Wildlife observation plays an important role in biodiversity conservation, necessitating robust methodologies for monitoring wildlife populations and interspecies interactions. Recent advances in computer vision have significantly contributed to automating fundamental wildlife observation tasks, such as animal detection and species identification. However, accurately identifying species from indirect evidence like footprints and feces remains relatively underexplored, despite its importance for wildlife monitoring.

To bridge this gap, we introduce AnimalClue, the first large-scale dataset for species identification from images of indirect evidence. Our dataset consists of 159,605 bounding boxes encompassing five categories of indirect clues: footprints, feces, eggs, bones, and feathers. It covers 968 species, 200 families, and 65 orders. Each image is annotated with species-level labels, bounding boxes or segmentation masks, and fine-grained trait information, including activity patterns and habitat preferences. Unlike existing datasets primarily focused on direct visual features (e.g., animal appearances), AnimalClue presents unique challenges for classification, detection, and instance segmentation tasks due to the need for recognizing more detailed and subtle visual features. In our experiments, we extensively evaluate representative vision models and identify key challenges in animal identification from their traces.

About the Speaker

Risa Shinoda received her M.S. and Ph.D. in Agricultural Science from Kyoto University in 2022 and 2025. Since April 2025, she has been serving as a Specially Appointed Assistant Professor at the Graduate School of Information Science and Technology, the University of Osaka. She is engaged in research on the application of image recognition to plants and animals, as well as vision-language models.

LOTS of Fashion! Multi-Conditioning for Image Generation via Sketch-Text Pairing

Fashion design is a complex creative process that blends visual and textual expressions. Designers convey ideas through sketches, which define spatial structure and design elements, and textual descriptions, capturing material, texture, and stylistic details. In this paper, we present LOcalized Text and Sketch for fashion image generation (LOTS), an approach for compositional sketch-text based generation of complete fashion outlooks. LOTS leverages a global description with paired localized sketch + text information for conditioning and introduces a novel step-based merging strategy for diffusion adaptation.

First, a Modularized Pair-Centric representation encodes sketches and text into a shared latent space while preserving independent localized features; then, a Diffusion Pair Guidance phase integrates both local and global conditioning via attention-based guidance within the diffusion model’s multi-step denoising process. To validate our method, we build on Fashionpedia to release Sketchy, the first fashion dataset where multiple text-sketch pairs are provided per image. Quantitative results show LOTS achieves state-of-the-art image generation performance on both global and localized metrics, while qualitative examples and a human evaluation study highlight its unprecedented level of design customization.

About the Speaker

Federico Girella is a third-year Ph.D. student at the University of Verona (Italy), supervised by Prof. Marco Cristani, with expected graduation in May 2026. His research involves joint representations in the Image and Language multi-modal domain, working with deep neural networks such as (Large) Vision and Language Models and Text-to-Image Generative Models. His main body of work focuses on Text-to-Image Retrieval and Generation in the Fashion domain.

ProtoMedX: Explainable Multi-Modal Prototype Learning for Bone Health Assessment

Early detection of osteoporosis and osteopenia is critical, yet most AI models for bone health rely solely on imaging and offer little transparency into their decisions. In this talk, I will present ProtoMedX, the first prototype-based framework that combines lumbar spine DEXA scans with patient clinical records to deliver accurate and inherently explainable predictions.

Unlike black-box deep networks, ProtoMedX classifies patients by comparing them to learned case-based prototypes, mirroring how clinicians reason in practice. Our method not only achieves state-of-the-art accuracy on a real NHS dataset of 4,160 patients but also provides clear, interpretable explanations aligned with the upcoming EU AI Act requirements for high-risk medical AI. Beyond bone health, this work illustrates how prototype learning can make multi-modal AI both powerful and transparent, offering a blueprint for other safety-critical domains.

About the Speaker

Alvaro Lopez is a PhD candidate in Explainable AI at Lancaster University and an AI Research Associate at J.P. Morgan in London. His research focuses on prototype-based learning, multi-modal AI, and AI security. He has led projects on medical AI, fraud detection, and adversarial robustness, with applications ranging from healthcare to financial systems.
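The prototype-based decision rule at the heart of such a framework can be sketched in a few lines (toy 2-D prototypes and class names, purely illustrative; the real model operates on fused DEXA + clinical embeddings with learned prototypes):

```python
import numpy as np

def predict_by_prototype(z, prototypes, labels):
    """Classify an embedding by its nearest learned prototype.

    The matched prototype doubles as the explanation: the prediction
    is justified by similarity to a concrete learned case.
    """
    dists = np.linalg.norm(prototypes - z, axis=1)  # distance to each prototype
    k = int(np.argmin(dists))
    return labels[k], k, dists[k]

# Toy prototypes for three bone-health classes (illustrative only).
prototypes = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
labels = ["normal", "osteopenia", "osteoporosis"]
cls, idx, dist = predict_by_prototype(np.array([1.1, 0.9]), prototypes, labels)
print(cls)  # → osteopenia
```

Reporting the matched prototype and its distance gives the case-based, clinician-style justification the talk contrasts with black-box networks.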

CLASP: Adaptive Spectral Clustering for Unsupervised Per-Image Segmentation

We introduce CLASP (Clustering via Adaptive Spectral Processing), a lightweight framework for unsupervised image segmentation that operates without any labeled data or fine-tuning. CLASP first extracts per-patch features using a self-supervised ViT encoder (DINO); then, it builds an affinity matrix and applies spectral clustering. To avoid manual tuning, we select the segment count automatically with an eigengap-silhouette search, and we sharpen the boundaries with a fully connected DenseCRF. Despite its simplicity and training-free nature, CLASP attains competitive mIoU and pixel-accuracy on COCO-Stuff and ADE20K, matching recent unsupervised baselines. The zero-training design makes CLASP a strong, easily reproducible baseline for large unannotated corpora, especially common in digital advertising and marketing workflows such as brand-safety screening, creative asset curation, and social-media content moderation.

About the Speaker

Max Curie is a Research Scientist at Integral Ad Science, building fast, lightweight solutions for brand safety, multi-media classification, and recommendation systems. As a former nuclear physicist at Princeton University, he brings rigorous analytical thinking and modeling discipline from his physics background to advance ad tech.
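The automatic segment-count step can be illustrated on toy 2-D points (a minimal sketch of eigengap selection only, under assumed Gaussian affinities; CLASP additionally uses DINO patch features, a silhouette criterion, and DenseCRF refinement):

```python
import numpy as np

def choose_k_by_eigengap(points, sigma=1.0, k_max=5):
    """Pick a cluster count from the normalized graph Laplacian spectrum.

    The number of near-zero eigenvalues tracks the number of well-separated
    groups, so the largest gap between consecutive eigenvalues suggests k.
    """
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2 * sigma**2))             # Gaussian affinity matrix
    np.fill_diagonal(A, 0.0)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(1)))
    L = np.eye(len(points)) - d_inv_sqrt @ A @ d_inv_sqrt
    eigs = np.linalg.eigvalsh(L)                 # ascending eigenvalues
    gaps = np.diff(eigs[:k_max + 1])
    return int(np.argmax(gaps)) + 1              # k = position of largest gap

# Two tight, well-separated blobs should yield k = 2.
rng = np.random.default_rng(0)
blob_a = rng.normal([0, 0], 0.1, size=(5, 2))
blob_b = rng.normal([10, 10], 0.1, size=(5, 2))
k = choose_k_by_eigengap(np.vstack([blob_a, blob_b]))
print(k)  # → 2
```

With k chosen this way, the spectral embedding can then be clustered and refined, removing the per-image manual tuning the abstract calls out.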

Nov 19 - Best of ICCV (Day 1)

Welcome to the Best of ICCV series, your virtual pass to some of the groundbreaking research, insights, and innovations that defined this year’s conference. Live streaming from the authors to you.

Date, Time and Location

Nov 19, 2025 9 AM Pacific Online. Register for the Zoom!

AnimalClue: Recognizing Animals by their Traces

Wildlife observation plays an important role in biodiversity conservation, necessitating robust methodologies for monitoring wildlife populations and interspecies interactions. Recent advances in computer vision have significantly contributed to automating fundamental wildlife observation tasks, such as animal detection and species identification. However, accurately identifying species from indirect evidence like footprints and feces remains relatively underexplored, despite its importance in contributing to wildlife monitoring.

To bridge this gap, we introduce AnimalClue, the first large-scale dataset for species identification from images of indirect evidence. Our dataset consists of 159,605 bounding boxes encompassing five categories of indirect clues: footprints, feces, eggs, bones, and feathers. It covers 968 species, 200 families, and 65 orders. Each image is annotated with species-level labels, bounding boxes or segmentation masks, and fine-grained trait information, including activity patterns and habitat preferences. Unlike existing datasets primarily focused on direct visual features (e.g., animal appearances), AnimalClue presents unique challenges for classification, detection, and instance segmentation tasks due to the need for recognizing more detailed and subtle visual features. In our experiments, we extensively evaluate representative vision models and identify key challenges in animal identification from their traces.
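To make the annotation structure concrete, here is a minimal sketch of what one record in a dataset like this could look like. The field names and values are hypothetical stand-ins, not AnimalClue's actual schema; they simply mirror the elements the abstract lists (clue category, species label, bounding box, trait information).

```python
from dataclasses import dataclass, field

# The five indirect-clue categories named in the abstract.
CLUE_TYPES = {"footprint", "feces", "egg", "bone", "feather"}

@dataclass
class TraceAnnotation:
    image_id: str
    clue_type: str                 # one of the five indirect-clue categories
    species: str                   # species-level label
    bbox: tuple                    # (x, y, width, height) in pixels
    traits: dict = field(default_factory=dict)  # e.g. activity pattern, habitat

# Hypothetical example record.
ann = TraceAnnotation("img_0001", "footprint", "Vulpes vulpes",
                      (34, 50, 120, 96), {"activity": "nocturnal"})
```

A detection or segmentation model would consume the `bbox` (or a mask in its place), while the `traits` dictionary supports the fine-grained attribute tasks.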

About the Speaker

Risa Shinoda received her M.S. and Ph.D. in Agricultural Science from Kyoto University in 2022 and 2025, respectively. Since April 2025, she has been a Specially Appointed Assistant Professor at the Graduate School of Information Science and Technology, The University of Osaka. Her research focuses on applying image recognition to plants and animals, as well as on vision-language models.

LOTS of Fashion! Multi-Conditioning for Image Generation via Sketch-Text Pairing

Fashion design is a complex creative process that blends visual and textual expressions. Designers convey ideas through sketches, which define spatial structure and design elements, and textual descriptions, capturing material, texture, and stylistic details. In this paper, we present LOcalized Text and Sketch for fashion image generation (LOTS), an approach for compositional sketch-text based generation of complete fashion outlooks. LOTS leverages a global description with paired localized sketch + text information for conditioning and introduces a novel step-based merging strategy for diffusion adaptation.

First, a Modularized Pair-Centric representation encodes sketches and text into a shared latent space while preserving independent localized features; then, a Diffusion Pair Guidance phase integrates both local and global conditioning via attention-based guidance within the diffusion model’s multi-step denoising process. To validate our method, we build on Fashionpedia to release Sketchy, the first fashion dataset where multiple text-sketch pairs are provided per image. Quantitative results show LOTS achieves state-of-the-art image generation performance on both global and localized metrics, while qualitative examples and a human evaluation study highlight its unprecedented level of design customization.
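The merging idea — a latent that attends jointly to a global token and several localized pair tokens at every denoising step — can be caricatured in a few lines. Everything below is a toy stand-in (random embeddings, a placeholder update rule), not the LOTS architecture; it only shows the shape of attention-based guidance over multiple conditioning tokens.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8                                   # toy embedding width

global_cond = rng.normal(size=(1, d))   # embedding of the global description
local_conds = rng.normal(size=(3, d))   # one embedding per sketch-text pair

def attend(query, tokens):
    """Scaled dot-product attention of a query over conditioning tokens."""
    scores = query @ tokens.T / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w = w / w.sum()
    return w @ tokens

# Toy multi-step loop: at each "denoising" step the latent attends to the
# global token and the localized pair tokens together, merging both signals.
latent = rng.normal(size=(1, d))
tokens = np.vstack([global_cond, local_conds])
for step in range(4):
    merged = attend(latent, tokens)
    latent = latent - 0.1 * (latent - merged)  # placeholder update, not a real sampler
```

The point of the sketch is the token layout: local and global conditioning live in one attention pool, so their influence is re-weighted at every step rather than fused once up front.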

About the Speaker

Federico Girella is a third-year Ph.D. student at the University of Verona (Italy), supervised by Prof. Marco Cristani, with expected graduation in May 2026. His research involves joint representations in the Image and Language multi-modal domain, working with deep neural networks such as (Large) Vision and Language Models and Text-to-Image Generative Models. His main body of work focuses on Text-to-Image Retrieval and Generation in the Fashion domain.

ProtoMedX: Explainable Multi-Modal Prototype Learning for Bone Health Assessment

Early detection of osteoporosis and osteopenia is critical, yet most AI models for bone health rely solely on imaging and offer little transparency into their decisions. In this talk, I will present ProtoMedX, the first prototype-based framework that combines lumbar spine DEXA scans with patient clinical records to deliver accurate and inherently explainable predictions.

Unlike black-box deep networks, ProtoMedX classifies patients by comparing them to learned case-based prototypes, mirroring how clinicians reason in practice. Our method not only achieves state-of-the-art accuracy on a real NHS dataset of 4,160 patients but also provides clear, interpretable explanations aligned with the upcoming EU AI Act requirements for high-risk medical AI. Beyond bone health, this work illustrates how prototype learning can make multi-modal AI both powerful and transparent, offering a blueprint for other safety-critical domains.
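Nearest-prototype classification is easy to illustrate. The sketch below uses random toy prototypes and a naive concatenation of "imaging" and "clinical" features; the real ProtoMedX prototypes, fusion scheme, and feature extractors are learned, so treat this only as the general prototype-learning pattern.

```python
import numpy as np

rng = np.random.default_rng(2)

def fuse(img_feat, clin_feat):
    """Fuse imaging and clinical features into one patient embedding (toy: concat)."""
    return np.concatenate([img_feat, clin_feat])

# Toy "learned" prototypes, one per bone-health class, centred at 0, 1, 2.
classes = ["normal", "osteopenia", "osteoporosis"]
prototypes = {c: rng.normal(i, 0.1, size=6) for i, c in enumerate(classes)}

def classify(x):
    # Predict the class of the nearest prototype; the per-prototype distances
    # double as a case-based explanation ("this patient resembles prototype P").
    dists = {c: float(np.linalg.norm(x - p)) for c, p in prototypes.items()}
    return min(dists, key=dists.get), dists

patient = fuse(rng.normal(2.0, 0.1, 3), rng.normal(2.0, 0.1, 3))
pred, dists = classify(patient)
```

Because the decision is literally "closest learned case wins," the explanation falls out of the inference step itself, which is the property the talk highlights for regulatory settings.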

About the Speaker

Alvaro Lopez is a PhD candidate in Explainable AI at Lancaster University and an AI Research Associate at J.P. Morgan in London. His research focuses on prototype-based learning, multi-modal AI, and AI security. He has led projects on medical AI, fraud detection, and adversarial robustness, with applications ranging from healthcare to financial systems.
