talk-data.com
Activities & events
Feb 5 - AI, ML and Computer Vision Meetup
2026-02-05 · 17:00
Join our virtual Meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision. Feb 5, 2026, 9-11 AM Pacific, online. Register for the Zoom!

Unlocking Visual Anomaly Detection: Navigating Challenges and Pioneering with Vision-Language Models

Visual anomaly detection (VAD) is pivotal for ensuring quality in manufacturing, medical imaging, and safety inspections, yet it continues to face challenges such as data scarcity, domain shifts, and the need for precise localization and reasoning. This seminar explores VAD fundamentals, core challenges, and recent advancements leveraging vision-language models and multimodal large language models (MLLMs). We contrast CLIP-based methods for efficient zero/few-shot detection with MLLM-driven reasoning for explainable, threshold-free outcomes. Drawing from recent studies, we highlight emerging trends, benchmarks, and future directions toward building adaptable, real-world VAD systems. This talk is designed for researchers and practitioners interested in AI-driven inspection and next-generation multimodal approaches.

About the Speaker: Hossein Kashiani is a fourth-year Ph.D. student at Clemson University. His research focuses on developing generalizable and trustworthy AI systems, with publications in top venues such as CVPR, WACV, ICIP, IJCB, and TBIOM. His work spans diverse applications, including anomaly detection, media forensics, biometrics, healthcare, and visual perception.
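As context for the CLIP-based zero/few-shot methods this talk contrasts with MLLM-driven reasoning, here is a minimal sketch of prompt-based anomaly scoring with an off-the-shelf CLIP model via Hugging Face transformers. The prompt pair, the "metal nut" category, and the input file name are illustrative assumptions, not details from the talk.

```python
# Minimal zero-shot visual anomaly scoring with CLIP: score an image by its
# similarity to "normal" vs. "anomalous" text prompts. Prompts and the
# object category are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = [
    "a photo of a flawless metal nut",  # normal state
    "a photo of a damaged metal nut",   # anomalous state
]
image = Image.open("inspection_frame.png")  # hypothetical input image

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Softmax over image-text similarities; index 1 is the "anomalous" prompt,
# so probs[0, 1] acts as a zero-shot anomaly score in [0, 1].
probs = outputs.logits_per_image.softmax(dim=-1)
print(f"anomaly score: {probs[0, 1].item():.3f}")
```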
Data-Centric Lessons To Improve Speech-Language Pretraining

Spoken Question-Answering (SQA) is a core capability for useful and interactive artificial intelligence systems. Recently, several speech-language models (SpeechLMs) have been released with a specific focus on improving their SQA performance. However, a lack of controlled ablations of pretraining data processing and curation makes it challenging to understand what factors account for performance, despite substantial gains from similar studies in other data modalities. In this work, we address this gap by conducting a data-centric exploration for pretraining SpeechLMs, focusing on three research questions fundamental to speech-language pretraining data. We apply the insights from our controlled data-centric ablations to pretrain a 3.8B-parameter SpeechLM, called SpeLangy, that outperforms models up to 3x larger by 10.2% absolute. We hope our findings highlight the impact of effective data curation for speech-language pretraining and guide future data-centric exploration in SpeechLMs.

About the Speaker: Vishaal Udandarao is a third-year ELLIS PhD student, jointly working with Matthias Bethge at the University of Tuebingen and Samuel Albanie at the University of Cambridge/Google DeepMind. He is also part of the International Max Planck Research School for Intelligent Systems. He is mainly interested in understanding the generalisation properties of foundation models, both vision-language models (VLMs) and large multimodal models (LMMs), through the lens of their pre-training and test data distributions. His research is funded by a Google PhD Fellowship in Machine Intelligence.

A Practical Pipeline for Synthetic Data with Nano Banana Pro + FiftyOne

Most computer-vision failures come from the rare cases: the dark corners, odd combinations, and edge conditions we never capture often enough in real datasets. In this session, we walk through a practical end-to-end pipeline for generating targeted synthetic data using Google’s Nano Banana Pro and managing it with FiftyOne. We’ll explore how to translate dataset gaps into generation prompts, create thousands of high-quality synthetic images, automatically enrich them with metadata, and bring everything into FiftyOne for inspection, filtering, and validation. By the end, you’ll understand how to build a repeatable synthetic-first workflow that closes real vision gaps and improves model performance on the scenarios that matter most. (A sketch of the FiftyOne side of such a loop appears after this listing.)

About the Speaker: Adonai Vera is a Machine Learning Engineer and DevRel at Voxel51, with over 7 years of experience building computer vision and machine learning models using TensorFlow, Docker, and OpenCV. He started as a software developer, moved into AI, led teams, and served as CTO. Today, he connects code and community to build open, production-ready AI, making technology simple, accessible, and reliable.

Making Computer Vision Models Faster: An Introduction to TensorRT Optimization

Modern computer vision applications demand real-time performance, yet many deep learning models struggle with high latency during deployment. This talk introduces how TensorRT can significantly accelerate inference by applying optimizations such as layer fusion, precision calibration, and efficient memory management. Attendees will learn the core concepts behind TensorRT, how it integrates into existing CV pipelines, and how to measure and benchmark improvements. Through practical examples and performance comparisons, the session will demonstrate how substantial speedups can be achieved with minimal model-accuracy loss. By the end, participants will understand when and how to apply TensorRT to make their CV models production-ready. (A build sketch appears after this listing.)

About the Speaker: Tushar Gadhiya is a Technical Lead at Infocusp Innovations, specialising in deep learning, computer vision, graph learning, and agentic AI. His experience spans academic research, as a PhD holder, and industry work, where he has contributed to multiple patents.
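To make the synthetic-data workflow from the Nano Banana Pro + FiftyOne talk concrete, here is a minimal sketch of the FiftyOne side: loading generated images with their prompts as metadata, then filtering a slice for inspection. The generate_image function and the prompts are hypothetical stand-ins; the talk does not publish Nano Banana Pro's actual API.

```python
# Sketch of a synthetic-data curation loop in FiftyOne. The image generator
# is a hypothetical placeholder; only the FiftyOne calls are real API.
import fiftyone as fo
from fiftyone import ViewField as F

def generate_image(prompt: str, out_path: str) -> None:
    """Hypothetical stand-in for a Nano Banana Pro generation call."""
    raise NotImplementedError

# Prompts derived from known dataset gaps (illustrative examples)
gap_prompts = [
    "a pedestrian crossing an unlit road at night in heavy rain",
    "a cyclist partially occluded by a delivery van at dusk",
]

samples = []
for i, prompt in enumerate(gap_prompts):
    filepath = f"/data/synth/img_{i:05d}.png"
    generate_image(prompt, filepath)
    sample = fo.Sample(filepath=filepath)
    sample["prompt"] = prompt        # keep generation metadata on the sample
    sample["source"] = "synthetic"
    samples.append(sample)

dataset = fo.Dataset("synthetic-gap-fill")
dataset.add_samples(samples)

# Filter to one slice of the synthetic data and open the App to inspect it
night_view = dataset.match(F("prompt").contains_str("night"))
session = fo.launch_app(night_view)
```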
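As a companion to the TensorRT talk, here is a minimal sketch of building an FP16 engine from an ONNX model with the TensorRT Python API (version 8+ assumed; file names are placeholders). Layer fusion is applied automatically during the build, and the FP16 flag is one of the precision options the abstract's "precision calibration" refers to; full INT8 calibration would additionally require a calibrator and is omitted here.

```python
# Minimal TensorRT engine build from ONNX (Python API, TensorRT 8+ assumed).
# File names are placeholders; INT8 calibration is omitted for brevity.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)

# Parse the trained model; layer fusion happens automatically at build time
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # reduced precision for faster inference
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB

# Serialize the optimized engine for deployment
engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```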
From Data to Deployment: Building Production AI Systems
2026-01-22 · 18:00
PyData Wolverhampton Launch Event: From Data to Deployment

Join us for the inaugural PyData Wolverhampton meetup! We're bringing together data scientists, engineers, AI practitioners, and anyone interested in Python and data science.

What to Expect: This event features two practical talks on building AI systems that work in production:

Talk 1: "From Demos to Deployed: Building AI Systems That Work, and Work Right"
Speaker: Stephen Toriola (Software & AI Engineer at Compare the Market)
Explore how AI has evolved from simple demos to production systems, what makes AI work in real-world applications, and how to build responsibly.

Talk 2: "Building AI Right: Ethics and Implementation in Practice"
Speaker: Nazeh Abel (AI Consultant at Medallion Technologies)
Practical insights on implementing AI ethically, common pitfalls to avoid, and making better decisions when building AI systems.

Agenda:
What to Bring: Just yourself! No laptop or preparation needed. Bring business cards if you'd like to connect with other attendees.

Food & Drinks: Free pizza and soft drinks provided.

How to Find Us: University of Wolverhampton Science Park, Wolverhampton WV10 9RU
By Public Transport: From Wolverhampton train station, walk 5 minutes to the bus station. Take bus 32 or 33, ride for 7 stops (approximately 12 minutes), and get off at Stafford Road. Walk 5 minutes to the Science Park.
By Taxi: 7-minute drive from Wolverhampton train/bus station.
By Car: Free parking available on-site.
Accessibility: The venue is on the ground floor and fully accessible. We'll have PyData signage at the entrance to help you find us.

Who Should Attend: Data scientists, data analysts, machine learning engineers, software developers, students, and anyone interested in Python, data science, or AI. All skill levels welcome!

PyData Wolverhampton is part of the global PyData network, supported by NumFOCUS. We're building a community for data professionals in the Black Country. Follow us on LinkedIn: [PyData Wolverhampton]

See you there!
From Experiment to Enterprise: Operationalizing AI in 2026
2025-12-11 · 16:00
By 2026, the AI landscape has shifted from experimentation to expectation. Enterprises are no longer asking whether they should adopt AI—they’re asking how to operationalize it responsibly, reliably, and at scale. The organizations pulling ahead are the ones investing in infrastructure that treats AI not as a lab experiment, but as a mission-critical capability.

In this webinar, we’ll break down what it really takes to run AI in production today—where models change fast, data moves continuously, and stakeholders demand both innovation and accountability. We’ll explore what “enterprise-grade AI” looks like in practice, how to bake governance and observability into every layer of your architecture, and why a modern API platform is emerging as the backbone of real-world AI systems.

What you’ll learn:
Dec 4 - AI, ML and Computer Vision Meetup
2025-12-04 · 20:00
Join the virtual Meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision. Dec 4, 2025, 9:00-11:00 AM Pacific.

Benchmarking Vision-Language Models for Autonomous Driving Safety

This workshop introduces a unified framework for evaluating how vision-language models handle driving safety. Using an enhanced BDDOIA dataset with scene, weather, and action labels, we benchmark models like Gemini, FastVLM, and Qwen within FiftyOne. Our results show consistent blind spots where models misjudge unsafe situations, highlighting the need for safer and more interpretable AI systems for autonomous driving.

About the Speaker: Adonai Vera is a Machine Learning Engineer and DevRel at Voxel51, with over 7 years of experience building computer vision and machine learning models using TensorFlow, Docker, and OpenCV. He started as a software developer, moved into AI, led teams, and served as CTO. Today, he connects code and community to build open, production-ready AI, making technology simple, accessible, and reliable.

TrueRice: AI-Powered Visual Quality Control for Rice Grains and Beyond at Scale

Agriculture remains one of the most under-digitized industries, yet grain quality control defines pricing, trust, and livelihoods for millions. TrueRice is an AI-powered analyzer that turns a flatbed scanner into a high-precision, 30-second QC engine, replacing the 2+ hours and subjectivity of manual quality inspection. Built on a state-of-the-art 8K image processing pipeline with SAHI (Slicing Aided Hyper Inference), it detects fine-grained kernel defects at scale with high accuracy across grain size, shape, breakage, discoloration, and chalkiness. Now being extended to maize and coffee, TrueRice showcases how cross-crop transfer learning and frugal AI engineering can scale precision QC for farmers, millers, and exporters. This talk will cover the design principles, model architecture choices, and a live demonstration, while addressing challenges in data variability, regulatory standards, and cross-crop adaptation. (A sketch of SAHI-style sliced inference appears after this listing.)

About the Speaker: Sai Jeevan Puchakayala is an interdisciplinary AI/ML consultant, researcher, and Tech Lead at Sustainable Living Lab (SL2) India, where he drives development of applied AI solutions for agriculture, climate resilience, and sustainability. He led the engineering of TrueRice, an award-winning grain quality analyzer that won India’s first International Agri Hackathon 2025.

WeedNet: A Foundation Model Based Global-to-Local AI Approach for Real-Time Weed Species Identification and Classification

Early and accurate weed identification is critical for effective management, yet current AI-based approaches face challenges due to limited expert-verified datasets and the high variability in weed morphology across species and growth stages. We present WeedNet, a global-scale weed identification model designed to recognize a wide range of species, including noxious and invasive plants. WeedNet is an end-to-end real-time pipeline that integrates self-supervised pretraining, fine-tuning, and trustworthiness strategies to improve both accuracy and reliability. Building on this foundation, we introduce a Global-to-Local strategy: while the global WeedNet model provides broad generalization, we fine-tune local variants such as Iowa WeedNet to target region-specific weed communities in the U.S. Midwest.
Our evaluation addresses both intra-species diversity (different growth stages) and inter-species similarity (look-alike species), ensuring robust performance under real-world variability. We further validate WeedNet on images captured by drones and ground rovers, demonstrating its potential for deployment on robotic platforms. Beyond field applications, we integrate a conversational AI to enable practical decision-support tools for farmers, agronomists, researchers, and land managers worldwide. These advances position WeedNet as a foundational model for intelligent, scalable, and regionally adaptable weed management and ecological conservation.

About the Speaker: Timilehin Ayanlade is a Ph.D. candidate in the Self-aware Complex Systems Laboratory at Iowa State University, where his research focuses on developing machine learning and computer vision methods for agricultural applications. His work integrates multimodal data across ground-based sensing, UAVs, and satellites with advanced AI models to tackle challenges in weed identification, crop monitoring, and crop yield prediction.

Memory Matters: Early Alzheimer’s Detection with AI-Powered Mobile Tools

Advancements in artificial intelligence and mobile technology are transforming the landscape of neurodegenerative disease detection, offering new hope for early intervention in Alzheimer’s. By integrating machine learning algorithms with everyday mobile devices, we are entering a new era of accessible, scalable, and non-invasive tools for early Alzheimer’s detection. In this talk, we’ll cover the potential of AI in health care systems and ethical considerations, plus a deep dive into the architecture, model, datasets, and framework.

About the Speaker: Reetam Biswas has more than 18 years of experience in the IT industry as a software architect and currently works on AI.
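For readers curious about the SAHI technique the TrueRice talk builds on, here is a minimal sketch of sliced inference with the open-source sahi package: tile a very large image, run a detector on each tile, and merge the predictions. The YOLOv8 weights path and the input image are hypothetical placeholders, not TrueRice's actual model or data.

```python
# Minimal sliced inference with SAHI: tile a large image, run detection on
# each tile, and merge predictions. Model path and image are placeholders.
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov8",
    model_path="kernel_defects.pt",  # hypothetical weights
    confidence_threshold=0.4,
    device="cuda:0",
)

# Slice the high-resolution scan into overlapping 512x512 tiles so that
# small objects (e.g., individual kernels) stay large enough to detect
result = get_sliced_prediction(
    "rice_scan_8k.png",
    detection_model,
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

# Merged, full-image-coordinate predictions
for pred in result.object_prediction_list:
    print(pred.category.name, round(pred.score.value, 3), pred.bbox.to_xyxy())
```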
Dec 4 - AI, ML and Computer Vision Meetup
|
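For readers who want to try the evaluation workflow from the driving-safety talk, the sketch below shows how model verdicts can be scored and inspected in FiftyOne. It is illustrative only, not the speakers' actual pipeline: the dataset directory, the `vlm_pred` field name, and the safe/unsafe labels are assumptions, and populating the prediction field would mean prompting each VLM per image and parsing its answer.

```python
import fiftyone as fo
from fiftyone import ViewField as F

# Hypothetical image-classification dataset: driving frames labeled
# "safe"/"unsafe" in the standard FiftyOne classification layout.
dataset = fo.Dataset.from_dir(
    dataset_dir="/data/bddoia_enhanced",
    dataset_type=fo.types.FiftyOneImageClassificationDataset,
    name="bddoia-safety",
)

# Assume a VLM's parsed verdict has already been stored on each sample, e.g.:
#   sample["vlm_pred"] = fo.Classification(label="unsafe")
#   sample.save()

# Compare predictions to ground truth; eval_key records per-sample correctness.
results = dataset.evaluate_classifications(
    "vlm_pred",
    gt_field="ground_truth",
    eval_key="vlm_eval",
)
results.print_report()

# Open the App on only the misjudged frames to hunt for systematic blind spots,
# e.g. by then filtering on the scene and weather labels the abstract mentions.
session = fo.launch_app(dataset.match(F("vlm_eval") == False))
```

Comparing several models (Gemini, FastVLM, Qwen, and so on) is then a matter of repeating the evaluation with one prediction field and one eval_key per model.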
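SAHI, named in the TrueRice abstract, is an open-source library that slices a large image into overlapping tiles, runs a detector on each tile, and merges the tile-level detections back into full-image coordinates, which is what keeps kernel-sized defects detectable in an 8K scan. Below is a minimal sketch under stated assumptions: the weights file, image path, defect class names, and slice sizes are placeholders, not details of the TrueRice system.

```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

# Placeholder detector weights; TrueRice's actual model is not public.
detection_model = AutoDetectionModel.from_pretrained(
    model_type="ultralytics",            # any SAHI-supported detector family
    model_path="weights/kernel_defects.pt",
    confidence_threshold=0.4,
    device="cuda:0",                     # or "cpu"
)

# Slice the 8K scan into overlapping tiles, detect per tile, and merge results.
result = get_sliced_prediction(
    "scans/rice_tray_8k.png",
    detection_model,
    slice_height=1024,
    slice_width=1024,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

# Each merged prediction carries a class (e.g. broken, chalky, discolored),
# a confidence score, and a bounding box in full-image coordinates.
for pred in result.object_prediction_list:
    print(pred.category.name, round(pred.score.value, 3), pred.bbox.to_xyxy())
```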