Holy Tech Night 2026 2025-12-17 · 17:00

Together with Global AI Berlin

Hey guys,

it's that time again. After last year's successful Holy Tech Night, Janek is organizing another one. Three speakers will once again provide us with nerdy stuff for the evening, covering three different topics from the Microsoft world. During the breaks and afterwards, mulled wine and gingerbread will be available to see out the year in style.

Anyone who wants to is welcome to come in a suitable outfit - some of you still have an ugly sweater at home. 🫣😆

Here is the agenda:

18:00 doors open
18:15 first talk
19:00 short break
19:05 second talk
19:50 short break
20:00 third talk
20:45 networking

Our speakers and their topics this year:

Michael Greth: Psst ... Private Local AI – Turn your Mac into a Private AI Power House

As the year winds down and things get a little quieter, it’s the perfect moment to take a fresh look at what my computer (MacMini M4) can already do. In this session, you’ll discover how to turn it into a private AI powerhouse — without cloud dependencies, subscriptions, or data leaving your device. We’ll explore Small Language Models, LM Studio, MacWhisper, and AnythingLLM, and walk through practical examples for local transcription, document analysis, and fast on-device reasoning. By the end, you’ll know exactly what to play with between Christmas and New Year.
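To give a flavor of what such a local setup looks like from code: LM Studio can expose a local OpenAI-compatible HTTP server (port 1234 is its default). The sketch below is illustrative only; the model name is a placeholder for whatever model you have loaded, and nothing here leaves your machine.

```python
import json
import urllib.request

# LM Studio's local server defaults to this address; adjust if you
# changed the port. The model name is a placeholder/assumption.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "local-model") -> dict:
    """Build an OpenAI-style chat-completion payload for a local model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def ask_local_llm(prompt: str) -> str:
    """POST the prompt to the local server; no cloud, no subscription."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The same pattern works for local transcription output from MacWhisper: pipe the transcript text into the prompt and summarize it entirely on-device.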

Thomas Stensitzki: Holy Hybrid: The Last Jedi of the Exchange World

Exchange Hybrid is more than a temporary solution. It is the strategic bridge between on-premises and the cloud. In this session, you’ll learn why hybrid setups remain relevant in 2025, the common pitfalls to avoid, and how to implement best practices for a smooth migration. We will dive into authentication, connectivity, and the future of Exchange in a cloud-first world. Perfect for IT pros who want to master the middle ground without getting lost in complexity.

Markus Raatz: Fabric IQ

With the new Fabric IQ, Microsoft is taking a big leap forward: rather than IT specialists, it is the business users who define and describe how the business works using an ontology, via entities with properties and relationships between them. They then tell Microsoft Fabric where the relevant data can be found, and a data agent can be used to query and analyze any aspect of the business. And because the agent knows the entire business with its rules, not just one department, it can automatically make decisions that are almost as good as those made by its human colleagues. Not quite clear yet? No problem: this presentation has some good examples.
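To make the ontology idea concrete, here is a tiny illustrative sketch of entities with properties and relationships. This mirrors the concept only; it is not Microsoft Fabric's actual API, and the entity names are invented examples.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """One node in a business ontology: named properties plus
    named relationships pointing at other entity types."""
    name: str
    properties: dict = field(default_factory=dict)  # property -> type
    relations: dict = field(default_factory=dict)   # relation -> target entity

# Hypothetical entities a business user might define:
customer = Entity("Customer", {"id": "int", "region": "str"})
order = Entity(
    "Order",
    {"id": "int", "total": "float"},
    {"placed_by": "Customer"},
)

ontology = {e.name: e for e in (customer, order)}

# A data agent could traverse these relations to answer
# cross-entity questions ("revenue by customer region"):
assert ontology["Order"].relations["placed_by"] == "Customer"
```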

Holy Tech Night 2026

Weave-cli is a fast CLI for Weaviate, Milvus, Chroma, Qdrant, and other vector DBs that helps you view, list, create, delete, and search collections and the documents in them for development, test, and debugging purposes. Join us in this session to hear why Max decided to create it and learn about its strengths and weaknesses for powering vector search for RAG, similarity search, and other workloads.
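Under the hood, every one of these stores answers the same core query: nearest neighbors by vector similarity. The stdlib-only sketch below shows that operation in miniature; it is illustrative, not weave-cli's implementation, which talks to real stores like Weaviate or Qdrant.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

class Collection:
    """A toy in-memory stand-in for a vector-DB collection."""

    def __init__(self):
        self.docs = {}  # id -> (vector, text)

    def add(self, doc_id, vector, text):
        self.docs[doc_id] = (vector, text)

    def search(self, query, k=3):
        """Return the k documents most similar to the query vector."""
        scored = sorted(
            self.docs.items(),
            key=lambda kv: cosine(query, kv[1][0]),
            reverse=True,
        )
        return [(doc_id, text) for doc_id, (vec, text) in scored[:k]]

col = Collection()
col.add("a", (1.0, 0.0), "apples")
col.add("b", (0.0, 1.0), "bananas")
print(col.search((0.9, 0.1), k=1))  # nearest to "apples"
```

Real stores replace the linear scan with approximate nearest-neighbor indexes (e.g. HNSW), but the create/add/search surface a CLI has to cover is the same shape.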

About the presenter: Michael Maximilien (aka 'Max' or 'Dr. Max') is the founder and CEO of a "stealth" AI Agents startup in Silicon Valley. Before that he was an IBM Distinguished Engineer leading Open Source teams for AI agents and multi-agent systems. His career focused on pioneering software platforms, from early web services, to cloud computing, and more recently multi-agent systems. Max was a key OSS leader in the development of Cloud Foundry and Knative — the Kubernetes serverless platform. His expertise in distributed systems is backed by over 100 published papers and 20 patents; and his PhD in computer science in 2005 was focused on multi-agent systems. Max is also a retired Ironman triathlete and an award-winning photographer.

About the AI Alliance: The AI Alliance is an international community of researchers, developers, and organizational leaders committed to supporting and enhancing open innovation across the AI technology landscape to accelerate progress, improve safety, security, and trust in AI, and maximize benefits to people and society everywhere. Members of the AI Alliance believe that open innovation is essential to develop and achieve safe and responsible AI that benefits society rather than a select few big players.

Join the community: Sign up for the AI Alliance newsletter (check the website footer) and join our new AI Alliance Discord.

[AI Alliance] Introducing weave-cli - A fast CLI for vector search


Join us for day one of a series of virtual events to hear talks from experts on the latest developments at the intersection of Visual AI in Agriculture.

Date and Time: Oct 15 at 9 AM Pacific

Location: Virtual. Register for the Zoom.

Paved2Paradise: Scalable LiDAR Simulation for Real-World Perception

Training robust perception models for robotics and autonomy often requires massive, diverse 3D datasets. But collecting and annotating real-world LiDAR point clouds at scale is both expensive and time-consuming, especially when high-quality labels are needed. Paved2Paradise introduces a cost-effective alternative: a scalable LiDAR simulation pipeline that generates realistic, fully annotated datasets with minimal human labeling effort.

The key idea is to “factor the real world” by separately capturing background scans (e.g., fields, roads, construction sites) and object scans (e.g., vehicles, people, machinery). By intelligently combining these two sources, Paved2Paradise can synthesize a combinatorially large set of diverse training scenes. The pipeline involves four steps: (1) collecting extensive background LiDAR scans, (2) recording high-resolution scans of target objects under controlled conditions, (3) inserting objects into backgrounds with physically consistent placement and occlusion, and (4) simulating LiDAR geometry to ensure realism.
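Step (3) can be sketched in a few lines. The toy version below uses a bounding-box footprint as a crude occlusion rule (drop background points where the object now stands); the real pipeline also re-simulates LiDAR ray geometry, which this sketch omits.

```python
def insert_object(background, obj, offset):
    """Place an object scan into a background scan.

    background, obj: lists of (x, y, z) points; offset: (dx, dy) translation.
    Background points inside the object's footprint are treated as occluded.
    """
    dx, dy = offset
    placed = [(x + dx, y + dy, z) for x, y, z in obj]
    xs = [p[0] for p in placed]
    ys = [p[1] for p in placed]
    xmin, xmax, ymin, ymax = min(xs), max(xs), min(ys), max(ys)
    # Keep only background points outside the object's bounding footprint.
    kept = [
        p for p in background
        if not (xmin <= p[0] <= xmax and ymin <= p[1] <= ymax)
    ]
    return kept + placed

# Toy scene: three ground points, plus a two-point "person" 1.7 m tall.
bg = [(0, 0, 0), (5, 5, 0), (10, 0, 0)]
person = [(0, 0, 0), (0, 0, 1.7)]
scene = insert_object(bg, person, offset=(5, 5))
# The ground point at (5, 5, 0) is occluded by the inserted person.
```

Because any object scan can be placed at any offset into any background scan, the number of distinct training scenes grows combinatorially, which is exactly the leverage the pipeline is after.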

Experiments show that models trained on Paved2Paradise-generated data transfer effectively to the real world, achieving strong detection performance with far less manual annotation compared to conventional dataset collection. The approach is not only cost-efficient, but also flexible—allowing practitioners to easily expand to new object classes or domains by swapping in new background or object scans. For ML practitioners working in robotics, autonomous vehicles, or safety-critical perception, Paved2Paradise highlights a practical path toward scaling training data without scaling costs. It bridges the gap between simulation and real-world performance, enabling faster iteration and more reliable deployment of perception models.

About the Speaker

Michael A. Alcorn is a Senior Machine Learning Engineer at John Deere, where he develops deep learning models for LiDAR and RGB perception in safety-critical, real-time systems. He earned his Ph.D. in Computer Science from Auburn University, with a dissertation on improving computer vision and spatiotemporal deep neural networks, and also holds a Graduate Minor in Mathematics. Michael's research has been cited by researchers at DeepMind, Google, Meta, Microsoft, and OpenAI, among others, and his (batter|pitcher)2vec paper was a prize-winner at the 2018 MIT Sloan Sports Analytics Conference. He has also contributed machine learning code to scikit-learn and Apache Solr, and his GitHub repositories — which have collectively received over 2,100 stars — have served as starting points for research and production code at many different organizations.

MothBox: inexpensive, open-source, automated insect monitor

Dr. Andy Quitmeyer will talk about the design of an exciting new open-source science tool, the Mothbox. The Mothbox is an award-winning project for broad-scale monitoring of insects for biodiversity. It's a low-cost device, developed in harsh Panamanian jungles, that takes super-high-resolution photos to automatically ID the levels of biodiversity in forests and agriculture. After thousands of insect observations and hundreds of deployments in Panama, Peru, Mexico, Ecuador, and the US, we are now developing a new, manufacturable version to share this important tool worldwide. We will discuss the development of this device in the jungles of Panama and its importance to studying biodiversity worldwide.

About the Speaker

Dr. Andy Quitmeyer designs new ways to interact with the natural world. He has worked with large organizations like Cartoon Network, IDEO, and the Smithsonian, taught as a tenure-track professor at the National University of Singapore, and even had his research turned into a (silly) television series called “Hacking the Wild,” distributed by Discovery Networks.

Now, he spends most of his time volunteering with smaller organizations, and recently founded the field-station makerspace, Digital Naturalism Laboratories. In the rainforest of Gamboa, Panama, Dinalab blends biological fieldwork and technological crafting with a community of local and international scientists, artists, engineers, and animal rehabilitators. He currently also advises students as an affiliate professor at the University of Washington.

Foundation Models for Visual AI in Agriculture

Foundation models have enabled a new way to address tasks by benefiting from emergent capabilities in a zero-shot manner. In this talk I will discuss recent research on enabling visual AI in a zero-shot manner and via fine-tuning. Specifically, I will discuss joint work on RELOCATE, a simple training-free baseline designed to perform the challenging task of visual query localization in long videos.

To eliminate the need for task-specific training and efficiently handle long videos, RELOCATE leverages a region-based representation derived from pretrained vision models. I will also discuss joint work on enabling multi-modal large language models (MLLMs) to correctly answer prompts that require a holistic spatio-temporal understanding: MLLMs struggle to answer prompts that refer to 1) the entirety of an environment that an agent equipped with an MLLM can operate in; and simultaneously also refer to 2) recent actions that just happened and are encoded in a video clip.

However, such a holistic spatio-temporal understanding is important for agents operating in the real world. Our solution involves development of a dedicated data collection pipeline and fine-tuning of an MLLM equipped with projectors to improve both spatial understanding of an environment and temporal understanding of recent observations.

About the Speaker

Alex Schwing is an Associate Professor at the University of Illinois at Urbana-Champaign working with talented students on artificial intelligence, generative AI, and computer vision topics. He received his B.S. and diploma in Electrical Engineering and Information Technology from the Technical University of Munich in 2006 and 2008 respectively, and obtained a PhD in Computer Science from ETH Zurich in 2014. Afterwards he joined University of Toronto as a postdoctoral fellow until 2016.

His research interests are in the area of artificial intelligence, generative AI, and computer vision, where he has co-authored numerous papers on topics in scene understanding, inference and learning algorithms, deep learning, image and language processing, and generative modeling. His PhD thesis was awarded an ETH medal and his team’s research was awarded an NSF CAREER award.

Beyond the Lab: Real-World Anomaly Detection for Agricultural Computer Vision

Anomaly detection is transforming manufacturing and surveillance, but what about agriculture? Can AI actually detect plant diseases and pest damage early enough to make a difference? This talk demonstrates how anomaly detection identifies and localizes crop problems using coffee leaf health as our primary example. We'll start with the foundational theory, then examine how these models detect rust and miner damage in leaf imagery.

The session includes a comprehensive hands-on workflow using the open-source FiftyOne computer vision toolkit, covering dataset curation, patch extraction, model training, and result visualization. You'll gain both theoretical understanding of anomaly detection in computer vision and practical experience applying these techniques to agricultural challenges and other domains.
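The scoring idea behind such anomaly detectors can be sketched without any framework: treat the mean of healthy patches as "normal" and flag patches whose distance from it exceeds a threshold. Real methods use learned features rather than raw values, and the numbers below are made up for illustration, but the shape of the computation is the same.

```python
def mean_patch(patches):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(patches)
    return [sum(p[i] for p in patches) / n for i in range(len(patches[0]))]

def anomaly_score(patch, mean):
    """Euclidean distance of a patch from the 'normal' mean."""
    return sum((a - b) ** 2 for a, b in zip(patch, mean)) ** 0.5

# Toy feature vectors for healthy coffee-leaf patches (invented values):
healthy = [[0.9, 0.8, 0.1], [0.85, 0.82, 0.12], [0.88, 0.79, 0.11]]
mean = mean_patch(healthy)

rusty = [0.4, 0.3, 0.6]   # a discolored patch, far from the healthy mean
threshold = 0.3           # in practice, tuned on a validation split
flagged = anomaly_score(rusty, mean) > threshold
```

Localization follows from the same scoring: run it per patch and the high-scoring patches mark where on the leaf the rust or miner damage sits.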

About the Speaker

Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia.

Oct 15 - Visual AI in Agriculture (Day 1)

Join us for day one of a series of virtual events to hear talks from experts on the latest developments at the intersection of Visual AI in Agriculture.

Date and Time Oct 15 at 9 AM Pacific

Location Virtual. Register for the Zoom.

Paved2Paradise: Scalable LiDAR Simulation for Real-World Perception

Training robust perception models for robotics and autonomy often requires massive, diverse 3D datasets. But collecting and annotating real-world LiDAR point clouds at scale is both expensive and time-consuming, especially when high-quality labels are needed. Paved2Paradise introduces a cost-effective alternative: a scalable LiDAR simulation pipeline that generates realistic, fully annotated datasets with minimal human labeling effort.

The key idea is to “factor the real world” by separately capturing background scans (e.g., fields, roads, construction sites) and object scans (e.g., vehicles, people, machinery). By intelligently combining these two sources, Paved2Paradise can synthesize a combinatorially large set of diverse training scenes. The pipeline involves four steps: (1) collecting extensive background LiDAR scans, (2) recording high-resolution scans of target objects under controlled conditions, (3) inserting objects into backgrounds with physically consistent placement and occlusion, and (4) simulating LiDAR geometry to ensure realism.

Experiments show that models trained on Paved2Paradise-generated data transfer effectively to the real world, achieving strong detection performance with far less manual annotation compared to conventional dataset collection. The approach is not only cost-efficient, but also flexible—allowing practitioners to easily expand to new object classes or domains by swapping in new background or object scans. For ML practitioners working in robotics, autonomous vehicles, or safety-critical perception, Paved2Paradise highlights a practical path toward scaling training data without scaling costs. It bridges the gap between simulation and real-world performance, enabling faster iteration and more reliable deployment of perception models.

About the Speaker

Michael A. Alcorn is a Senior Machine Learning Engineer at John Deere\, where he develops deep learning models for LiDAR and RGB perception in safety-critical\, real-time systems. He earned his Ph.D. in Computer Science from Auburn University\, with a dissertation on improving computer vision and spatiotemporal deep neural networks\, and also holds a Graduate Minor in Mathematics. Michael’s research has been cited by researchers at DeepMind\, Google\, Meta\, Microsoft\, and OpenAI\, among others\, and his (batter\|pitcher)2vec paper was a prize-winner at the 2018 MIT Sloan Sports Analytics Conference. He has also contributed machine learning code to scikit-learn and Apache Solr\, and his GitHub repositories—which have collectively received over 2\,100 stars—have served as starting points for research and production code at many different organizations.

MothBox: inexpensive, open-source, automated insect monitor

Dr. Andy Quitmeyer will talk about the design of an exciting new open source science tool, The Mothbox. The Mothbox is an award winning project for broad scale monitoring of insects for biodiversity. It's a low cost device developed in harsh Panamanian jungles which takes super high resolution photos to then automatically ID the levels of biodiversity in forests and agriculture. After thousands of insect observations and hundreds of deployments in Panama, Peru, Mexico, Ecuador, and the US, we are now developing a new, manufacturable version to share this important tool worldwide. We will discuss the development of this device in the jungles of Panama and its importance to studying biodiversity worldwide.

About the Speaker

Dr. Andy Quitmeyer designs new ways to interact with the natural world. He has worked with large organizations like Cartoon Network, IDEO, and the Smithsonian, taught as a tenure-track professor at the National University of Singapore, and even had his research turned into a (silly) television series called “Hacking the Wild,” distributed by Discovery Networks.

Now, he spends most of his time volunteering with smaller organizations, and recently founded the field-station makerspace, Digital Naturalism Laboratories. In the rainforest of Gamboa, Panama, Dinalab blends biological fieldwork and technological crafting with a community of local and international scientists, artists, engineers, and animal rehabilitators. He currently also advises students as an affiliate professor at the University of Washington.

Foundation Models for Visual AI in Agriculture

Foundation models have enabled a new way to address tasks, by benefitting from emerging capabilities in a zero-shot manner. In this talk I will discuss recent research on enabling visual AI in a zero-shot manner and via fine-tuning. Specifically, I will discuss joint work on RELOCATE, a simple training-free baseline designed to perform the challenging task of visual query localization in long videos.

To eliminate the need for task-specific training and efficiently handle long videos, RELOCATE leverages a region-based representation derived from pretrained vision models. I will also discuss joint work on enabling multi-modal large language models (MLLMs) to correctly answer prompts that require a holistic spatio-temporal understanding: MLLMs struggle to answer prompts that refer to 1) the entirety of an environment that an agent equipped with an MLLM can operate in; and simultaneously also refer to 2) recent actions that just happened and are encoded in a video clip.

However, such a holistic spatio-temporal understanding is important for agents operating in the real world. Our solution involves development of a dedicated data collection pipeline and fine-tuning of an MLLM equipped with projectors to improve both spatial understanding of an environment and temporal understanding of recent observations.

About the Speaker

Alex Schwing is an Associate Professor at the University of Illinois at Urbana-Champaign working with talented students on artificial intelligence, generative AI, and computer vision topics. He received his B.S. and diploma in Electrical Engineering and Information Technology from the Technical University of Munich in 2006 and 2008 respectively, and obtained a PhD in Computer Science from ETH Zurich in 2014. Afterwards he joined University of Toronto as a postdoctoral fellow until 2016.

His research interests are in the area of artificial intelligence, generative AI, and computer vision, where he has co-authored numerous papers on topics in scene understanding, inference and learning algorithms, deep learning, image and language processing, and generative modeling. His PhD thesis was awarded an ETH medal and his team’s research was awarded an NSF CAREER award.

Beyond the Lab: Real-World Anomaly Detection for Agricultural Computer Vision

Anomaly detection is transforming manufacturing and surveillance, but what about agriculture? Can AI actually detect plant diseases and pest damage early enough to make a difference? This talk demonstrates how anomaly detection identifies and localizes crop problems using coffee leaf health as our primary example. We'll start with the foundational theory, then examine how these models detect rust and miner damage in leaf imagery.

The session includes a comprehensive hands-on workflow using the open-source FiftyOne computer vision toolkit, covering dataset curation, patch extraction, model training, and result visualization. You'll gain both theoretical understanding of anomaly detection in computer vision and practical experience applying these techniques to agricultural challenges and other domains.
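As a minimal preview of the patch-extraction and anomaly-scoring steps, the sketch below uses plain NumPy rather than FiftyOne's actual API: "healthy" patches define a reference, and a test patch's distance from their mean serves as its anomaly score. The rust simulation is, of course, a stand-in for real leaf imagery.

```python
import numpy as np

def extract_patches(image, patch):
    """Split an H x W image into non-overlapping patch x patch tiles."""
    h, w = image.shape
    return [image[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]

def anomaly_scores(normal_patches, test_patches):
    """Score each test patch by its distance to the mean 'healthy' patch.

    Larger scores suggest anomalies (e.g., a rust lesion on a leaf)."""
    mean = np.mean([p.ravel() for p in normal_patches], axis=0)
    return [float(np.linalg.norm(p.ravel() - mean)) for p in test_patches]

# Toy example: healthy pixels hover near 0.5; one corner is much darker.
rng = np.random.default_rng(0)
healthy = extract_patches(rng.normal(0.5, 0.01, (8, 8)), 4)
test_img = rng.normal(0.5, 0.01, (8, 8))
test_img[:4, :4] = 0.0  # simulated "rust" spot in the first patch
scores = anomaly_scores(healthy, extract_patches(test_img, 4))
```

Real anomaly-detection models replace the raw-pixel mean with learned feature statistics, but the workflow shape — curate normal data, extract patches, score deviations, visualize the outliers — is the same one the session walks through in FiftyOne.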

About the Speaker

Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia.

Oct 15 - Visual AI in Agriculture (Day 1)

Join us for day one of a series of virtual events to hear talks from experts on the latest developments at the intersection of Visual AI in Agriculture.

Date and Time: Oct 15 at 9 AM Pacific

Location: Virtual. Register for the Zoom.

Paved2Paradise: Scalable LiDAR Simulation for Real-World Perception

Training robust perception models for robotics and autonomy often requires massive, diverse 3D datasets. But collecting and annotating real-world LiDAR point clouds at scale is both expensive and time-consuming, especially when high-quality labels are needed. Paved2Paradise introduces a cost-effective alternative: a scalable LiDAR simulation pipeline that generates realistic, fully annotated datasets with minimal human labeling effort.

The key idea is to “factor the real world” by separately capturing background scans (e.g., fields, roads, construction sites) and object scans (e.g., vehicles, people, machinery). By intelligently combining these two sources, Paved2Paradise can synthesize a combinatorially large set of diverse training scenes. The pipeline involves four steps: (1) collecting extensive background LiDAR scans, (2) recording high-resolution scans of target objects under controlled conditions, (3) inserting objects into backgrounds with physically consistent placement and occlusion, and (4) simulating LiDAR geometry to ensure realism.

Experiments show that models trained on Paved2Paradise-generated data transfer effectively to the real world, achieving strong detection performance with far less manual annotation compared to conventional dataset collection. The approach is not only cost-efficient, but also flexible—allowing practitioners to easily expand to new object classes or domains by swapping in new background or object scans. For ML practitioners working in robotics, autonomous vehicles, or safety-critical perception, Paved2Paradise highlights a practical path toward scaling training data without scaling costs. It bridges the gap between simulation and real-world performance, enabling faster iteration and more reliable deployment of perception models.
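The core composition step (step 3 above) can be sketched roughly as follows. This is an illustrative simplification under assumed conventions — point clouds as (N, 3) arrays, a hypothetical `insert_object` helper — and it omits the occlusion and LiDAR-geometry modeling (step 4) that the real pipeline performs.

```python
import numpy as np

def insert_object(background, obj, position):
    """Place an object scan into a background scan at an (x, y) position.

    background, obj: (N, 3) arrays of LiDAR points. The object is shifted
    so its footprint is centered on `position` and its lowest point rests
    on the local ground height estimated from nearby background points.
    """
    obj = obj.copy()
    # Estimate local ground height from background points near the target.
    near = background[np.linalg.norm(background[:, :2] - position, axis=1) < 2.0]
    ground_z = near[:, 2].min() if len(near) else 0.0
    # Translate the object into place.
    obj[:, :2] += position - obj[:, :2].mean(axis=0)
    obj[:, 2] += ground_z - obj[:, 2].min()
    scene = np.vstack([background, obj])
    # Per-point labels come "for free" from the composition itself.
    labels = np.concatenate([np.zeros(len(background)), np.ones(len(obj))])
    return scene, labels
```

Because the pipeline knows exactly which points belong to the inserted object, every synthesized scene arrives fully annotated — which is precisely where the manual-labeling savings come from.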

About the Speaker

Michael A. Alcorn is a Senior Machine Learning Engineer at John Deere, where he develops deep learning models for LiDAR and RGB perception in safety-critical, real-time systems. He earned his Ph.D. in Computer Science from Auburn University, with a dissertation on improving computer vision and spatiotemporal deep neural networks, and also holds a Graduate Minor in Mathematics. Michael’s research has been cited by researchers at DeepMind, Google, Meta, Microsoft, and OpenAI, among others, and his (batter|pitcher)2vec paper was a prize-winner at the 2018 MIT Sloan Sports Analytics Conference. He has also contributed machine learning code to scikit-learn and Apache Solr, and his GitHub repositories—which have collectively received over 2,100 stars—have served as starting points for research and production code at many different organizations.

MothBox: inexpensive, open-source, automated insect monitor

Dr. Andy Quitmeyer will talk about the design of an exciting new open-source science tool, the MothBox. The MothBox is an award-winning project for broad-scale monitoring of insects for biodiversity. It's a low-cost device, developed in the harsh Panamanian jungle, that takes super-high-resolution photos and then automatically IDs the levels of biodiversity in forests and agriculture. After thousands of insect observations and hundreds of deployments in Panama, Peru, Mexico, Ecuador, and the US, we are now developing a new, manufacturable version to share this important tool worldwide. We will discuss the development of this device in the jungles of Panama and its importance to studying biodiversity around the world.

About the Speaker

Dr. Andy Quitmeyer designs new ways to interact with the natural world. He has worked with large organizations like Cartoon Network, IDEO, and the Smithsonian, taught as a tenure-track professor at the National University of Singapore, and even had his research turned into a (silly) television series called “Hacking the Wild,” distributed by Discovery Networks.

Now, he spends most of his time volunteering with smaller organizations, and recently founded the field-station makerspace, Digital Naturalism Laboratories. In the rainforest of Gamboa, Panama, Dinalab blends biological fieldwork and technological crafting with a community of local and international scientists, artists, engineers, and animal rehabilitators. He currently also advises students as an affiliate professor at the University of Washington.

Foundation Models for Visual AI in Agriculture

Foundation models have enabled a new way to address tasks, by benefitting from emerging capabilities in a zero-shot manner. In this talk I will discuss recent research on enabling visual AI in a zero-shot manner and via fine-tuning. Specifically, I will discuss joint work on RELOCATE, a simple training-free baseline designed to perform the challenging task of visual query localization in long videos.

To eliminate the need for task-specific training and efficiently handle long videos, RELOCATE leverages a region-based representation derived from pretrained vision models. I will also discuss joint work on enabling multi-modal large language models (MLLMs) to correctly answer prompts that require a holistic spatio-temporal understanding: MLLMs struggle to answer prompts that refer to 1) the entirety of an environment that an agent equipped with an MLLM can operate in; and simultaneously also refer to 2) recent actions that just happened and are encoded in a video clip.

However, such a holistic spatio-temporal understanding is important for agents operating in the real world. Our solution involves development of a dedicated data collection pipeline and fine-tuning of an MLLM equipped with projectors to improve both spatial understanding of an environment and temporal understanding of recent observations.

About the Speaker

Alex Schwing is an Associate Professor at the University of Illinois at Urbana-Champaign working with talented students on artificial intelligence, generative AI, and computer vision topics. He received his B.S. and diploma in Electrical Engineering and Information Technology from the Technical University of Munich in 2006 and 2008 respectively, and obtained a PhD in Computer Science from ETH Zurich in 2014. Afterwards he joined University of Toronto as a postdoctoral fellow until 2016.

His research interests are in the area of artificial intelligence, generative AI, and computer vision, where he has co-authored numerous papers on topics in scene understanding, inference and learning algorithms, deep learning, image and language processing, and generative modeling. His PhD thesis was awarded an ETH medal and his team’s research was awarded an NSF CAREER award.

Beyond the Lab: Real-World Anomaly Detection for Agricultural Computer Vision

Anomaly detection is transforming manufacturing and surveillance, but what about agriculture? Can AI actually detect plant diseases and pest damage early enough to make a difference? This talk demonstrates how anomaly detection identifies and localizes crop problems using coffee leaf health as our primary example. We'll start with the foundational theory, then examine how these models detect rust and miner damage in leaf imagery.

The session includes a comprehensive hands-on workflow using the open-source FiftyOne computer vision toolkit, covering dataset curation, patch extraction, model training, and result visualization. You'll gain both theoretical understanding of anomaly detection in computer vision and practical experience applying these techniques to agricultural challenges and other domains.

About the Speaker

Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia.

Oct 15 - Visual AI in Agriculture (Day 1)

Join us for day one of a series of virtual events to hear talks from experts on the latest developments at the intersection of Visual AI in Agriculture.

Date and Time Oct 15 at 9 AM Pacific

Location Virtual. Register for the Zoom.

Paved2Paradise: Scalable LiDAR Simulation for Real-World Perception

Training robust perception models for robotics and autonomy often requires massive, diverse 3D datasets. But collecting and annotating real-world LiDAR point clouds at scale is both expensive and time-consuming, especially when high-quality labels are needed. Paved2Paradise introduces a cost-effective alternative: a scalable LiDAR simulation pipeline that generates realistic, fully annotated datasets with minimal human labeling effort.

The key idea is to “factor the real world” by separately capturing background scans (e.g., fields, roads, construction sites) and object scans (e.g., vehicles, people, machinery). By intelligently combining these two sources, Paved2Paradise can synthesize a combinatorially large set of diverse training scenes. The pipeline involves four steps: (1) collecting extensive background LiDAR scans, (2) recording high-resolution scans of target objects under controlled conditions, (3) inserting objects into backgrounds with physically consistent placement and occlusion, and (4) simulating LiDAR geometry to ensure realism.

Experiments show that models trained on Paved2Paradise-generated data transfer effectively to the real world, achieving strong detection performance with far less manual annotation compared to conventional dataset collection. The approach is not only cost-efficient, but also flexible—allowing practitioners to easily expand to new object classes or domains by swapping in new background or object scans. For ML practitioners working in robotics, autonomous vehicles, or safety-critical perception, Paved2Paradise highlights a practical path toward scaling training data without scaling costs. It bridges the gap between simulation and real-world performance, enabling faster iteration and more reliable deployment of perception models.

About the Speaker

Michael A. Alcorn is a Senior Machine Learning Engineer at John Deere\, where he develops deep learning models for LiDAR and RGB perception in safety-critical\, real-time systems. He earned his Ph.D. in Computer Science from Auburn University\, with a dissertation on improving computer vision and spatiotemporal deep neural networks\, and also holds a Graduate Minor in Mathematics. Michael’s research has been cited by researchers at DeepMind\, Google\, Meta\, Microsoft\, and OpenAI\, among others\, and his (batter\|pitcher)2vec paper was a prize-winner at the 2018 MIT Sloan Sports Analytics Conference. He has also contributed machine learning code to scikit-learn and Apache Solr\, and his GitHub repositories—which have collectively received over 2\,100 stars—have served as starting points for research and production code at many different organizations.

MothBox: inexpensive, open-source, automated insect monitor

Dr. Andy Quitmeyer will talk about the design of an exciting new open source science tool, The Mothbox. The Mothbox is an award winning project for broad scale monitoring of insects for biodiversity. It's a low cost device developed in harsh Panamanian jungles which takes super high resolution photos to then automatically ID the levels of biodiversity in forests and agriculture. After thousands of insect observations and hundreds of deployments in Panama, Peru, Mexico, Ecuador, and the US, we are now developing a new, manufacturable version to share this important tool worldwide. We will discuss the development of this device in the jungles of Panama and its importance to studying biodiversity worldwide.

About the Speaker

Dr. Andy Quitmeyer designs new ways to interact with the natural world. He has worked with large organizations like Cartoon Network, IDEO, and the Smithsonian, taught as a tenure-track professor at the National University of Singapore, and even had his research turned into a (silly) television series called “Hacking the Wild,” distributed by Discovery Networks.

Now, he spends most of his time volunteering with smaller organizations, and recently founded the field-station makerspace, Digital Naturalism Laboratories. In the rainforest of Gamboa, Panama, Dinalab blends biological fieldwork and technological crafting with a community of local and international scientists, artists, engineers, and animal rehabilitators. He currently also advises students as an affiliate professor at the University of Washington.

Foundation Models for Visual AI in Agriculture

Foundation models have enabled a new way to address tasks, by benefitting from emerging capabilities in a zero-shot manner. In this talk I will discuss recent research on enabling visual AI in a zero-shot manner and via fine-tuning. Specifically, I will discuss joint work on RELOCATE, a simple training-free baseline designed to perform the challenging task of visual query localization in long videos.

To eliminate the need for task-specific training and efficiently handle long videos, RELOCATE leverages a region-based representation derived from pretrained vision models. I will also discuss joint work on enabling multi-modal large language models (MLLMs) to correctly answer prompts that require a holistic spatio-temporal understanding: MLLMs struggle to answer prompts that refer to 1) the entirety of an environment that an agent equipped with an MLLM can operate in; and simultaneously also refer to 2) recent actions that just happened and are encoded in a video clip.

However, such a holistic spatio-temporal understanding is important for agents operating in the real world. Our solution involves development of a dedicated data collection pipeline and fine-tuning of an MLLM equipped with projectors to improve both spatial understanding of an environment and temporal understanding of recent observations.

About the Speaker

Alex Schwing is an Associate Professor at the University of Illinois at Urbana-Champaign working with talented students on artificial intelligence, generative AI, and computer vision topics. He received his B.S. and diploma in Electrical Engineering and Information Technology from the Technical University of Munich in 2006 and 2008 respectively, and obtained a PhD in Computer Science from ETH Zurich in 2014. Afterwards he joined University of Toronto as a postdoctoral fellow until 2016.

His research interests are in the area of artificial intelligence, generative AI, and computer vision, where he has co-authored numerous papers on topics in scene understanding, inference and learning algorithms, deep learning, image and language processing, and generative modeling. His PhD thesis was awarded an ETH medal and his team’s research was awarded an NSF CAREER award.

Beyond the Lab: Real-World Anomaly Detection for Agricultural Computer Vision

Anomaly detection is transforming manufacturing and surveillance, but what about agriculture? Can AI actually detect plant diseases and pest damage early enough to make a difference? This talk demonstrates how anomaly detection identifies and localizes crop problems using coffee leaf health as our primary example. We'll start with the foundational theory, then examine how these models detect rust and miner damage in leaf imagery.

The session includes a comprehensive hands-on workflow using the open-source FiftyOne computer vision toolkit, covering dataset curation, patch extraction, model training, and result visualization. You'll gain both theoretical understanding of anomaly detection in computer vision and practical experience applying these techniques to agricultural challenges and other domains.

About the Speaker

Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia.

Oct 15 - Visual AI in Agriculture (Day 1)

Join us for day one of a series of virtual events to hear talks from experts on the latest developments at the intersection of Visual AI in Agriculture.

Date and Time Oct 15 at 9 AM Pacific

Location Virtual. Register for the Zoom.

Paved2Paradise: Scalable LiDAR Simulation for Real-World Perception

Training robust perception models for robotics and autonomy often requires massive, diverse 3D datasets. But collecting and annotating real-world LiDAR point clouds at scale is both expensive and time-consuming, especially when high-quality labels are needed. Paved2Paradise introduces a cost-effective alternative: a scalable LiDAR simulation pipeline that generates realistic, fully annotated datasets with minimal human labeling effort.

The key idea is to “factor the real world” by separately capturing background scans (e.g., fields, roads, construction sites) and object scans (e.g., vehicles, people, machinery). By intelligently combining these two sources, Paved2Paradise can synthesize a combinatorially large set of diverse training scenes. The pipeline involves four steps: (1) collecting extensive background LiDAR scans, (2) recording high-resolution scans of target objects under controlled conditions, (3) inserting objects into backgrounds with physically consistent placement and occlusion, and (4) simulating LiDAR geometry to ensure realism.

Experiments show that models trained on Paved2Paradise-generated data transfer effectively to the real world, achieving strong detection performance with far less manual annotation compared to conventional dataset collection. The approach is not only cost-efficient, but also flexible—allowing practitioners to easily expand to new object classes or domains by swapping in new background or object scans. For ML practitioners working in robotics, autonomous vehicles, or safety-critical perception, Paved2Paradise highlights a practical path toward scaling training data without scaling costs. It bridges the gap between simulation and real-world performance, enabling faster iteration and more reliable deployment of perception models.

About the Speaker

Michael A. Alcorn is a Senior Machine Learning Engineer at John Deere\, where he develops deep learning models for LiDAR and RGB perception in safety-critical\, real-time systems. He earned his Ph.D. in Computer Science from Auburn University\, with a dissertation on improving computer vision and spatiotemporal deep neural networks\, and also holds a Graduate Minor in Mathematics. Michael’s research has been cited by researchers at DeepMind\, Google\, Meta\, Microsoft\, and OpenAI\, among others\, and his (batter\|pitcher)2vec paper was a prize-winner at the 2018 MIT Sloan Sports Analytics Conference. He has also contributed machine learning code to scikit-learn and Apache Solr\, and his GitHub repositories—which have collectively received over 2\,100 stars—have served as starting points for research and production code at many different organizations.

MothBox: inexpensive, open-source, automated insect monitor

Dr. Andy Quitmeyer will talk about the design of an exciting new open source science tool, The Mothbox. The Mothbox is an award winning project for broad scale monitoring of insects for biodiversity. It's a low cost device developed in harsh Panamanian jungles which takes super high resolution photos to then automatically ID the levels of biodiversity in forests and agriculture. After thousands of insect observations and hundreds of deployments in Panama, Peru, Mexico, Ecuador, and the US, we are now developing a new, manufacturable version to share this important tool worldwide. We will discuss the development of this device in the jungles of Panama and its importance to studying biodiversity worldwide.

About the Speaker

Dr. Andy Quitmeyer designs new ways to interact with the natural world. He has worked with large organizations like Cartoon Network, IDEO, and the Smithsonian, taught as a tenure-track professor at the National University of Singapore, and even had his research turned into a (silly) television series called “Hacking the Wild,” distributed by Discovery Networks.

Now, he spends most of his time volunteering with smaller organizations, and recently founded the field-station makerspace, Digital Naturalism Laboratories. In the rainforest of Gamboa, Panama, Dinalab blends biological fieldwork and technological crafting with a community of local and international scientists, artists, engineers, and animal rehabilitators. He currently also advises students as an affiliate professor at the University of Washington.

Foundation Models for Visual AI in Agriculture

Foundation models have enabled a new way to address tasks, by benefitting from emerging capabilities in a zero-shot manner. In this talk I will discuss recent research on enabling visual AI in a zero-shot manner and via fine-tuning. Specifically, I will discuss joint work on RELOCATE, a simple training-free baseline designed to perform the challenging task of visual query localization in long videos.

To eliminate the need for task-specific training and efficiently handle long videos, RELOCATE leverages a region-based representation derived from pretrained vision models. I will also discuss joint work on enabling multi-modal large language models (MLLMs) to correctly answer prompts that require a holistic spatio-temporal understanding: MLLMs struggle to answer prompts that refer to 1) the entirety of an environment that an agent equipped with an MLLM can operate in; and simultaneously also refer to 2) recent actions that just happened and are encoded in a video clip.

However, such a holistic spatio-temporal understanding is important for agents operating in the real world. Our solution involves development of a dedicated data collection pipeline and fine-tuning of an MLLM equipped with projectors to improve both spatial understanding of an environment and temporal understanding of recent observations.

About the Speaker

Alex Schwing is an Associate Professor at the University of Illinois at Urbana-Champaign working with talented students on artificial intelligence, generative AI, and computer vision topics. He received his B.S. and diploma in Electrical Engineering and Information Technology from the Technical University of Munich in 2006 and 2008 respectively, and obtained a PhD in Computer Science from ETH Zurich in 2014. Afterwards he joined University of Toronto as a postdoctoral fellow until 2016.

His research interests are in the area of artificial intelligence, generative AI, and computer vision, where he has co-authored numerous papers on topics in scene understanding, inference and learning algorithms, deep learning, image and language processing, and generative modeling. His PhD thesis was awarded an ETH medal and his team’s research was awarded an NSF CAREER award.

Beyond the Lab: Real-World Anomaly Detection for Agricultural Computer Vision

Anomaly detection is transforming manufacturing and surveillance, but what about agriculture? Can AI actually detect plant diseases and pest damage early enough to make a difference? This talk demonstrates how anomaly detection identifies and localizes crop problems using coffee leaf health as our primary example. We'll start with the foundational theory, then examine how these models detect rust and miner damage in leaf imagery.

The session includes a comprehensive hands-on workflow using the open-source FiftyOne computer vision toolkit, covering dataset curation, patch extraction, model training, and result visualization. You'll gain both theoretical understanding of anomaly detection in computer vision and practical experience applying these techniques to agricultural challenges and other domains.

About the Speaker

Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia.

Oct 15 - Visual AI in Agriculture (Day 1)

Join us for day one of a series of virtual events to hear talks from experts on the latest developments at the intersection of Visual AI in Agriculture.

Date and Time Oct 15 at 9 AM Pacific

Location Virtual. Register for the Zoom.

Paved2Paradise: Scalable LiDAR Simulation for Real-World Perception

Training robust perception models for robotics and autonomy often requires massive, diverse 3D datasets. But collecting and annotating real-world LiDAR point clouds at scale is both expensive and time-consuming, especially when high-quality labels are needed. Paved2Paradise introduces a cost-effective alternative: a scalable LiDAR simulation pipeline that generates realistic, fully annotated datasets with minimal human labeling effort.

The key idea is to “factor the real world” by separately capturing background scans (e.g., fields, roads, construction sites) and object scans (e.g., vehicles, people, machinery). By intelligently combining these two sources, Paved2Paradise can synthesize a combinatorially large set of diverse training scenes. The pipeline involves four steps: (1) collecting extensive background LiDAR scans, (2) recording high-resolution scans of target objects under controlled conditions, (3) inserting objects into backgrounds with physically consistent placement and occlusion, and (4) simulating LiDAR geometry to ensure realism.

Experiments show that models trained on Paved2Paradise-generated data transfer effectively to the real world, achieving strong detection performance with far less manual annotation compared to conventional dataset collection. The approach is not only cost-efficient, but also flexible—allowing practitioners to easily expand to new object classes or domains by swapping in new background or object scans. For ML practitioners working in robotics, autonomous vehicles, or safety-critical perception, Paved2Paradise highlights a practical path toward scaling training data without scaling costs. It bridges the gap between simulation and real-world performance, enabling faster iteration and more reliable deployment of perception models.

About the Speaker

Michael A. Alcorn is a Senior Machine Learning Engineer at John Deere\, where he develops deep learning models for LiDAR and RGB perception in safety-critical\, real-time systems. He earned his Ph.D. in Computer Science from Auburn University\, with a dissertation on improving computer vision and spatiotemporal deep neural networks\, and also holds a Graduate Minor in Mathematics. Michael’s research has been cited by researchers at DeepMind\, Google\, Meta\, Microsoft\, and OpenAI\, among others\, and his (batter\|pitcher)2vec paper was a prize-winner at the 2018 MIT Sloan Sports Analytics Conference. He has also contributed machine learning code to scikit-learn and Apache Solr\, and his GitHub repositories—which have collectively received over 2\,100 stars—have served as starting points for research and production code at many different organizations.

MothBox: inexpensive, open-source, automated insect monitor

Dr. Andy Quitmeyer will talk about the design of an exciting new open source science tool, The Mothbox. The Mothbox is an award winning project for broad scale monitoring of insects for biodiversity. It's a low cost device developed in harsh Panamanian jungles which takes super high resolution photos to then automatically ID the levels of biodiversity in forests and agriculture. After thousands of insect observations and hundreds of deployments in Panama, Peru, Mexico, Ecuador, and the US, we are now developing a new, manufacturable version to share this important tool worldwide. We will discuss the development of this device in the jungles of Panama and its importance to studying biodiversity worldwide.

About the Speaker

Dr. Andy Quitmeyer designs new ways to interact with the natural world. He has worked with large organizations like Cartoon Network, IDEO, and the Smithsonian, taught as a tenure-track professor at the National University of Singapore, and even had his research turned into a (silly) television series called “Hacking the Wild,” distributed by Discovery Networks.

Now, he spends most of his time volunteering with smaller organizations, and recently founded the field-station makerspace, Digital Naturalism Laboratories. In the rainforest of Gamboa, Panama, Dinalab blends biological fieldwork and technological crafting with a community of local and international scientists, artists, engineers, and animal rehabilitators. He currently also advises students as an affiliate professor at the University of Washington.

Foundation Models for Visual AI in Agriculture

Foundation models have enabled a new way to address tasks, by benefitting from emerging capabilities in a zero-shot manner. In this talk I will discuss recent research on enabling visual AI in a zero-shot manner and via fine-tuning. Specifically, I will discuss joint work on RELOCATE, a simple training-free baseline designed to perform the challenging task of visual query localization in long videos.

To eliminate the need for task-specific training and efficiently handle long videos, RELOCATE leverages a region-based representation derived from pretrained vision models. I will also discuss joint work on enabling multi-modal large language models (MLLMs) to correctly answer prompts that require a holistic spatio-temporal understanding: MLLMs struggle to answer prompts that refer to 1) the entirety of an environment that an agent equipped with an MLLM can operate in; and simultaneously also refer to 2) recent actions that just happened and are encoded in a video clip.

However, such a holistic spatio-temporal understanding is important for agents operating in the real world. Our solution involves development of a dedicated data collection pipeline and fine-tuning of an MLLM equipped with projectors to improve both spatial understanding of an environment and temporal understanding of recent observations.

About the Speaker

Alex Schwing is an Associate Professor at the University of Illinois at Urbana-Champaign working with talented students on artificial intelligence, generative AI, and computer vision topics. He received his B.S. and diploma in Electrical Engineering and Information Technology from the Technical University of Munich in 2006 and 2008 respectively, and obtained a PhD in Computer Science from ETH Zurich in 2014. Afterwards he joined University of Toronto as a postdoctoral fellow until 2016.

His research interests are in the area of artificial intelligence, generative AI, and computer vision, where he has co-authored numerous papers on topics in scene understanding, inference and learning algorithms, deep learning, image and language processing, and generative modeling. His PhD thesis was awarded an ETH medal and his team’s research was awarded an NSF CAREER award.

Beyond the Lab: Real-World Anomaly Detection for Agricultural Computer Vision

Anomaly detection is transforming manufacturing and surveillance, but what about agriculture? Can AI actually detect plant diseases and pest damage early enough to make a difference? This talk demonstrates how anomaly detection identifies and localizes crop problems, using coffee leaf health as our primary example. We'll start with the foundational theory, then examine how these models detect rust and leaf-miner damage in leaf imagery.

The session includes a comprehensive hands-on workflow using the open-source FiftyOne computer vision toolkit, covering dataset curation, patch extraction, model training, and result visualization. You'll gain both theoretical understanding of anomaly detection in computer vision and practical experience applying these techniques to agricultural challenges and other domains.
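The localization idea behind patch-based anomaly detection can be illustrated with a minimal nearest-neighbor sketch: collect features from healthy-leaf patches, then score each test patch by its distance to the closest healthy exemplar. This is a simplified scheme for illustration, not FiftyOne's API or the specific model used in the session.

```python
import numpy as np

def patch_anomaly_scores(normal_patches, test_patches):
    """Score each test patch by distance to the nearest normal patch.

    normal_patches: (N, D) features from healthy-leaf patches;
    test_patches: (M, D) features from a leaf under inspection.
    High scores mark patches that look unlike anything healthy.
    """
    # Pairwise squared Euclidean distances, shape (M, N).
    d2 = ((test_patches[:, None, :] - normal_patches[None, :, :]) ** 2).sum(-1)
    # Anomaly score = distance to the closest "normal" exemplar.
    return np.sqrt(d2.min(axis=1))

# Toy example: healthy patches cluster near the origin; the second
# test patch sits far away and receives a much higher score.
normal = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
test = np.array([[0.05, 0.05],   # looks healthy
                 [2.0, 2.0]])    # candidate rust/miner damage
scores = patch_anomaly_scores(normal, test)
```

Mapping the per-patch scores back onto their image coordinates yields the localization heatmap: high-scoring patches mark where the damage is, not just that it exists.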

About the Speaker

Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia.

Oct 15 - Visual AI in Agriculture (Day 1)

Join us for day one of a series of virtual events to hear talks from experts on the latest developments at the intersection of Visual AI in Agriculture.

Date and Time: Oct 15 at 9 AM Pacific

Location: Virtual. Register for the Zoom link.

Paved2Paradise: Scalable LiDAR Simulation for Real-World Perception

Training robust perception models for robotics and autonomy often requires massive, diverse 3D datasets. But collecting and annotating real-world LiDAR point clouds at scale is both expensive and time-consuming, especially when high-quality labels are needed. Paved2Paradise introduces a cost-effective alternative: a scalable LiDAR simulation pipeline that generates realistic, fully annotated datasets with minimal human labeling effort.

The key idea is to “factor the real world” by separately capturing background scans (e.g., fields, roads, construction sites) and object scans (e.g., vehicles, people, machinery). By intelligently combining these two sources, Paved2Paradise can synthesize a combinatorially large set of diverse training scenes. The pipeline involves four steps: (1) collecting extensive background LiDAR scans, (2) recording high-resolution scans of target objects under controlled conditions, (3) inserting objects into backgrounds with physically consistent placement and occlusion, and (4) simulating LiDAR geometry to ensure realism.

Experiments show that models trained on Paved2Paradise-generated data transfer effectively to the real world, achieving strong detection performance with far less manual annotation compared to conventional dataset collection. The approach is not only cost-efficient, but also flexible—allowing practitioners to easily expand to new object classes or domains by swapping in new background or object scans. For ML practitioners working in robotics, autonomous vehicles, or safety-critical perception, Paved2Paradise highlights a practical path toward scaling training data without scaling costs. It bridges the gap between simulation and real-world performance, enabling faster iteration and more reliable deployment of perception models.
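Step 3 of the pipeline above, inserting an object scan into a background scan with physically consistent placement, can be sketched in a few lines of numpy. This is a simplified illustration under assumed conventions (XYZ point arrays, a roughly flat local ground); the real pipeline additionally models occlusion and re-simulates LiDAR geometry (step 4), which are omitted here.

```python
import numpy as np

def insert_object(background, obj, location):
    """Place an object point cloud into a background scan (simplified).

    background: (N, 3) XYZ points of the scene.
    obj: (M, 3) XYZ points of the object scanned in isolation.
    location: (x, y) target position on the ground plane.
    """
    obj = obj.copy()
    # Center the object horizontally, then rest its lowest point on
    # the local ground height near the target location.
    obj[:, :2] -= obj[:, :2].mean(axis=0)
    near = np.linalg.norm(background[:, :2] - location, axis=1) < 2.0
    ground_z = background[near, 2].min() if near.any() else 0.0
    obj[:, 2] += ground_z - obj[:, 2].min()
    obj[:, :2] += location
    return np.vstack([background, obj])

# Toy example: a flat 5x5 ground grid plus a two-point "object".
ground = np.array([[x, y, 0.0] for x in range(5) for y in range(5)])
obj = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
scene = insert_object(ground, obj, np.array([2.0, 2.0]))
```

Because backgrounds and objects are captured separately, any object can be dropped at any valid location in any background, which is what makes the set of synthesizable training scenes combinatorially large.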

About the Speaker

Michael A. Alcorn is a Senior Machine Learning Engineer at John Deere, where he develops deep learning models for LiDAR and RGB perception in safety-critical, real-time systems. He earned his Ph.D. in Computer Science from Auburn University, with a dissertation on improving computer vision and spatiotemporal deep neural networks, and also holds a Graduate Minor in Mathematics. Michael’s research has been cited by researchers at DeepMind, Google, Meta, Microsoft, and OpenAI, among others, and his (batter|pitcher)2vec paper was a prize-winner at the 2018 MIT Sloan Sports Analytics Conference. He has also contributed machine learning code to scikit-learn and Apache Solr, and his GitHub repositories—which have collectively received over 2,100 stars—have served as starting points for research and production code at many different organizations.

MothBox: inexpensive, open-source, automated insect monitor

Dr. Andy Quitmeyer will talk about the design of an exciting new open-source science tool, the Mothbox. The Mothbox is an award-winning project for broad-scale monitoring of insects for biodiversity. It's a low-cost device, developed in harsh Panamanian jungles, that takes super-high-resolution photos and then automatically IDs the levels of biodiversity in forests and agriculture. After thousands of insect observations and hundreds of deployments in Panama, Peru, Mexico, Ecuador, and the US, we are now developing a new, manufacturable version to share this important tool worldwide. We will discuss the development of this device in the jungles of Panama and its importance to the study of biodiversity.

About the Speaker

Dr. Andy Quitmeyer designs new ways to interact with the natural world. He has worked with large organizations like Cartoon Network, IDEO, and the Smithsonian, taught as a tenure-track professor at the National University of Singapore, and even had his research turned into a (silly) television series called “Hacking the Wild,” distributed by Discovery Networks.

Now, he spends most of his time volunteering with smaller organizations, and recently founded the field-station makerspace, Digital Naturalism Laboratories. In the rainforest of Gamboa, Panama, Dinalab blends biological fieldwork and technological crafting with a community of local and international scientists, artists, engineers, and animal rehabilitators. He currently also advises students as an affiliate professor at the University of Washington.

Important: Register on the event website to receive the joining link (an RSVP on Meetup alone will NOT receive the joining link).

Description: Welcome to the AI virtual seminar series, in collaboration with Docker. Join us for deep dive tech talks on AI, GenAI, LLMs, Agentic AI, hands-on experiences on code labs, workshops, and networking with speakers & fellow developers from all over the world.

Tech Talk: Going beyond the chatbot with event-driven agents
Speaker: Michael Irwin (Principal Engineer at Docker)
Abstract: Agentic systems can use models and tools to perform many different types of tasks. However, many of them are launched by a user opening a browser tab to a chatbot and entering a prompt that then initiates the action. While that's still valid, it is quite limiting. What if we could go further? In this talk, we'll take a real-life scenario in which PRs with completed work are opened against training/workshop repos, and create an agent that analyzes each PR to determine whether it can be automatically closed. To automate the process, we'll connect the agent to GitHub webhooks. We'll walk through the steps, from model and tool selection to writing the code to final packaging. You'll be introduced to some of Docker's tooling, including the Docker Model Runner and MCP Gateway. Whether you're new to AI development or seasoned, there will be something for everyone!
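The event-driven pattern described in the abstract starts with a webhook handler that filters incoming GitHub pull-request events before handing candidates to the agent. A minimal sketch of that filtering step is shown below; the function name and rules are made up for illustration (the payload fields follow GitHub's documented `pull_request` webhook schema), and in the talk's scenario an LLM-backed agent performs the actual analysis.

```python
import json

def is_agent_candidate(event: dict) -> bool:
    """Decide whether a PR webhook event should be passed to the agent.

    Expects a GitHub 'pull_request' webhook payload. The filtering
    rules here are hypothetical: only newly opened PRs targeting the
    main branch are handed off for analysis.
    """
    if event.get("action") != "opened":
        return False
    pr = event.get("pull_request", {})
    return pr.get("base", {}).get("ref") == "main"

# A minimal payload of the kind GitHub would POST to the webhook URL.
payload = json.loads(
    '{"action": "opened", "pull_request": {"base": {"ref": "main"}}}'
)
candidate = is_agent_candidate(payload)
```

In a full deployment this function would sit behind an HTTP endpoint registered as the repo's webhook URL, so the agent reacts to PRs as they are opened rather than waiting for a user to type a prompt.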

More upcoming sessions:

Local and Global AI Community on Discord. Join us on Discord for the local and global AI tech community:

  • Events chat: chat and connect with speakers and global and local attendees;
  • Learning AI: events, learning materials, study groups;
  • Startups: innovation, projects collaborations, founders/co-founders;
  • Jobs and Careers: job openings, post resumes, hiring managers
AI Webinar with Docker - Building event-driven agents

Important: Register on the event website to receive the joining link (an RSVP on Meetup alone will NOT receive the joining link). If you can't make it to the live session, register anyway to receive the recordings/credits.

More upcoming sessions:

Docker AI Webinar (Ep 2) - Building event-driven agents