talk-data.com
Activities & events
Nov 13 - Women in AI
2025-11-13 · 17:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on November 13. Date and location: Nov 13, 2025, 9 AM Pacific, online. Register for the Zoom!

Copy, Paste, Customize! The Template Approach to AI Engineering

Most AI implementations fail because teams treat prompt engineering as ad-hoc experimentation rather than systematic software engineering, producing unreliable systems that don't scale beyond proofs of concept. This talk demonstrates engineering practices that enable reliable AI deployment: standardized prompt templates, systematic validation frameworks, and production observability. Drawing on experience developing fillable prompt templates currently being validated in production environments that process thousands of submissions, I'll share how Infrastructure as Code principles apply to LLM workflows, why evaluation metrics such as BLEU are critical for production reliability, and how systematic failure analysis prevents costly deployment issues. Attendees will walk away with an understanding of practical frameworks for improving AI system reliability and specific strategies for building more consistent, scalable AI implementations.

About the speaker: Jeanne McClure is a postdoctoral scholar at NC State's Data Science and AI Academy with expertise in systematic AI implementation and validation. Her research transforms experimental AI tools into reliable production systems through standardized prompt templates, rigorous testing frameworks, and systematic failure analysis. She holds a PhD in Learning, Design and Technology with additional graduate work in data science.

Multimodality with Biases: Understand and Evaluate VLMs for Autonomous Driving with FiftyOne

Do your VLMs really see danger? With FiftyOne, I'll show you how to understand and evaluate vision-language models for autonomous driving, making risk and bias visible in seconds. We'll compare models on the same scenes, reveal failures and edge cases, and you'll see a simple dashboard to decide which data to curate and what to adjust. You'll leave with a clear, practical, and replicable method to raise the bar for safety.

About the speaker: Paula Ramos has a PhD in Computer Vision and Machine Learning and more than 20 years of experience in the technology field. Since the early 2000s in Colombia, she has been developing novel integrated engineering technologies, mainly in computer vision, robotics, and machine learning applied to agriculture.

The Heart of Innovation: Women, AI, and the Future of Healthcare

This session explores how artificial intelligence is transforming healthcare by enhancing diagnosis, treatment, and patient outcomes. It highlights the importance of diverse and female perspectives in shaping AI solutions that are ethical, empathetic, and human-centered. We will discuss key applications, current challenges, and the future potential of AI in medicine. It's a forward-looking conversation about how innovation can build a healthier world.

About the speaker: Karen Sanchez is a Postdoctoral Researcher at the Center of Excellence for Generative AI at King Abdullah University of Science and Technology (KAUST), Saudi Arabia. Her research focuses on AI for Science, spanning computer vision, video understanding, and privacy-preserving machine learning. She is also an active advocate for diversity and outreach in AI, contributing to global initiatives that connect researchers and amplify underrepresented voices in technology.

Language Diffusion Models

Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). This talk challenges that notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. LLaDA models distributions through a forward data-masking process and a reverse process, parameterized by a vanilla Transformer that predicts masked tokens. Optimizing a likelihood bound provides a principled generative approach to probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs such as LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue.

About the speaker: Jayita Bhattacharyya is an AI/ML nerd with a blend of technical speaking and hackathon wizardry, applying tech to solve real-world problems. Her current focus is generative AI, helping software teams incorporate AI into transforming software engineering.
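The forward data-masking process at the heart of LLaDA can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the token list, the `<M>` sentinel, and the function name are assumptions. Each token is masked independently with probability t, and the reverse model (a vanilla Transformer in LLaDA) is trained to predict the masked positions.

```python
import random

MASK = "<M>"  # sentinel for a masked token (illustrative; not LLaDA's actual vocabulary id)

def forward_mask(tokens, t, rng=None):
    """Forward process: mask each token independently with probability t.

    During training, t is drawn uniformly from (0, 1]; the reverse model is
    then trained to predict the masked tokens, optimizing a bound on the
    data likelihood. (Hypothetical helper for illustration only.)
    """
    rng = rng or random.Random(0)
    return [MASK if rng.random() < t else tok for tok in tokens]

tokens = ["the", "cat", "sat", "on", "the", "mat"]
print(forward_mask(tokens, t=0.5))   # roughly half the tokens replaced by <M>
print(forward_mask(tokens, t=1.0))   # fully masked: the starting point for generation
```

At t=1 every token is masked, which is where generation begins; the reverse process then progressively fills in tokens, rather than emitting them left-to-right as an ARM would.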
Nov 13 - Women in AI
|
|
Nov 13 - Women in AI
2025-11-13 · 17:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on November 13. Date and Location Nov 13, 2025 9 AM Pacific Online. Register for the Zoom! Copy, Paste, Customize! The Template Approach to AI Engineering Most AI implementations fail because teams treat prompt engineering as ad-hoc experimentation rather than systematic software engineering, leading to unreliable systems that don't scale beyond proof-of-concepts. This talk demonstrates engineering practices that enable reliable AI deployment through standardized prompt templates, systematic validation frameworks, and production observability. Drawing from experience developing fillable prompt templates currently being validated in production environments processing thousands of submissions, I'll share how Infrastructure as Code principles apply to LLM workflows, why evaluation metrics like BLEU scores are critical for production reliability, and how systematic failure analysis prevents costly deployment issues. Attendees will walk away with understanding of practical frameworks for improving AI system reliability and specific strategies for building more consistent, scalable AI implementations. About the Speaker Jeanne McClure is a postdoctoral scholar at NC State's Data Science and AI Academy with expertise in systematic AI implementation and validation. Her research transforms experimental AI tools into reliable production systems through standardized prompt templates, rigorous testing frameworks, and systematic failure analysis. She holds a PhD in Learning, Design and Technology with additional graduate work in data science. Multimodality with Biases: Understand and Evaluate VLMs for Autonomous Driving with FiftyOne Do your VLMs really see danger? With FiftyOne, I’ll show you how to understand and evaluate vision-language models for autonomous driving — making risk and bias visible in seconds. 
We’ll compare models on the same scenes, reveal failures and edge cases, and you’ll see a simple dashboard to decide which data to curate and what to adjust. You’ll leave with a clear, practical, and replicable method to raise the bar for safety. About the Speaker Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia. The Heart of Innovation: Women, AI, and the Future of Healthcare This session explores how Artificial Intelligence is transforming healthcare by enhancing diagnosis, treatment, and patient outcomes. It highlights the importance of diverse and female perspectives in shaping AI solutions that are ethical, empathetic, and human-centered. We will discuss key applications, current challenges, and the future potential of AI in medicine. It’s a forward-looking conversation about how innovation can build a healthier world. About the Speaker Karen Sanchez is a Postdoctoral Researcher at the Center of Excellence for Generative AI at King Abdullah University of Science and Technology (KAUST), Saudi Arabia. Her research focuses on AI for Science, spanning computer vision, video understanding, and privacy-preserving machine learning. She is also an active advocate for diversity and outreach in AI, contributing to global initiatives that connect researchers and amplify underrepresented voices in technology. Language Diffusion Models Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). Challenge this notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. 
LLaDA models distributions through a forward data masking process and a reverse process, parameterized by a vanilla Transformer to predict masked tokens. Optimizing a likelihood bound provides a principled generative approach for probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue. About the Speaker Jayita Bhattacharyya is an AI/ML Nerd with a blend of technical speaking & hackathon wizardry! Applying tech to solve real-world problems. The work focus these days is on generative AI. Helping software teams incorporate AI into transforming software engineering. |
Nov 13 - Women in AI
|
|
Nov 13 - Women in AI
2025-11-13 · 17:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on November 13. Date and Location Nov 13, 2025 9 AM Pacific Online. Register for the Zoom! Copy, Paste, Customize! The Template Approach to AI Engineering Most AI implementations fail because teams treat prompt engineering as ad-hoc experimentation rather than systematic software engineering, leading to unreliable systems that don't scale beyond proof-of-concepts. This talk demonstrates engineering practices that enable reliable AI deployment through standardized prompt templates, systematic validation frameworks, and production observability. Drawing from experience developing fillable prompt templates currently being validated in production environments processing thousands of submissions, I'll share how Infrastructure as Code principles apply to LLM workflows, why evaluation metrics like BLEU scores are critical for production reliability, and how systematic failure analysis prevents costly deployment issues. Attendees will walk away with understanding of practical frameworks for improving AI system reliability and specific strategies for building more consistent, scalable AI implementations. About the Speaker Jeanne McClure is a postdoctoral scholar at NC State's Data Science and AI Academy with expertise in systematic AI implementation and validation. Her research transforms experimental AI tools into reliable production systems through standardized prompt templates, rigorous testing frameworks, and systematic failure analysis. She holds a PhD in Learning, Design and Technology with additional graduate work in data science. Multimodality with Biases: Understand and Evaluate VLMs for Autonomous Driving with FiftyOne Do your VLMs really see danger? With FiftyOne, I’ll show you how to understand and evaluate vision-language models for autonomous driving — making risk and bias visible in seconds. 
We’ll compare models on the same scenes, reveal failures and edge cases, and you’ll see a simple dashboard to decide which data to curate and what to adjust. You’ll leave with a clear, practical, and replicable method to raise the bar for safety. About the Speaker Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia. The Heart of Innovation: Women, AI, and the Future of Healthcare This session explores how Artificial Intelligence is transforming healthcare by enhancing diagnosis, treatment, and patient outcomes. It highlights the importance of diverse and female perspectives in shaping AI solutions that are ethical, empathetic, and human-centered. We will discuss key applications, current challenges, and the future potential of AI in medicine. It’s a forward-looking conversation about how innovation can build a healthier world. About the Speaker Karen Sanchez is a Postdoctoral Researcher at the Center of Excellence for Generative AI at King Abdullah University of Science and Technology (KAUST), Saudi Arabia. Her research focuses on AI for Science, spanning computer vision, video understanding, and privacy-preserving machine learning. She is also an active advocate for diversity and outreach in AI, contributing to global initiatives that connect researchers and amplify underrepresented voices in technology. Language Diffusion Models Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). Challenge this notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. 
LLaDA models distributions through a forward data masking process and a reverse process, parameterized by a vanilla Transformer to predict masked tokens. Optimizing a likelihood bound provides a principled generative approach for probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue. About the Speaker Jayita Bhattacharyya is an AI/ML Nerd with a blend of technical speaking & hackathon wizardry! Applying tech to solve real-world problems. The work focus these days is on generative AI. Helping software teams incorporate AI into transforming software engineering. |
Nov 13 - Women in AI
|
|
Nov 13 - Women in AI
2025-11-13 · 17:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on November 13. Date and Location Nov 13, 2025 9 AM Pacific Online. Register for the Zoom! Copy, Paste, Customize! The Template Approach to AI Engineering Most AI implementations fail because teams treat prompt engineering as ad-hoc experimentation rather than systematic software engineering, leading to unreliable systems that don't scale beyond proof-of-concepts. This talk demonstrates engineering practices that enable reliable AI deployment through standardized prompt templates, systematic validation frameworks, and production observability. Drawing from experience developing fillable prompt templates currently being validated in production environments processing thousands of submissions, I'll share how Infrastructure as Code principles apply to LLM workflows, why evaluation metrics like BLEU scores are critical for production reliability, and how systematic failure analysis prevents costly deployment issues. Attendees will walk away with understanding of practical frameworks for improving AI system reliability and specific strategies for building more consistent, scalable AI implementations. About the Speaker Jeanne McClure is a postdoctoral scholar at NC State's Data Science and AI Academy with expertise in systematic AI implementation and validation. Her research transforms experimental AI tools into reliable production systems through standardized prompt templates, rigorous testing frameworks, and systematic failure analysis. She holds a PhD in Learning, Design and Technology with additional graduate work in data science. Multimodality with Biases: Understand and Evaluate VLMs for Autonomous Driving with FiftyOne Do your VLMs really see danger? With FiftyOne, I’ll show you how to understand and evaluate vision-language models for autonomous driving — making risk and bias visible in seconds. 
We’ll compare models on the same scenes, reveal failures and edge cases, and you’ll see a simple dashboard to decide which data to curate and what to adjust. You’ll leave with a clear, practical, and replicable method to raise the bar for safety. About the Speaker Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia. The Heart of Innovation: Women, AI, and the Future of Healthcare This session explores how Artificial Intelligence is transforming healthcare by enhancing diagnosis, treatment, and patient outcomes. It highlights the importance of diverse and female perspectives in shaping AI solutions that are ethical, empathetic, and human-centered. We will discuss key applications, current challenges, and the future potential of AI in medicine. It’s a forward-looking conversation about how innovation can build a healthier world. About the Speaker Karen Sanchez is a Postdoctoral Researcher at the Center of Excellence for Generative AI at King Abdullah University of Science and Technology (KAUST), Saudi Arabia. Her research focuses on AI for Science, spanning computer vision, video understanding, and privacy-preserving machine learning. She is also an active advocate for diversity and outreach in AI, contributing to global initiatives that connect researchers and amplify underrepresented voices in technology. Language Diffusion Models Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). Challenge this notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. 
LLaDA models distributions through a forward data masking process and a reverse process, parameterized by a vanilla Transformer to predict masked tokens. Optimizing a likelihood bound provides a principled generative approach for probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue. About the Speaker Jayita Bhattacharyya is an AI/ML Nerd with a blend of technical speaking & hackathon wizardry! Applying tech to solve real-world problems. The work focus these days is on generative AI. Helping software teams incorporate AI into transforming software engineering. |
Nov 13 - Women in AI
|
|
Nov 13 - Women in AI
2025-11-13 · 17:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on November 13. Date and Location Nov 13, 2025 9 AM Pacific Online. Register for the Zoom! Copy, Paste, Customize! The Template Approach to AI Engineering Most AI implementations fail because teams treat prompt engineering as ad-hoc experimentation rather than systematic software engineering, leading to unreliable systems that don't scale beyond proof-of-concepts. This talk demonstrates engineering practices that enable reliable AI deployment through standardized prompt templates, systematic validation frameworks, and production observability. Drawing from experience developing fillable prompt templates currently being validated in production environments processing thousands of submissions, I'll share how Infrastructure as Code principles apply to LLM workflows, why evaluation metrics like BLEU scores are critical for production reliability, and how systematic failure analysis prevents costly deployment issues. Attendees will walk away with understanding of practical frameworks for improving AI system reliability and specific strategies for building more consistent, scalable AI implementations. About the Speaker Jeanne McClure is a postdoctoral scholar at NC State's Data Science and AI Academy with expertise in systematic AI implementation and validation. Her research transforms experimental AI tools into reliable production systems through standardized prompt templates, rigorous testing frameworks, and systematic failure analysis. She holds a PhD in Learning, Design and Technology with additional graduate work in data science. Multimodality with Biases: Understand and Evaluate VLMs for Autonomous Driving with FiftyOne Do your VLMs really see danger? With FiftyOne, I’ll show you how to understand and evaluate vision-language models for autonomous driving — making risk and bias visible in seconds. 
We’ll compare models on the same scenes, reveal failures and edge cases, and you’ll see a simple dashboard to decide which data to curate and what to adjust. You’ll leave with a clear, practical, and replicable method to raise the bar for safety. About the Speaker Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia. The Heart of Innovation: Women, AI, and the Future of Healthcare This session explores how Artificial Intelligence is transforming healthcare by enhancing diagnosis, treatment, and patient outcomes. It highlights the importance of diverse and female perspectives in shaping AI solutions that are ethical, empathetic, and human-centered. We will discuss key applications, current challenges, and the future potential of AI in medicine. It’s a forward-looking conversation about how innovation can build a healthier world. About the Speaker Karen Sanchez is a Postdoctoral Researcher at the Center of Excellence for Generative AI at King Abdullah University of Science and Technology (KAUST), Saudi Arabia. Her research focuses on AI for Science, spanning computer vision, video understanding, and privacy-preserving machine learning. She is also an active advocate for diversity and outreach in AI, contributing to global initiatives that connect researchers and amplify underrepresented voices in technology. Language Diffusion Models Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). Challenge this notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. 
LLaDA models distributions through a forward data masking process and a reverse process, parameterized by a vanilla Transformer to predict masked tokens. Optimizing a likelihood bound provides a principled generative approach for probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue. About the Speaker Jayita Bhattacharyya is an AI/ML Nerd with a blend of technical speaking & hackathon wizardry! Applying tech to solve real-world problems. The work focus these days is on generative AI. Helping software teams incorporate AI into transforming software engineering. |
Nov 13 - Women in AI
|
|
Nov 13 - Women in AI
2025-11-13 · 17:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on November 13. Date and Location Nov 13, 2025 9 AM Pacific Online. Register for the Zoom! Copy, Paste, Customize! The Template Approach to AI Engineering Most AI implementations fail because teams treat prompt engineering as ad-hoc experimentation rather than systematic software engineering, leading to unreliable systems that don't scale beyond proof-of-concepts. This talk demonstrates engineering practices that enable reliable AI deployment through standardized prompt templates, systematic validation frameworks, and production observability. Drawing from experience developing fillable prompt templates currently being validated in production environments processing thousands of submissions, I'll share how Infrastructure as Code principles apply to LLM workflows, why evaluation metrics like BLEU scores are critical for production reliability, and how systematic failure analysis prevents costly deployment issues. Attendees will walk away with understanding of practical frameworks for improving AI system reliability and specific strategies for building more consistent, scalable AI implementations. About the Speaker Jeanne McClure is a postdoctoral scholar at NC State's Data Science and AI Academy with expertise in systematic AI implementation and validation. Her research transforms experimental AI tools into reliable production systems through standardized prompt templates, rigorous testing frameworks, and systematic failure analysis. She holds a PhD in Learning, Design and Technology with additional graduate work in data science. Multimodality with Biases: Understand and Evaluate VLMs for Autonomous Driving with FiftyOne Do your VLMs really see danger? With FiftyOne, I’ll show you how to understand and evaluate vision-language models for autonomous driving — making risk and bias visible in seconds. 
We’ll compare models on the same scenes, reveal failures and edge cases, and you’ll see a simple dashboard to decide which data to curate and what to adjust. You’ll leave with a clear, practical, and replicable method to raise the bar for safety. About the Speaker Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia. The Heart of Innovation: Women, AI, and the Future of Healthcare This session explores how Artificial Intelligence is transforming healthcare by enhancing diagnosis, treatment, and patient outcomes. It highlights the importance of diverse and female perspectives in shaping AI solutions that are ethical, empathetic, and human-centered. We will discuss key applications, current challenges, and the future potential of AI in medicine. It’s a forward-looking conversation about how innovation can build a healthier world. About the Speaker Karen Sanchez is a Postdoctoral Researcher at the Center of Excellence for Generative AI at King Abdullah University of Science and Technology (KAUST), Saudi Arabia. Her research focuses on AI for Science, spanning computer vision, video understanding, and privacy-preserving machine learning. She is also an active advocate for diversity and outreach in AI, contributing to global initiatives that connect researchers and amplify underrepresented voices in technology. Language Diffusion Models Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). Challenge this notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. 
LLaDA models distributions through a forward data masking process and a reverse process, parameterized by a vanilla Transformer to predict masked tokens. Optimizing a likelihood bound provides a principled generative approach for probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue. About the Speaker Jayita Bhattacharyya is an AI/ML Nerd with a blend of technical speaking & hackathon wizardry! Applying tech to solve real-world problems. The work focus these days is on generative AI. Helping software teams incorporate AI into transforming software engineering. |
Nov 13 - Women in AI
|
|
Nov 13 - Women in AI
2025-11-13 · 17:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on November 13. Date and Location Nov 13, 2025 9 AM Pacific Online. Register for the Zoom! Copy, Paste, Customize! The Template Approach to AI Engineering Most AI implementations fail because teams treat prompt engineering as ad-hoc experimentation rather than systematic software engineering, leading to unreliable systems that don't scale beyond proof-of-concepts. This talk demonstrates engineering practices that enable reliable AI deployment through standardized prompt templates, systematic validation frameworks, and production observability. Drawing from experience developing fillable prompt templates currently being validated in production environments processing thousands of submissions, I'll share how Infrastructure as Code principles apply to LLM workflows, why evaluation metrics like BLEU scores are critical for production reliability, and how systematic failure analysis prevents costly deployment issues. Attendees will walk away with understanding of practical frameworks for improving AI system reliability and specific strategies for building more consistent, scalable AI implementations. About the Speaker Jeanne McClure is a postdoctoral scholar at NC State's Data Science and AI Academy with expertise in systematic AI implementation and validation. Her research transforms experimental AI tools into reliable production systems through standardized prompt templates, rigorous testing frameworks, and systematic failure analysis. She holds a PhD in Learning, Design and Technology with additional graduate work in data science. Multimodality with Biases: Understand and Evaluate VLMs for Autonomous Driving with FiftyOne Do your VLMs really see danger? With FiftyOne, I’ll show you how to understand and evaluate vision-language models for autonomous driving — making risk and bias visible in seconds. 
We’ll compare models on the same scenes, reveal failures and edge cases, and you’ll see a simple dashboard to decide which data to curate and what to adjust. You’ll leave with a clear, practical, and replicable method to raise the bar for safety. About the Speaker Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia. The Heart of Innovation: Women, AI, and the Future of Healthcare This session explores how Artificial Intelligence is transforming healthcare by enhancing diagnosis, treatment, and patient outcomes. It highlights the importance of diverse and female perspectives in shaping AI solutions that are ethical, empathetic, and human-centered. We will discuss key applications, current challenges, and the future potential of AI in medicine. It’s a forward-looking conversation about how innovation can build a healthier world. About the Speaker Karen Sanchez is a Postdoctoral Researcher at the Center of Excellence for Generative AI at King Abdullah University of Science and Technology (KAUST), Saudi Arabia. Her research focuses on AI for Science, spanning computer vision, video understanding, and privacy-preserving machine learning. She is also an active advocate for diversity and outreach in AI, contributing to global initiatives that connect researchers and amplify underrepresented voices in technology. Language Diffusion Models Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). Challenge this notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. 
LLaDA models distributions through a forward data masking process and a reverse process, parameterized by a vanilla Transformer to predict masked tokens. Optimizing a likelihood bound provides a principled generative approach for probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue. About the Speaker Jayita Bhattacharyya is an AI/ML nerd who blends technical speaking with hackathon wizardry, applying tech to solve real-world problems. Her current focus is generative AI, helping software teams incorporate AI to transform software engineering. |
Nov 13 - Women in AI
|
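The forward data masking process described in the LLaDA abstract above can be sketched in a few lines. This is a toy illustration under assumed conventions (a hypothetical `MASK` token id, independent per-token masking at ratio `t`), not the actual LLaDA implementation:

```python
import random

MASK = -1  # hypothetical mask token id, stands in for a real vocabulary entry

def forward_mask(tokens, t, seed=0):
    """Toy forward process: replace each token with MASK independently
    with probability t, and record the positions/values to reconstruct."""
    rng = random.Random(seed)
    masked, targets = [], []
    for i, tok in enumerate(tokens):
        if rng.random() < t:
            masked.append(MASK)
            targets.append((i, tok))  # the mask predictor must recover these
        else:
            masked.append(tok)
    return masked, targets

seq = [5, 17, 42, 8, 99]
masked, targets = forward_mask(seq, t=0.5)
# A mask predictor (in LLaDA, a vanilla Transformer) is trained to recover
# `targets` from `masked`; at t=1 the whole sequence is masked, at t=0 nothing is.
```

The reverse process then iteratively fills in masked positions, which is what the talk's likelihood-bound training objective optimizes.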
|
June 27 - Visual AI in Healthcare
2025-06-27 · 16:00
Join us for the third of several virtual events focused on the latest research, datasets and models at the intersection of visual AI and healthcare. When June 27 at 9 AM Pacific Where Online. Register for the Zoom! MedVAE: Efficient Automated Interpretation of Medical Images with Large-Scale Generalizable Autoencoders We present MedVAE, a family of six generalizable 2D and 3D variational autoencoders trained on over one million images from 19 open-source medical imaging datasets using a novel two-stage training strategy. MedVAE downsizes high-dimensional medical images into compact latent representations, reducing storage by up to 512× and accelerating downstream tasks by up to 70× while preserving clinically relevant features. We demonstrate across 20 evaluation tasks that these latent representations can replace high-resolution images in computer-aided diagnosis pipelines without compromising performance. MedVAE is open-source with a streamlined finetuning pipeline and inference engine, enabling scalable model development in resource-constrained medical imaging settings. About the Speakers Ashwin Kumar is a PhD Candidate in Biomedical Physics at Stanford University, advised by Akshay Chaudhari and Greg Zaharchuk. He focuses on developing deep learning methodologies to advance medical image acquisition and analysis. Maya Varma is a PhD student in computer science at Stanford University. Her research focuses on the development of artificial intelligence methods for addressing healthcare challenges, with a particular focus on medical imaging applications. Leveraging Foundation Models for Pathology: Progress and Pitfalls How do you train ML models on pathology slides that are thousands of times larger than standard images? Foundation models offer a breakthrough approach to these gigapixel-scale challenges. This talk explores how self-supervised foundation models trained on broad histopathology datasets are transforming computational pathology. 
We’ll examine their progress in handling weakly-supervised learning, managing tissue preparation variations, and enabling rapid prototyping with minimal labeled examples. However, significant challenges remain: increasing computational demands, the potential for bias, and questions about generalizability across diverse populations. This talk will offer a balanced perspective to help separate foundation model hype from genuine clinical value. About the Speaker Heather D. Couture is a consultant and founder of Pixel Scientia Labs, where she partners with mission-driven founders and R&D teams to support applications of computer vision for people and planetary health. She has a PhD in Computer Science and has published in top-tier computer vision and medical imaging venues. She hosts the Impact AI Podcast and writes regularly on LinkedIn, for her newsletter Computer Vision Insights, and for a variety of other publications. LesionLocator: Zero-Shot Universal Tumor Segmentation and Tracking in 3D Whole-Body Imaging Recent advances in promptable segmentation have transformed medical imaging workflows, yet most existing models are constrained to static 2D or 3D applications. This talk presents LesionLocator, the first end-to-end framework for universal 4D lesion segmentation and tracking using dense spatial prompts. The system enables zero-shot tumor analysis across whole-body 3D scans and multiple timepoints, propagating a single user prompt through longitudinal follow-ups to segment and track lesion progression. Trained on over 23,000 annotated scans and supplemented with a synthetic time-series dataset, LesionLocator achieves human-level performance in segmentation and outperforms state-of-the-art baselines in longitudinal tracking tasks. The presentation also highlights advances in 3D interactive segmentation, including our open-set tool nnInteractive, showing how spatial prompting can scale from user-guided interaction to clinical-grade automation. 
About the Speaker Maximilian Rokussis is a PhD scholar at the German Cancer Research Center (DKFZ), working in the Division of Medical Image Computing under Klaus Maier-Hein. He focuses on 3D multimodal and multi-timepoint segmentation with spatial and text prompts. With several MICCAI challenge wins and first-author publications at CVPR and MICCAI, he co-leads the Helmholtz Medical Foundation Model initiative and develops AI solutions at the interface of research and clinical radiology. LLMs for Smarter Diagnosis: Unlocking the Future of AI in Healthcare Large Language Models are rapidly transforming the healthcare landscape. In this talk, I will explore how LLMs like GPT-4 and DeepSeek-R1 are being used to support disease diagnosis, predict chronic conditions, and assist medical professionals without relying on sensitive patient data. Drawing from my published research and real-world applications, I’ll discuss the technical challenges, ethical considerations, and the future potential of integrating LLMs in clinical settings. The talk will offer valuable insights for developers, researchers, and healthcare innovators interested in applying AI responsibly and effectively. About the Speaker Gaurav K Gupta graduated from Youngstown State University with a Bachelor’s degree in Computer Science and Mathematics. |
June 27 - Visual AI in Healthcare
|
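The storage figures quoted in the MedVAE abstract above follow from simple arithmetic: downsampling each spatial axis of a volume by a factor f shrinks the element count by f raised to the number of dimensions, so an 8× per-axis reduction on a 3D scan gives the quoted 512×. A back-of-the-envelope sketch (the actual MedVAE latent shapes and channel counts may differ):

```python
def compression_factor(per_axis_factor, ndim):
    """Total element-count reduction from downsampling every one of
    `ndim` spatial axes by `per_axis_factor`."""
    return per_axis_factor ** ndim

# 3D volume, 8x smaller along each axis -> 512x fewer elements overall
print(compression_factor(8, 3))  # 512
# 2D image, 8x per axis -> 64x
print(compression_factor(8, 2))  # 64
```

This is why the same per-axis factor yields much larger savings for 3D volumes than for 2D images, and why compact latents can also accelerate downstream models that would otherwise process full-resolution voxels.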
We’ll examine their progress in handling weakly-supervised learning, managing tissue preparation variations, and enabling rapid prototyping with minimal labeled examples. However, significant challenges remain: increasing computational demands, the potential for bias, and questions about generalizability across diverse populations. This talk will offer a balanced perspective to help separate foundation model hype from genuine clinical value. About the Speaker Heather D. Couture is a consultant and founder of Pixel Scientia Labs, where she partners with mission-driven founders and R&D teams to support applications of computer vision for people and planetary health. She has a PhD in Computer Science and has published in top-tier computer vision and medical imaging venues. She hosts the Impact AI Podcast and writes regularly on LinkedIn, for her newsletter Computer Vision Insights, and for a variety of other publications. LesionLocator: Zero-Shot Universal Tumor Segmentation and Tracking in 3D Whole-Body Imaging Recent advances in promptable segmentation have transformed medical imaging workflows, yet most existing models are constrained to static 2D or 3D applications. This talk presents LesionLocator, the first end-to-end framework for universal 4D lesion segmentation and tracking using dense spatial prompts. The system enables zero-shot tumor analysis across whole-body 3D scans and multiple timepoints, propagating a single user prompt through longitudinal follow-ups to segment and track lesion progression. Trained on over 23,000 annotated scans and supplemented with a synthetic time-series dataset, LesionLocator achieves human-level performance in segmentation and outperforms state-of-the-art baselines in longitudinal tracking tasks. The presentation also highlights advances in 3D interactive segmentation, including our open-set tool nnInteractive, showing how spatial prompting can scale from user-guided interaction to clinical-grade automation. 
About the Speaker Maximilian Rokuss is a PhD scholar at the German Cancer Research Center (DKFZ), working in the Division of Medical Image Computing under Klaus Maier-Hein. He focuses on 3D multimodal and multi-timepoint segmentation with spatial and text prompts. With several MICCAI challenge wins and first-author publications at CVPR and MICCAI, he co-leads the Helmholtz Medical Foundation Model initiative and develops AI solutions at the interface of research and clinical radiology. LLMs for Smarter Diagnosis: Unlocking the Future of AI in Healthcare Large Language Models are rapidly transforming the healthcare landscape. In this talk, I will explore how LLMs like GPT-4 and DeepSeek-R1 are being used to support disease diagnosis, predict chronic conditions, and assist medical professionals without relying on sensitive patient data. Drawing from my published research and real-world applications, I’ll discuss the technical challenges, ethical considerations, and the future potential of integrating LLMs in clinical settings. The talk will offer valuable insights for developers, researchers, and healthcare innovators interested in applying AI responsibly and effectively. About the Speaker Gaurav K Gupta graduated from Youngstown State University with a Bachelor’s in Computer Science and Mathematics. |
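The storage figures quoted for MedVAE follow directly from per-axis downsizing: shrinking each spatial dimension of a 3D volume by 8× reduces the voxel count by 8³ = 512×. A minimal numeric sketch of that arithmetic (plain subsampling for illustration only, not MedVAE's learned encoder):

```python
import numpy as np

# A dummy 3D "scan": 256^3 voxels (illustrative shape, not a real dataset).
volume = np.zeros((256, 256, 256), dtype=np.float32)

# Downsample by 8x along each spatial axis, as a stand-in for an encoder
# that maps the volume to a compact latent representation.
factor = 8
latent = volume[::factor, ::factor, ::factor]

# Storage shrinks by factor^3 = 512x for a 3D volume.
reduction = volume.size / latent.size
print(reduction)  # 512.0
```

MedVAE's actual encoders learn the latent representation rather than subsample, which is why clinically relevant features can survive the compression, but the storage arithmetic is the same.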
June 27 - Visual AI in Healthcare
|
|
21st Eindhoven Data Community meetup: Scaled AI in enterprises
2025-05-13 · 15:00
This 21st Eindhoven Data Community meetup will feature two sessions focused on leveraging AI technologies for performance evaluation and operational efficiency. The first session will discuss the challenges of evaluating Large Language Models (LLMs) at scale, highlighting the use of LLM-as-a-Judge systems and the implementation of a scalable evaluation framework using Vertex AI and Gemini on Google Cloud. The second session will introduce Menu AI at Just Eat Takeaway, a solution designed to automate the transcription of restaurant menus, significantly reducing the time and effort required for this task. You will learn about the cloud architecture and multimodal models used for parsing menu images and extracting structured data. Overall, the event will showcase innovative strategies for enhancing AI application evaluations and improving operational processes. Location: Skybar, Microlab Eindhoven, Kastanjelaan 400, Eindhoven. Evaluating LLM applications at scale As LLM deployments grow, evaluating their performance effectively and efficiently becomes critically important. Standard metrics struggle with the heterogeneity of LLM outputs, and manual expert review doesn't scale. LLM-as-a-Judge systems promise automation but require careful implementation to handle domain-specific jargon and ensure alignment with human standards. This session dives into practical solutions for these evaluation challenges, grounded in a large-scale project evaluating a customer-facing chatbot handling over one million conversations per year. We will explore strategies for overcoming both conceptual hurdles (like judge alignment and context awareness) and technical bottlenecks (including cost optimization, data throughput, and robust API interaction). Learn how we leveraged the power of Vertex AI and Gemini on Google Cloud to implement a scalable, reliable, and insightful LLM evaluation framework.
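The judge pattern this session covers can be sketched in a few lines. Everything below is hypothetical: the stub heuristic stands in for a real rubric-driven call to a model such as Gemini on Vertex AI, and the aggregation is the simplest possible.

```python
def judge_response(question: str, answer: str) -> int:
    """Stub judge. A production LLM-as-a-Judge would prompt a model
    (e.g. Gemini on Vertex AI) with a scoring rubric and parse a 1-5
    grade from its reply; this heuristic just lets the harness run."""
    return 5 if answer.strip() else 1

def evaluate(conversations):
    """Score each (question, answer) pair and aggregate into a report."""
    scores = [judge_response(q, a) for q, a in conversations]
    return {"mean_score": sum(scores) / len(scores), "n": len(scores)}

report = evaluate([
    ("What are your delivery hours?", "We deliver 9am to 9pm daily."),
    ("Can I change my order?", ""),  # empty answer scores low
])
print(report)  # {'mean_score': 3.0, 'n': 2}
```

Swapping the stub for a real model call leaves the harness unchanged, which is what makes judge alignment testable: run the same harness over a human-labeled sample and compare the scores.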
Decoding Culinary Complexity: Transforming Menu Transcription with Menu AI at Just Eat Takeaway Step into the shoes of a team faced with the herculean task of transcribing restaurant menus by hand. Each menu, a labyrinth of culinary offerings, can take an excruciating 2-4 hours to decode, demanding unwavering attention to detail amidst a jungle of artistic fonts and intricate designs. Now multiply that by 1,700, the number of menus that land on JET's desks every month in the UK alone. This is the very pain point that Menu AI promises to alleviate, setting the scene for a transformative solution we're eager to share with you. Join us for an insightful session on how JET integrates restaurant menus into its platform. Our talk will delve into the intricacies of Menu AI, from the cloud architecture to the parsing of restaurant menu photos, and how it augments the productivity of humans in the loop. We will also get under the hood on how to leverage the power of multimodal models and their vision capabilities for parsing photos, describing menus as structured data, and mapping relationships among menu items. Moreover, we'll share insights into the significant cost savings the project realized in Customer Service Operations. Program
Sander van Donkelaar \| AI/ML Engineer at Xebia Data Sander is an AI/ML Engineer skilled in building AI products and platforms. Experienced across diverse industries, Sander has a proven track record of delivering innovative solutions. At the core of Sander's expertise is the ability to translate complex business problems into tangible AI-powered solutions, driving efficiency, innovation, and data-driven decision-making for organizations. Caio Benatti Moretti \| AI Consultant at Xebia Data Caio holds a PhD in Computer Science and has worked as a DS/MLE in both academia and industry since 2014. Currently an AI Consultant at Xebia, he created SlackGPT and is particularly keen on neural networks in their many forms and applications. His enthusiasm even led him to make a neural network fit inside a business card. Apart from practical experience, Caio has given seminars and training sessions on how to empower businesses with LLMs, from use cases to technical tooling. He focuses on how LLMs can augment human productivity, thereby helping businesses leverage novel technologies to achieve their goals. |
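As a rough illustration of the "structured data" half of a menu-transcription pipeline, the sketch below parses the kind of JSON a multimodal model might be prompted to emit for a menu photo. The schema and field names are invented for this example, not Just Eat Takeaway's actual format.

```python
import json

# Hypothetical structured output a multimodal model might return after
# being prompted to transcribe a menu photo into JSON.
model_output = """
{
  "sections": [
    {"name": "Pizza",
     "items": [
       {"name": "Margherita", "price": 8.50, "options": ["small", "large"]},
       {"name": "Pepperoni", "price": 10.00, "options": []}
     ]}
  ]
}
"""

def flatten_menu(raw: str):
    """Parse the model's JSON and flatten it into rows, preserving the
    section -> item relationship that mapping menu structure requires."""
    menu = json.loads(raw)
    rows = []
    for section in menu["sections"]:
        for item in section["items"]:
            rows.append({"section": section["name"],
                         "item": item["name"],
                         "price": item["price"]})
    return rows

rows = flatten_menu(model_output)
print(len(rows))  # 2
```

Keeping the human in the loop then becomes a review step over these rows rather than a 2-4 hour transcription from scratch.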
21st Eindhoven Data Community meetup: Scaled AI in enterprises
|
|
Eindhoven Data Community meetup 18 - Datenna
2024-10-29 · 16:00
We’re excited to announce our upcoming meetup in collaboration with Datenna, a pioneering scale-up based in Eindhoven. This event promises to be a deep dive into the innovative use of data and technology, showcasing cutting-edge applications that are shaping the future of open-source intelligence. And all this brought to you by the CTO and founder of Datenna, Edward Brinkmann. In addition, Shu and Remi are sharing what they learned from building a tool to annotate LLM outputs. Sounds interesting, right? Datenna is our host this time and will open the doors of their office on October 29th. See you then! How Datenna built a digital twin of China using graphs and GenAI How do you create a detailed and reliable digital twin of one of the largest economies in the world? How do you ensure that the data being collected from open sources is trustworthy? How do you handle conflicting pieces of information, merge entities across data sources, and ensure every conclusion is explainable and traceable back to the source? These are some of the challenges Datenna tackles daily in its mission to provide the best open-source intelligence to governments worldwide for economic and national security purposes. Discover how Datenna leverages graph databases and GenAI technology to build an open-source intelligence engine that continuously collects information on over 100 million entities in China, mapping all these entities and their relationships. Learn how Datenna, a scale-up founded in Eindhoven, has used these novel technologies to gain a competitive edge globally and become a world leader in techno-economic intelligence on China. What we've learnt from building a tool to annotate LLM outputs LLMs can take files, audio, and video as input and generate summaries, answer questions, and extract information.
With the Gemini family of models capable of supporting up to 1 million tokens in their context window, users can feed a PDF of hundreds of pages into these models and output only the results they care about. However, the outputs may contain errors. In this talk, the presenters will share their learnings from building a tool that enables the manual annotation and evaluation of these models' outputs based on a collection of models chosen by the users. They find the comparison results interesting and would like to share them with the audience. Program
About: Edward Brinkmann As the CTO and co-founder of Datenna, Edward has guided the company through various stages of growth, transforming it from a technology start-up into a thriving scale-up. His first role as CTO was as founding engineer, implementing the first versions of the intelligence platform, and later as engineering manager and lead architect whilst expanding the development team. With a background in software engineering, data engineering, and systems architecture, Edward has a broad interest in technology, especially in translating business needs into the most suitable technical solutions. Before co-founding Datenna, Edward enjoyed working on end-to-end projects as a lead developer and full-stack engineer, gaining experience across various business domains, use cases, and technologies. About: Shu Zhao Shu has an MSc in Artificial Intelligence; part of her thesis, on computer-vision-based artistic pose analysis, was published at ECCV 2022. She also participated in the AI Song Contest 2021, producing a song by training an RNN-based language model. Before joining Xebia Data, Shu worked in various roles, from large banks to smaller fintechs to e-commerce, where she accumulated a wide span of technical skills for building sustainable solutions. About: Remi Baar Fifteen years ago, at just 17, Remi launched his own software development company, quickly focusing on the exciting fields of artificial intelligence and data science. Since then, he has held various data science roles across a diverse range of organizations, from startups to multinational corporations, and from government agencies to airlines. His unique blend of software engineering expertise and data science has garnered him recognition and appreciation in each of these positions. Currently, Remi is a valued member of the Xebia team, collaborating with fellow experts to enhance their collective skills and push the limits of AI.
With a passion for knowledge sharing, Remi eagerly shares his latest insights. |
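One of the challenges named in the Datenna talk, merging entities across data sources while keeping every conclusion traceable back to its source, can be sketched with per-field provenance. The conflict rule and field names below are illustrative only, not Datenna's actual approach.

```python
def merge_entities(records):
    """Merge records believed to describe the same entity, keeping
    per-field provenance so every value stays traceable to a source.
    Conflict rule here (illustrative): first source wins, and later
    conflicting values are retained as alternatives."""
    merged = {}
    for rec in records:
        source = rec["source"]
        for field, value in rec["fields"].items():
            if field not in merged:
                merged[field] = {"value": value, "source": source,
                                 "alternatives": []}
            elif merged[field]["value"] != value:
                merged[field]["alternatives"].append(
                    {"value": value, "source": source})
    return merged

entity = merge_entities([
    {"source": "registry_a",
     "fields": {"name": "Acme Ltd", "city": "Shenzhen"}},
    {"source": "registry_b",
     "fields": {"name": "ACME Limited", "city": "Shenzhen"}},
])
print(entity["name"]["source"])  # registry_a
```

In a graph database the same idea shows up as edges from each attribute back to the document it was extracted from, which is what makes conclusions explainable.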
Eindhoven Data Community meetup 18 - Datenna
|
|
EP05: LLMs in Computer Vision
2023-11-21 · 17:30
In this session, you'll discover how LLMs (Large Language Models) have revolutionized the Azure AI Vision service, unlocking unprecedented scenarios, including the remarkable world of image generation. What You'll Learn: The Power of LLMs in Computer Vision: Explore how Large Language Models are transforming the landscape of computer vision, pushing the boundaries of accuracy and capability in tasks such as image classification, object detection, and image captioning. Enhancements to Azure AI Vision: Dive deep into the Microsoft Florence foundation model and see how it has elevated the Azure AI Vision service. Learn about new scenarios it unlocks and improvements to existing capabilities. Image Generation with DALL·E 2: Venture into the fascinating world of image generation tasks enabled by the DALL·E 2 model, available through the Azure OpenAI Service. Witness the creative potential of this cutting-edge technology. Prepare to be inspired as we showcase practical examples of embedding these capabilities into your solutions via REST APIs. Join us on this visual journey into the future of Computer Vision, where words are just the beginning. Check out our Microsoft Learn collection of curated free training to advance your skills: https://aka.ms/YourAI_Idea |
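For a head start on the REST API examples, here is a sketch that only constructs (and never sends) a caption request against the Azure AI Vision Image Analysis endpoint. The URL path and api-version are assumptions based on the Image Analysis 4.0 API and should be checked against the current Azure documentation; the resource name and key are placeholders.

```python
import json
from urllib.request import Request

# Placeholders -- a real call needs your own resource name and key.
RESOURCE = "my-vision-resource"   # hypothetical resource name
KEY = "<subscription-key>"        # placeholder, never a real key

def build_caption_request(image_url: str) -> Request:
    """Build (but do not send) an Image Analysis request asking for a
    caption. Path and api-version are assumptions and may need updating."""
    endpoint = (f"https://{RESOURCE}.cognitiveservices.azure.com/"
                "computervision/imageanalysis:analyze"
                "?api-version=2023-10-01&features=caption")
    body = json.dumps({"url": image_url}).encode()
    return Request(endpoint, data=body, method="POST", headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    })

req = build_caption_request("https://example.com/photo.jpg")
print(req.get_method())  # POST
```

Sending the request with `urllib.request.urlopen(req)` against a real resource would return a JSON body containing the generated caption.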
EP05: LLMs in Computer Vision
|
|
Transforming Computer Vision with LLMs
2023-11-15 · 20:00
Large language models (LLMs) are revolutionizing the way we interact with computers and the world around us. However, in order to truly understand the world, LLM-powered agents need to be able to see. While vision-language models present a promising pathway to such multimodal understanding, it turns out that text-only LLMs can achieve remarkable success with prompting and tool use. In this talk, Jacob Marks will give an overview of key LLM-centered projects that are transforming the field of computer vision, such as VisProg, ViperGPT, VoxelGPT, and HuggingGPT. He will also discuss his first-hand experience of building VoxelGPT, shedding light on the challenges and lessons learned, as well as a practitioner’s insights into domain-specific prompt engineering. He will conclude with his thoughts on the future of LLMs in computer vision. This event is open to all and is especially relevant for researchers and practitioners interested in computer vision, generative AI, LLMs, and machine learning. RSVP now for an enlightening session! |
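The prompting-and-tool-use pattern this talk describes can be caricatured in a few lines: the text-only model never sees pixels, it only decides which vision tool to call. All names and the keyword routing below are illustrative stand-ins, not how VoxelGPT or ViperGPT actually work.

```python
# Vision "tools" the text-only LLM can invoke. The detector is stubbed;
# in practice it would wrap a real object-detection model.
def detect_objects(image_path: str) -> list[str]:
    return ["car", "pedestrian", "traffic light"]  # stubbed detections

TOOLS = {"detect_objects": detect_objects}

def fake_llm(prompt: str) -> str:
    """Stub LLM. A real system would prompt GPT-4 or similar to choose a
    tool; keyword routing here lets the example run standalone."""
    if "what is in" in prompt.lower():
        return "CALL detect_objects"
    return "ANSWER I need a vision tool for that."

def agent(query: str, image_path: str) -> str:
    """A text-only LLM 'sees' by delegating to vision tools."""
    decision = fake_llm(query)
    if decision.startswith("CALL "):
        tool = TOOLS[decision.removeprefix("CALL ")]
        return ", ".join(tool(image_path))
    return decision.removeprefix("ANSWER ")

print(agent("What is in this image?", "street.jpg"))
# car, pedestrian, traffic light
```

Projects like VisProg and ViperGPT extend this idea by having the LLM emit executable programs over vision primitives rather than single tool calls.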
Transforming Computer Vision with LLMs
|
|