talk-data.com


Activities & events

Event: AWS re:Invent 2025 · 2025-12-07

Amazon OpenSearch Service lets you search billions of vectors in milliseconds with high accuracy to support semantic search and power generative AI. Learn how we're democratizing vector search and accelerating AI application development with vector index GPU acceleration and auto-optimization on Amazon OpenSearch Service. These new features allow you to build a billion-scale vector database in under an hour and index vectors 10x faster at only a quarter of the cost, while auto-optimizing for search speed, quality, and cost.
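To make the vector-search idea concrete, here is a minimal sketch of what an OpenSearch k-NN query body looks like. The index field name `embedding`, the builder function, and the toy 4-dimensional vector are illustrative assumptions, not details from the session; production embeddings typically have hundreds to thousands of dimensions.

```python
# Sketch of an OpenSearch k-NN (vector) query body. The field name
# "embedding" is hypothetical; only the body shape follows the k-NN
# query DSL used for knn_vector fields.

def knn_query(vector, k=10, field="embedding"):
    """Build a search body that retrieves the k nearest vectors."""
    return {
        "size": k,
        "query": {"knn": {field: {"vector": vector, "k": k}}},
    }

# A toy 4-dimensional embedding; real semantic-search vectors are larger.
body = knn_query([0.1, 0.2, 0.3, 0.4], k=3)
print(body["query"]["knn"]["embedding"]["k"])
```

In practice this body would be passed to an OpenSearch client's `search` call against an index whose mapping declares the field as a `knn_vector`.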

Learn More: More AWS events: https://go.aws/3kss9CP

Subscribe: More AWS videos: http://bit.ly/2O3zS75 More AWS events videos: http://bit.ly/316g9t4

ABOUT AWS: Amazon Web Services (AWS) hosts events, both online and in-person, bringing the cloud computing community together to connect, collaborate, and learn from AWS experts. AWS is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.

#AWSreInvent #AWSreInvent2025 #AWS

Agile/Scrum AI/ML AWS Cloud Computing GenAI Vector DB

Nova Forge is a first-of-its-kind service that offers organizations the easiest and most cost-effective way to build their own frontier models using Amazon Nova. As organizations deploy generative AI in production, they need models that embody their proprietary knowledge, understand their workflows, and meet their requirements. Learn about the shortcomings of the custom model development options available today, and how Nova Forge is democratizing frontier model development. Hear from Reddit's Sr. Director of ML Content and Platform on how they are leveraging Nova Forge to implement and scale AI across Reddit.


#AWSreInvent #AWSreInvent2025 #AWS

Agile/Scrum AI/ML AWS Cloud Computing GenAI
William Brennan @ Blue Origin

William Brennan shares how Blue Origin uses Agentic AI to accelerate space exploration and rocket development by democratizing AI adoption company-wide to reach their goal of enabling multiple rocket launches with one person.

Learn more about AWS events: https://go.aws/events


#AWSreInvent #AWSEvents

Agile/Scrum AI/ML AWS Cloud Computing

Bridging the Gap: Using Generative AI for Audience Insight & Segmentation

Seats are limited to 16 attendees. Register here to save your spot. 

https://www.snowflake.com/event/marketing-data-stack-roundtable-swt-amsterdam-2025/

This roundtable explores how generative AI (GenAI) is revolutionizing audience segmentation and insights. The discussion will focus on practical, in-the-moment applications that empower marketers and media professionals to move beyond static data analysis. We will examine how GenAI tools, like those available natively on Snowflake Cortex, can translate complex data filters into rich, narrative-driven audience descriptions. 

The conversation will also highlight how GenAI capabilities streamline workflows by allowing users to build audience segments using natural language, democratizing access to data and accelerating decision-making. The goal is to provide a clear, concise, and actionable understanding of how GenAI is bridging the gap between raw data and powerful, human-centric insights.
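The "build audience segments using natural language" workflow can be sketched in miniature. A real deployment would hand the request to an LLM (for example via Snowflake Cortex); the keyword rules below are a deliberately simple stand-in so the translation step is concrete. All field names and thresholds are hypothetical.

```python
# Toy illustration of "natural language -> audience segment filters".
# Keyword matching stands in for the LLM call a production system
# would make; the filter fields and thresholds are made up.

def parse_segment(request: str) -> dict:
    filters = {}
    text = request.lower()
    if "lapsed" in text:                       # no purchase in 90+ days
        filters["last_purchase_days"] = {"gt": 90}
    if "high-value" in text or "high value" in text:
        filters["lifetime_value"] = {"gt": 1000}
    if "newsletter" in text:
        filters["newsletter_opt_in"] = True
    return filters

seg = parse_segment("high-value lapsed customers on the newsletter")
print(seg)
```

The point of the sketch is the interface, not the parser: marketers phrase a segment in plain language, and the system emits structured filters that downstream tooling can execute.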

AI/ML GenAI Marketing Snowflake
Snowflake World Tour Amsterdam
Adnan Hodzic – Lead Engineer and GenAI Delivery Lead @ ING , Yuliia Tkachova – host @ Masthead Data

Adnan Hodzic, Lead Engineer and GenAI Delivery Lead at ING, joined Yuliia to discuss how ING successfully scaled generative AI from experimentation to enterprise production. With over 60 GenAI applications now running in production across the bank, Adnan explains ING's pragmatic approach: building internal AI platforms that balance innovation speed with regulatory compliance, treating European banking regulations as features rather than constraints, and fostering a culture where 300+ experiments can safely run while only the best reach production. He discusses the critical role of their Prompt Flow Studio in democratizing AI development, why customer success teams saw immediate productivity gains, how ING structures AI governance without killing innovation, and his perspective on the hype cycle versus real enterprise value.

Adnan's blog: https://foolcontrol.org
Adnan's YouTube channel: https://www.youtube.com/AdnanHodzic
LinkedIn: https://linkedin.com/in/AdnanHodzic
Twitter/X: https://twitter.com/fooctrl

AI/ML GenAI
Straight Data Talk

A practical guide for data scientists and engineers - Hugo Bowne-Anderson

As AI moves from experimentation to real-world impact, the challenges are no longer just technical. They’re about design, evaluation, and collaboration. In this episode, Hugo will share his perspective on how teams and individuals can build AI responsibly, work effectively across disciplines, and keep learning as the field continues to change.

He’ll cover:

  • When (and when not) to build AI agents
  • Using AI for coding vs. building software with LLMs
  • The AI software development lifecycle and escaping “PoC purgatory”
  • What happens to data science in the age of AI

About the Speaker

Hugo Bowne-Anderson is an independent data and AI consultant with extensive experience in the tech industry. He has advised and taught teams building AI-powered systems, including engineers from Netflix, Meta, and the U.S. Air Force. He is the host of Vanishing Gradients and High Signal, podcasts exploring developments in data science and AI.

Previously, Hugo served as Head of Developer Relations at Outerbounds and held roles at Coiled and DataCamp, where his work in data science education reached over 6 million learners. He has taught at Yale University, Cold Spring Harbor Laboratory, and conferences like SciPy and PyCon, and is a passionate advocate for democratizing data skills and open-source tools. He also regularly teaches courses on Building LLM Applications for Data Scientists and Software Engineers.

Join our slack: https://datatalks.club/slack.html

How to Build and Evaluate AI systems in the Age of LLMs
Rohan Thakur – Director of Analytics @ Collectors

The session details how our lean central data team has achieved significant output by:

  • Deploying the optimal tools (dbt and Lightdash) that supercharge DevEx
  • Democratizing dbt development effectively
  • Leveraging AI-driven development
  • Effectively using Kanban prioritization

AI/ML dbt Lightdash
dbt Coalesce 2025

This session will focus on how organizations are extracting significant business value by democratizing their data and optimizing resources through the Snowflake AI Data Cloud. The first part of the presentation will showcase how Snowflake helps customers craft compelling value stories for diverse AI use cases and strategic migrations, alongside best practices for optimizing cloud spend. The second part will feature a conversation highlighting how a leading enterprise overcame the common challenges of data silos and dashboard sprawl by simplifying processes with Snowflake AI capabilities. Attendees will learn actionable strategies for accelerating their AI journey and achieving measurable impact.

AI/ML Cloud Computing Dashboard Snowflake
Snowflake World Tour London

This project develops an enterprise-grade AI platform that automates the extraction of ESG data, regulatory compliance checks, and peer benchmarking for companies. Utilizing NLP and machine learning, the system converts unstructured sustainability reports into standardized metrics, facilitating real-time compliance monitoring and competitive intelligence across various industries.

Business Impact: Targets the rapidly growing ESG software market, serving investment firms, consulting companies, and institutional investors requiring automated analysis for portfolio decisions and regulatory compliance.
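The "unstructured report text to standardized metric" step can be illustrated in a few lines. A production system would use NLP models for this; the regex below, the sample report sentence, and the metric name are all hypothetical stand-ins that only capture the shape of the pipeline.

```python
# Toy sketch of ESG metric extraction: pull a standardized number out
# of free-form sustainability-report text. A regex stands in for the
# NLP/ML extraction the project describes; the report text is made up.
import re

REPORT = "In 2024 we reduced Scope 1 emissions to 12,500 tCO2e and water use by 8%."

def extract_scope1_emissions(text: str):
    """Return Scope 1 emissions in tCO2e as an int, or None if absent."""
    m = re.search(r"Scope 1 emissions to ([\d,]+)\s*tCO2e", text)
    return int(m.group(1).replace(",", "")) if m else None

print(extract_scope1_emissions(REPORT))
```

Once every report yields the same named metrics in the same units, cross-company benchmarking and compliance checks become simple table operations.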

NLP machine learning esg data extraction
Data Science Retreat Demo Day #43
Victory Uchenna – Enterprise Solutions Architect @ Amazon Web Services

Data is one of the most valuable assets in any organisation, but accessing and analysing it has been limited to technical experts. Business users often rely on predefined dashboards and data teams to extract insights, creating bottlenecks and slowing decision-making.

This is changing with the rise of Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG). These technologies are redefining how organisations interact with data, allowing users to ask complex questions in natural language and receive accurate, real-time insights without needing deep technical expertise.

In this session, I’ll explore how LLMs and RAG are driving true data democratisation: making analytics accessible to everyone, enabling real-time insights with AI-powered search and retrieval, and overcoming traditional barriers like SQL, BI tool complexity, and rigid reporting structures.
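The retrieval step that makes RAG work can be sketched compactly: rank documents by similarity to a question, then pack the best match into the prompt an LLM will answer from. Bag-of-words cosine similarity below stands in for a real embedding model, and the two documents are invented for illustration.

```python
# Minimal sketch of RAG retrieval: score documents against a question
# and assemble a context-grounded prompt. Bag-of-words vectors stand
# in for embeddings; a real system would use an embedding model.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Q3 revenue grew 12 percent year over year",
    "The onboarding guide covers SSO configuration",
]
question = "how much did revenue grow in Q3"

vecs = [Counter(d.lower().split()) for d in docs]
qvec = Counter(question.lower().split())
best = max(range(len(docs)), key=lambda i: cosine(qvec, vecs[i]))

prompt = f"Answer using this context:\n{docs[best]}\n\nQuestion: {question}"
print(best)  # index of the retrieved document
```

The democratization claim rests on exactly this loop: the user types a plain-language question, and retrieval plus generation replace the SQL and dashboard work that previously required a data team.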

AI/ML Analytics BI Data Analytics LLM RAG SQL
Big Data LDN 2025
Event Data Expo NL 2025 2025-09-11
Steven Nooijen @ Xebia , Stefan Bakker @ Zilveren Kruis

Generative AI and AI tools are democratizing data and blurring lines between business, data, and IT, making traditional operating models obsolete. Zilveren Kruis, the largest health insurer in the Netherlands, is modernizing by enabling self-service, implementing AI productivity suites, fostering collaboration across departments, and redefining data roles to drive rapid, compliant innovation.

AI/ML GenAI

To fully unlock the potential of AI within KPN, scaling is key. Therefore KPN focuses on 4 pillars: AI Literacy, Governance, end-to-end implementation with business, IT, data and AI, and the expansion of our technical infrastructure. Together, these elements support the democratization of AI capabilities across the organization. With the emergence of Generative AI—especially Agentic AI—broad enablement has become even more critical. In this session, KPN will share organizational opportunities and challenges related to AI adoption at scale, and how it utilizes Dataiku as the central Data Science platform to drive this transformation.

AI/ML Data Science Dataiku GenAI
Kristen Scotti – CMU/STEM librarian @ Carnegie Mellon University

Hands-on workshop guided by Kristen Scotti that explores using AI chatbots to learn coding in Python, debugging, optimizing, and understanding code, with emphasis on responsible and effective use of AI as an on-demand tutoring aid.

Python chatgpt gemini copilot
Aug Event | Python for All: Democratizing Coding Mastery with AI Chatbot Support

Dear Data Wizards,

We are looking forward to inviting all of you to our next meetup.

Topics

  • What's New - Kristian
  • BI for all: How GenAI is democratizing data in Power BI and Fabric - Roman

The session will be recorded and made available on YouTube --> https://aka.ms/FabricUGYouTube

BI for all: How GenAI is democratizing data in Power BI and Fabric

With the rapid pace of announcements around Gen AI in Power BI and Microsoft Fabric, it’s easy to feel left behind. In this session, Roman will walk attendees through the latest capabilities, starting from scratch and ramping up quickly. He’ll demonstrate how to use Copilot and natural language features to boost productivity, simplify data exploration, and stay up to date. Whether you're curious about asking questions in plain English or just want a hands-on overview of what’s new, Roman will guide you through a practical tour of the Gen AI-powered experience in Power BI and Fabric.

Good to know We want this group to be a safe environment that encourages open discussion, exchange of ideas and problems you may face. Therefore, we kindly ask that no members will leverage the information for unsolicited acquisitions of new customers or projects. This group builds on trust, and without it we cannot learn from each other and excel on this topic.

Want to be a presenter? We are always looking for new speakers. If you are interested and would like to show something to the Power BI Meetup Group, please feel free to contact us!

Zürich - 73rd Fabric User Group [ONLINE]
July 24 - Women in AI 2025-07-24 · 16:00

Hear talks from experts on cutting-edge topics in AI, ML, and computer vision!

When

Jul 24, 2025 at 9 - 11 AM Pacific

Where

Online. Register for the Zoom

Exploring Vision-Language-Action (VLA) Models: From LLMs to Embodied AI

This talk will explore the evolution of foundation models, highlighting the shift from large language models (LLMs) to vision-language models (VLMs), and now to vision-language-action (VLA) models. We'll dive into the emerging field of robot instruction following—what it means, and how recent research is shaping its future. I will present insights from my 2024 work on natural language-based robot instruction following and connect it to more recent advancements driving progress in this domain.

About the Speaker

Shreya Sharma is a Research Engineer at Reality Labs, Meta, where she works on photorealistic human avatars for AR/VR applications. She holds a bachelor’s degree in Computer Science from IIT Delhi and a master’s in Robotics from Carnegie Mellon University. Shreya is also a member of the inaugural 2023 cohort of the Quad Fellowship. Her research interests lie at the intersection of robotics and vision foundation models.

Farming with CLIP: Foundation Models for Biodiversity and Agriculture

Using open-source tools, we will explore the power and limitations of foundation models in agriculture and biodiversity applications. Leveraging the BIOTROVE dataset, the largest publicly accessible biodiversity dataset curated from iNaturalist, we will showcase real-world use cases powered by vision-language models trained on 40 million captioned images. We focus on understanding zero-shot capabilities, taxonomy-aware evaluation, and data-centric curation workflows.

We will demonstrate how to visualize, filter, evaluate, and augment data at scale. This session includes practical walkthroughs on embedding visualization with CLIP, dataset slicing by taxonomic hierarchy, identification of model failure modes, and building fine-tuned pest and crop monitoring models. Attendees will gain insights into how to apply multi-modal foundation models for critical challenges in agriculture, like ecosystem monitoring in farming.
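The zero-shot capability mentioned above can be shown in miniature: CLIP-style classification scores an image embedding against text-label embeddings and takes the argmax. The 3-dimensional vectors and pest/crop labels below are invented for illustration; real CLIP embeddings have hundreds of dimensions.

```python
# CLIP-style zero-shot classification in miniature: compare an image
# embedding to "a photo of a {label}" text embeddings by cosine
# similarity and pick the best label. All vectors here are made up.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

label_embeddings = {            # hypothetical text embeddings
    "aphid":        [0.9, 0.1, 0.0],
    "ladybug":      [0.1, 0.9, 0.1],
    "healthy leaf": [0.0, 0.2, 0.9],
}
image_embedding = [0.85, 0.15, 0.05]   # hypothetical crop-photo embedding

scores = {lbl: cosine(image_embedding, v) for lbl, v in label_embeddings.items()}
prediction = max(scores, key=scores.get)
print(prediction)
```

Because the labels are just text, swapping in a new taxonomy (say, a different pest list) requires no retraining, which is what makes this approach attractive for biodiversity monitoring.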

About the Speaker

Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia. During her PhD and postdoc research, she deployed multiple low-cost, smart edge and IoT computing technologies that farmers can operate without expertise in computer vision systems. The central objective of Paula’s research has been to develop intelligent systems/machines that can understand and recreate the visual world around us to solve real-world needs, such as those in the agricultural industry.

Multi-modal AI in Medical Edge and Client Device Computing

In this live demo, we explore the transformative potential of multi-modal AI in medical edge and client device computing, focusing on real-time inference on a local AI PC. Attendees will witness how users can upload medical images, such as X-Rays, and ask questions about the images to the AI model. Inference is executed locally on Intel's integrated GPU and NPU using OpenVINO, enabling developers without deep AI experience to create generative AI applications.

About the Speaker

Helena Klosterman is an AI Engineer at Intel. Based in the Netherlands, Helena enables organizations to unlock the potential of AI with OpenVINO, Intel's AI inference runtime. She is passionate about democratizing AI, developer experience, and bridging the gap between complex AI technology and practical applications.

The Business of AI

The talk will focus on the importance of clearly defining a specific problem and a use case, how to quantify the potential benefits of an AI solution in terms of measurable outcomes, evaluating technical feasibility in terms of technical challenges and limitations of implementing an AI solution, and envisioning the future of enterprise AI.

About the Speaker

Milica Cvetkovic is an AI engineer and consultant driving the development and deployment of production-ready AI systems for diverse organizations. Her expertise spans custom machine learning, generative AI, and AI operationalization. With degrees in mathematics and statistics, she possesses a decade of experience in education and edtech, including curriculum design and machine learning instruction for technical and non-technical audiences. Prior to Google, Milica held a data scientist role in biotechnology and has a proven track record of advising startups, demonstrating a deep understanding of AI's practical application.

July 24 - Women in AI
July 24 - Women in AI 2025-07-24 · 16:00

Hear talks from experts on cutting-edge topics in AI, ML, and computer vision!

When

Jul 24, 2025 at 9 - 11 AM Pacific

Where

Online. Register for the Zoom

Exploring Vision-Language-Action (VLA) Models: From LLMs to Embodied AI

This talk will explore the evolution of foundation models, highlighting the shift from large language models (LLMs) to vision-language models (VLMs), and now to vision-language-action (VLA) models. We'll dive into the emerging field of robot instruction following—what it means, and how recent research is shaping its future. I will present insights from my 2024 work on natural language-based robot instruction following and connect it to more recent advancements driving progress in this domain.

About the Speaker

Shreya Sharma is a Research Engineer at Reality Labs, Meta, where she works on photorealistic human avatars for AR/VR applications. She holds a bachelor’s degree in Computer Science from IIT Delhi and a master’s in Robotics from Carnegie Mellon University. Shreya is also a member of the inaugural 2023 cohort of the Quad Fellowship. Her research interests lie at the intersection of robotics and vision foundation models.

Farming with CLIP: Foundation Models for Biodiversity and Agriculture

Using open-source tools, we will explore the power and limitations of foundation models in agriculture and biodiversity applications. Leveraging the BIOTROVE dataset. The largest publicly accessible biodiversity dataset curated from iNaturalist, we will showcase real-world use cases powered by vision-language models trained on 40 million captioned images. We focus on understanding zero-shot capabilities, taxonomy-aware evaluation, and data-centric curation workflows.

We will demonstrate how to visualize, filter, evaluate, and augment data at scale. This session includes practical walkthroughs on embedding visualization with CLIP, dataset slicing by taxonomic hierarchy, identification of model failure modes, and building fine-tuned pest and crop monitoring models. Attendees will gain insights into how to apply multi-modal foundation models for critical challenges in agriculture, like ecosystem monitoring in farming.

About the Speaker

Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia. During her PhD and Postdoc research, she deployed multiple low-cost, smart edge & IoT computing technologies, such as farmers, that can be operated without expertise in computer vision systems. The central objective of Paula’s research has been to develop intelligent systems/machines that can understand and recreate the visual world around us to solve real-world needs, such as those in the agricultural industry.

Multi-modal AI in Medical Edge and Client Device Computing

In this live demo, we explore the transformative potential of multi-modal AI in medical edge and client device computing, focusing on real-time inference on a local AI PC. Attendees will witness how users can upload medical images, such as X-Rays, and ask questions about the images to the AI model. Inference is executed locally on Intel's integrated GPU and NPU using OpenVINO, enabling developers without deep AI experience to create generative AI applications.

About the Speaker

Helena Klosterman is an AI Engineer at Intel, based in the Netherlands, Helena enables organizations to unlock the potential of AI with OpenVINO, Intel's AI inference runtime. She is passionate about democratizing AI, developer experience, and bridging the gap between complex AI technology and practical applications.

The Business of AI

The talk will focus on the importance of clearly defining a specific problem and a use case, how to quantify the potential benefits of an AI solution in terms of measurable outcomes, evaluating technical feasibility in terms of technical challenges and limitations of implementing an AI solution, and envisioning the future of enterprise AI.

About the Speaker

Milica Cvetkovic is an AI engineer and consultant driving the development and deployment of production-ready AI systems for diverse organizations. Her expertise spans custom machine learning, generative AI, and AI operationalization. With degrees in mathematics and statistics, she possesses a decade of experience in education and edtech, including curriculum design and machine learning instruction for technical and non-technical audiences. Prior to Google, Milica held a data scientist role in biotechnology and has a proven track record of advising startups, demonstrating a deep understanding of AI's practical application.

July 24 - Women in AI
July 24 - Women in AI 2025-07-24 · 16:00

Hear talks from experts on cutting-edge topics in AI, ML, and computer vision!

When

Jul 24, 2025 at 9 - 11 AM Pacific

Where

Online. Register for the Zoom

Exploring Vision-Language-Action (VLA) Models: From LLMs to Embodied AI

This talk will explore the evolution of foundation models, highlighting the shift from large language models (LLMs) to vision-language models (VLMs), and now to vision-language-action (VLA) models. We'll dive into the emerging field of robot instruction following—what it means, and how recent research is shaping its future. I will present insights from my 2024 work on natural language-based robot instruction following and connect it to more recent advancements driving progress in this domain.

About the Speaker

Shreya Sharma is a Research Engineer at Reality Labs, Meta, where she works on photorealistic human avatars for AR/VR applications. She holds a bachelor’s degree in Computer Science from IIT Delhi and a master’s in Robotics from Carnegie Mellon University. Shreya is also a member of the inaugural 2023 cohort of the Quad Fellowship. Her research interests lie at the intersection of robotics and vision foundation models.

Farming with CLIP: Foundation Models for Biodiversity and Agriculture

Using open-source tools, we will explore the power and limitations of foundation models in agriculture and biodiversity applications. Leveraging the BIOTROVE dataset. The largest publicly accessible biodiversity dataset curated from iNaturalist, we will showcase real-world use cases powered by vision-language models trained on 40 million captioned images. We focus on understanding zero-shot capabilities, taxonomy-aware evaluation, and data-centric curation workflows.

We will demonstrate how to visualize, filter, evaluate, and augment data at scale. This session includes practical walkthroughs on embedding visualization with CLIP, dataset slicing by taxonomic hierarchy, identification of model failure modes, and building fine-tuned pest and crop monitoring models. Attendees will gain insights into how to apply multi-modal foundation models for critical challenges in agriculture, like ecosystem monitoring in farming.

About the Speaker

Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia. During her PhD and Postdoc research, she deployed multiple low-cost, smart edge & IoT computing technologies, such as farmers, that can be operated without expertise in computer vision systems. The central objective of Paula’s research has been to develop intelligent systems/machines that can understand and recreate the visual world around us to solve real-world needs, such as those in the agricultural industry.

Multi-modal AI in Medical Edge and Client Device Computing

In this live demo, we explore the transformative potential of multi-modal AI in medical edge and client device computing, focusing on real-time inference on a local AI PC. Attendees will witness how users can upload medical images, such as X-Rays, and ask questions about the images to the AI model. Inference is executed locally on Intel's integrated GPU and NPU using OpenVINO, enabling developers without deep AI experience to create generative AI applications.

About the Speaker

Helena Klosterman is an AI Engineer at Intel, based in the Netherlands, Helena enables organizations to unlock the potential of AI with OpenVINO, Intel's AI inference runtime. She is passionate about democratizing AI, developer experience, and bridging the gap between complex AI technology and practical applications.

The Business of AI

The talk will focus on the importance of clearly defining a specific problem and a use case, how to quantify the potential benefits of an AI solution in terms of measurable outcomes, evaluating technical feasibility in terms of technical challenges and limitations of implementing an AI solution, and envisioning the future of enterprise AI.

About the Speaker

Milica Cvetkovic is an AI engineer and consultant driving the development and deployment of production-ready AI systems for diverse organizations. Her expertise spans custom machine learning, generative AI, and AI operationalization. With degrees in mathematics and statistics, she possesses a decade of experience in education and edtech, including curriculum design and machine learning instruction for technical and non-technical audiences. Prior to Google, Milica held a data scientist role in biotechnology and has a proven track record of advising startups, demonstrating a deep understanding of AI's practical application.

July 24 - Women in AI
July 24 - Women in AI 2025-07-24 · 16:00

Hear talks from experts on cutting-edge topics in AI, ML, and computer vision!

When

Jul 24, 2025 at 9 - 11 AM Pacific

Where

Online. Register for the Zoom

Exploring Vision-Language-Action (VLA) Models: From LLMs to Embodied AI

This talk will explore the evolution of foundation models, highlighting the shift from large language models (LLMs) to vision-language models (VLMs), and now to vision-language-action (VLA) models. We'll dive into the emerging field of robot instruction following—what it means, and how recent research is shaping its future. I will present insights from my 2024 work on natural language-based robot instruction following and connect it to more recent advancements driving progress in this domain.

About the Speaker

Shreya Sharma is a Research Engineer at Reality Labs, Meta, where she works on photorealistic human avatars for AR/VR applications. She holds a bachelor’s degree in Computer Science from IIT Delhi and a master’s in Robotics from Carnegie Mellon University. Shreya is also a member of the inaugural 2023 cohort of the Quad Fellowship. Her research interests lie at the intersection of robotics and vision foundation models.

Farming with CLIP: Foundation Models for Biodiversity and Agriculture

Using open-source tools, we will explore the power and limitations of foundation models in agriculture and biodiversity applications. Leveraging the BIOTROVE dataset. The largest publicly accessible biodiversity dataset curated from iNaturalist, we will showcase real-world use cases powered by vision-language models trained on 40 million captioned images. We focus on understanding zero-shot capabilities, taxonomy-aware evaluation, and data-centric curation workflows.

We will demonstrate how to visualize, filter, evaluate, and augment data at scale. This session includes practical walkthroughs on embedding visualization with CLIP, dataset slicing by taxonomic hierarchy, identification of model failure modes, and building fine-tuned pest and crop monitoring models. Attendees will gain insights into how to apply multi-modal foundation models for critical challenges in agriculture, like ecosystem monitoring in farming.

About the Speaker

Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia. During her PhD and Postdoc research, she deployed multiple low-cost, smart edge & IoT computing technologies, such as farmers, that can be operated without expertise in computer vision systems. The central objective of Paula’s research has been to develop intelligent systems/machines that can understand and recreate the visual world around us to solve real-world needs, such as those in the agricultural industry.

Multi-modal AI in Medical Edge and Client Device Computing

In this live demo, we explore the transformative potential of multi-modal AI in medical edge and client device computing, focusing on real-time inference on a local AI PC. Attendees will witness how users can upload medical images, such as X-Rays, and ask questions about the images to the AI model. Inference is executed locally on Intel's integrated GPU and NPU using OpenVINO, enabling developers without deep AI experience to create generative AI applications.

About the Speaker

Helena Klosterman is an AI Engineer at Intel, based in the Netherlands, Helena enables organizations to unlock the potential of AI with OpenVINO, Intel's AI inference runtime. She is passionate about democratizing AI, developer experience, and bridging the gap between complex AI technology and practical applications.

The Business of AI

The talk will focus on the importance of clearly defining a specific problem and use case, quantifying the potential benefits of an AI solution in measurable outcomes, evaluating technical feasibility in light of the challenges and limitations of implementation, and envisioning the future of enterprise AI.

About the Speaker

Milica Cvetkovic is an AI engineer and consultant driving the development and deployment of production-ready AI systems for diverse organizations. Her expertise spans custom machine learning, generative AI, and AI operationalization. With degrees in mathematics and statistics, she has a decade of experience in education and edtech, including curriculum design and machine learning instruction for technical and non-technical audiences. Before joining Google, Milica was a data scientist in biotechnology, and she has a proven track record of advising startups, demonstrating a deep understanding of AI's practical application.
