Activities & events

RSVP here - https://hubs.li/Q03W2dHz0

Join Google Developer Relations Engineer Toni Klopfenstein for a deep dive into the Agent Development Kit (ADK)—Google's powerful, open-source, and modular framework for building sophisticated AI agents.

In today's fast-evolving landscape of Generative AI, developers need tools that offer control, flexibility, and scalability. This session will explore how the ADK helps you move beyond basic LLM prompts and embrace a code-first approach to agent development.

What you will learn:

- How to utilize ADK for robust, customizable, and debuggable AI solutions.
- Strategies for building complex, multi-agent architectures and workflows.
- Ways to integrate your agents with a rich ecosystem of tools and APIs.
- The future opportunities that ADK unlocks for your next-generation applications.

Don't miss this chance to learn directly from a Google expert and future-proof your development toolkit!
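For a flavor of the code-first approach the session describes, here is a minimal sketch of a single tool-calling agent, assuming ADK's Python package (google-adk); the tool, model name, and instructions are illustrative placeholders, not webinar material:

```python
# Minimal ADK sketch (assumes `pip install google-adk`); the weather tool
# and model choice are illustrative placeholders.
from google.adk.agents import Agent

def get_weather(city: str) -> dict:
    """Toy tool: return a canned forecast for a city."""
    return {"status": "success", "city": city, "forecast": "sunny, 22 C"}

# Agents are declared in code: model, instructions, and tools are explicit,
# which is what makes them customizable and debuggable.
root_agent = Agent(
    name="weather_agent",
    model="gemini-2.0-flash",
    description="Answers simple weather questions.",
    instruction="Use the get_weather tool to answer weather questions.",
    tools=[get_weather],
)
```

Multi-agent workflows then compose agents like this one, for example by passing them as sub_agents to a coordinating agent.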

And we will have one more speaker for a 10-minute partner talk: Dustin Shammo, Confluent Sr. Solutions Engineer.

With deep expertise in legacy systems, cloud computing, and analytics, Dustin empowers organizations to unlock the value of real-time data streaming. He holds a Master of Science from Boston University and is dedicated to guiding customers in building mission-critical systems that drive business innovation.

Useful Links

WEBINAR "The Power of ADK: Unlocking New Opportunities"
GenAI Demo Day Q4 2025-11-19 · 19:00

Join us for GenAI Demo Day—an exclusive virtual event showcasing the most innovative Generative AI solutions through concise 10-minute demos.

At DSC, we value your time and have designed this event to provide impactful demos without any unnecessary fluff. Quickly identify solutions that meet your business needs and connect directly with the founders and top leaders of pioneering GenAI companies.

What You'll Learn:

1️⃣ Innovative Solutions: Discover the latest advancements in Generative AI through targeted, high-impact demos.

2️⃣ Direct Engagement: Gain valuable insights and ask questions directly to the founders and top leadership of cutting-edge GenAI companies.

3️⃣ Enhanced Capabilities: Explore how these technologies can enhance your organization’s capabilities and drive innovation.

Don't miss this opportunity to experience the best of GenAI in a focused, efficient format tailored for decision-makers like you. We understand that everyone loves discovering new solutions, but nobody likes the hard sell.

Mark your calendars and get ready to dive into the future of AI!

Register Here

GenAI Demo Day Q4
Deepti Srivastava – Founder @ Snow Leopard AI, Al Martin – WW VP Technical Sales @ IBM

What if AI could tap into live operational data — without ETL or RAG? In this episode, Deepti Srivastava, founder of Snow Leopard, reveals how her company is transforming enterprise data access with intelligent data retrieval, semantic intelligence, and a governance-first approach. Tune in for a fresh perspective on the future of AI and the startup journey behind it.

We explore how companies are revolutionizing their data access and AI strategies. Deepti Srivastava, founder of Snow Leopard, shares her insights on bridging the gap between live operational data and generative AI — and how it's changing the game for enterprises worldwide. We dive into Snow Leopard's innovative approach to data retrieval, semantic intelligence, and governance-first architecture.

Chapters:
04:54 Meeting Deepti Srivastava
14:06 AI with No ETL, No RAG
17:11 Snow Leopard's Intelligent Data Fetching
19:00 Live Query Challenges
21:01 Snow Leopard's Secret Sauce
22:14 Latency
23:48 Schema Changes
25:02 Use Cases
26:06 Snow Leopard's Roadmap
29:16 Getting Started
33:30 The Startup Journey
34:12 A Woman in Technology
36:03 The Contrarian View

🔗 LinkedIn: https://www.linkedin.com/in/thedeepti/
🔗 Website: https://www.snowleopard.ai/

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

AI/ML ETL/ELT GenAI IBM RAG
Making Data Simple
GenAI Demo Day Q3 2025-09-10 · 18:00

Join us for GenAI Demo Day—an exclusive virtual event showcasing the most innovative Generative AI solutions through concise 10-minute demos.

At DSC, we value your time and have designed this event to provide impactful demos without any unnecessary fluff. Quickly identify solutions that meet your business needs and connect directly with the founders and top leaders of pioneering GenAI companies.

What You'll Learn:

1️⃣ Innovative Solutions: Discover the latest advancements in Generative AI through targeted, high-impact demos.

2️⃣ Direct Engagement: Gain valuable insights and ask questions directly to the founders and top leadership of cutting-edge GenAI companies.

3️⃣ Enhanced Capabilities: Explore how these technologies can enhance your organization’s capabilities and drive innovation.

Don't miss this opportunity to experience the best of GenAI in a focused, efficient format tailored for decision-makers like you. We understand that everyone loves discovering new solutions, but nobody likes the hard sell.

Mark your calendars and get ready to dive into the future of AI!

Register Here

GenAI Demo Day Q3

Fabric-Powered Retail AI: From Data Lakes to $Billion ROI & Personalization

Join Session Here: https://www.youtube.com/live/8_33kHuC6ho?feature=shared

The retail industry is experiencing an unprecedented AI revolution, with academic research publications exploding from just 12 articles in 2000 to over 847 articles in 2023—a staggering 21.3% compound annual growth rate that signals the transformative power of intelligent systems built on modern data platforms. This presentation reveals how leading retailers are leveraging Microsoft Fabric's unified analytics platform to process 150+ customer attributes simultaneously (versus 10-15 in traditional systems), achieving hyperpersonalization that drives 20-30% conversion rate increases and 34% higher click-through rates through context-aware recommendations. Deep learning-based recommendation systems now constitute 68% of retail AI research, with transformer-based architectures like BERT4Rec delivering 23-31% performance improvements when powered by scalable data lakehouse architectures.

Drawing from comprehensive analysis across 29 industries and 112 countries, we'll explore why only 10% of companies achieve significant AI ROI—and how Microsoft Fabric's integrated data platform accelerates success for the winners. The presentation will showcase real-world case studies demonstrating:

- Inventory optimization: 30-50% forecast accuracy improvements through generative AI models processing 20-50 feature variables in Fabric's lakehouse
- Operational excellence: logistics firms achieving 28% average productivity gains through real-time analytics
- Innovation acceleration: a 37% average increase in patent applications post-AI adoption
- Customer experience: 75% prediction accuracy enabling real-time personalization across unified data estates

However, success requires more than technology. Research reveals that 70% of AI success stems from organizational factors, with AI leaders investing 73% more in human-AI collaboration training compared to laggards (23%). We'll decode the five critical organizational learning practices that distinguish AI winners from losers, and how Fabric's collaborative workspace enables these practices at scale.

This data-driven session combines cutting-edge research insights with practical Microsoft Fabric implementation strategies, addressing both the $billion opportunities and the ethical considerations reshaping retail's future through unified analytics platforms.
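To make the wide-feature pattern concrete, here is a minimal, hypothetical sketch of reading a customer-feature table from a Fabric lakehouse with Spark and producing a toy context-aware ranking. The table name, columns, and weights are illustrative assumptions, not material from the talk:

```python
# Hypothetical sketch only: the table, columns, and scoring weights are
# illustrative, not taken from the presentation.
from pyspark.sql import SparkSession, functions as F

# In a Microsoft Fabric notebook, `spark` is already provided; the builder
# call just makes this snippet self-contained elsewhere.
spark = SparkSession.builder.getOrCreate()

# A wide customer-feature table (the "150+ attributes" pattern) stored in
# a Fabric lakehouse and exposed to Spark as a managed table.
features = spark.read.table("retail_lakehouse.customer_features")

# Toy context-aware ranking: blend long-term affinity with recent behavior.
top_customers = (
    features
    .withColumn("score", 0.7 * F.col("affinity_score") + 0.3 * F.col("recency_score"))
    .orderBy(F.desc("score"))
    .limit(10)
)
top_customers.show()
```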

Sunday Dive Into Tech With Experts - 2025

Explore AI and Machine Learning fundamentals, tools, and applications in this beginner-friendly guide. Learn to build models in Python and understand AI ethics.

Key Features:
- Covers AI fundamentals, Machine Learning, and Python model-building
- Provides a clear, step-by-step guide to learning AI techniques
- Explains ethical considerations and the future role of AI in society

Book Description: This book is an ideal starting point for anyone interested in Artificial Intelligence and Machine Learning. It begins with the foundational principles of AI, offering a deep dive into its history, building blocks, and the stages of development. Readers will explore key AI concepts and gradually transition to practical applications, starting with machine learning algorithms such as linear regression and k-nearest neighbors. Through step-by-step Python tutorials, the book helps readers build and implement models with hands-on experience.

As the book progresses, readers will dive into advanced AI topics like deep learning, natural language processing (NLP), and generative AI. Topics such as recommender systems and computer vision demonstrate the real-world applications of AI technologies. Ethical considerations and privacy concerns are also addressed, providing insight into the societal impact of these technologies. By the end of the book, readers will have a solid understanding of both the theory and practice of AI and Machine Learning. The final chapters provide resources for continued learning, ensuring that readers can continue to grow their AI expertise beyond the book.

What you will learn:
- Understand key AI and ML concepts and how they work together
- Build and apply machine learning models from scratch
- Use Python to implement AI techniques and improve model performance
- Explore essential AI tools and frameworks used in the industry
- Learn the importance of data and data preparation in AI development
- Grasp the ethical considerations and the future of AI in work

Who this book is for: This book is ideal for beginners with no prior knowledge of AI or Machine Learning. It is tailored to those who wish to dive into these topics but are not yet familiar with the terminology or techniques. There are no prerequisites, though basic programming knowledge can be helpful. The book caters to a wide audience, from students and hobbyists to professionals seeking to transition into AI roles. Readers should be enthusiastic about learning and exploring AI applications for the future.
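As a taste of the model-building tutorials described above, here is a minimal sketch in the book's spirit (not taken from its pages), fitting one of the named beginner algorithms, k-nearest neighbors, with scikit-learn:

```python
# Beginner-style example: k-nearest neighbors on a toy dataset.
# Assumes scikit-learn; the dataset choice is illustrative.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load a small labeled dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Fit the model and report held-out accuracy.
model = KNeighborsClassifier(n_neighbors=5)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```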

data ai-ml machine-learning AI/ML GenAI NLP Python
O'Reilly AI & ML Books
July 24 - Women in AI 2025-07-24 · 16:00

Hear talks from experts on cutting-edge topics in AI, ML, and computer vision!

When

Jul 24, 2025 at 9 - 11 AM Pacific

Where

Online. Register for the Zoom

Exploring Vision-Language-Action (VLA) Models: From LLMs to Embodied AI

This talk will explore the evolution of foundation models, highlighting the shift from large language models (LLMs) to vision-language models (VLMs), and now to vision-language-action (VLA) models. We'll dive into the emerging field of robot instruction following—what it means, and how recent research is shaping its future. I will present insights from my 2024 work on natural language-based robot instruction following and connect it to more recent advancements driving progress in this domain.

About the Speaker

Shreya Sharma is a Research Engineer at Reality Labs, Meta, where she works on photorealistic human avatars for AR/VR applications. She holds a bachelor’s degree in Computer Science from IIT Delhi and a master’s in Robotics from Carnegie Mellon University. Shreya is also a member of the inaugural 2023 cohort of the Quad Fellowship. Her research interests lie at the intersection of robotics and vision foundation models.

Farming with CLIP: Foundation Models for Biodiversity and Agriculture

Using open-source tools, we will explore the power and limitations of foundation models in agriculture and biodiversity applications. Leveraging the BIOTROVE dataset, the largest publicly accessible biodiversity dataset curated from iNaturalist, we will showcase real-world use cases powered by vision-language models trained on 40 million captioned images. We focus on understanding zero-shot capabilities, taxonomy-aware evaluation, and data-centric curation workflows.

We will demonstrate how to visualize, filter, evaluate, and augment data at scale. This session includes practical walkthroughs on embedding visualization with CLIP, dataset slicing by taxonomic hierarchy, identification of model failure modes, and building fine-tuned pest and crop monitoring models. Attendees will gain insights into how to apply multi-modal foundation models for critical challenges in agriculture, like ecosystem monitoring in farming.
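As a hedged illustration of the zero-shot workflow described above (not the session's actual materials), here is a minimal CLIP sketch using the Hugging Face transformers API; the labels and the stand-in image are invented for the example:

```python
# Illustrative only: zero-shot CLIP classification; the session's own
# tooling and data (e.g., the BIOTROVE dataset) may differ.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Stand-in for a real field photo; swap in Image.open("your_photo.jpg").
image = Image.new("RGB", (224, 224), color="green")

# Hypothetical agricultural labels for zero-shot scoring.
labels = ["a healthy maize leaf", "a maize leaf with rust", "fall armyworm damage"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.2f}")
```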

About the Speaker

Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia. During her PhD and postdoc research, she deployed multiple low-cost, smart edge and IoT computing technologies that can be operated by users without expertise in computer vision systems, such as farmers. The central objective of Paula's research has been to develop intelligent systems/machines that can understand and recreate the visual world around us to solve real-world needs, such as those in the agricultural industry.

Multi-modal AI in Medical Edge and Client Device Computing

In this live demo, we explore the transformative potential of multi-modal AI in medical edge and client device computing, focusing on real-time inference on a local AI PC. Attendees will witness how users can upload medical images, such as X-Rays, and ask questions about the images to the AI model. Inference is executed locally on Intel's integrated GPU and NPU using OpenVINO, enabling developers without deep AI experience to create generative AI applications.
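For a flavor of the local-inference pattern described above, here is a hedged sketch using OpenVINO's Python API; the model file is a placeholder for a model you have converted, and the device query is the only part guaranteed to run on any machine:

```python
# Sketch of local inference with OpenVINO; "medical_vlm.xml" is a
# placeholder path, not an actual artifact from the demo.
import openvino as ov

core = ov.Core()
# Lists local accelerators, e.g. ['CPU', 'GPU', 'NPU'] on a recent AI PC.
print("Available devices:", core.available_devices)

# Load a model in OpenVINO IR format and compile it for the integrated GPU
# (or "NPU"); replace the path with your own converted model.
model = core.read_model("medical_vlm.xml")
compiled = core.compile_model(model, device_name="GPU")

# Inference then runs fully on-device; input names depend on the model:
# result = compiled({"pixel_values": preprocessed_xray})
```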

About the Speaker

Helena Klosterman is an AI Engineer at Intel, based in the Netherlands. She enables organizations to unlock the potential of AI with OpenVINO, Intel's AI inference runtime. She is passionate about democratizing AI, developer experience, and bridging the gap between complex AI technology and practical applications.

The Business of AI

The talk will focus on the importance of clearly defining a specific problem and use case, how to quantify the potential benefits of an AI solution in measurable outcomes, how to evaluate technical feasibility given the challenges and limitations of implementing an AI solution, and how to envision the future of enterprise AI.

About the Speaker

Milica Cvetkovic is an AI engineer and consultant driving the development and deployment of production-ready AI systems for diverse organizations. Her expertise spans custom machine learning, generative AI, and AI operationalization. With degrees in mathematics and statistics, she possesses a decade of experience in education and edtech, including curriculum design and machine learning instruction for technical and non-technical audiences. Prior to Google, Milica held a data scientist role in biotechnology and has a proven track record of advising startups, demonstrating a deep understanding of AI's practical application.

July 24 - Women in AI

June is Neo4j Certification Month in the Neo4j Community, a perfect time to level up your graph skills and explore the future of intelligent applications.

Join us for a special livestream AMA with Siddhant Agarwal, Neo4j Developer Relations Community Leader and author of the newly published Building Neo4j-Powered Applications with LLMs.

In this session, we’ll dive into how developers can:

  • Get started with graph databases and Neo4j
  • Integrate large language models (LLMs) and generative AI into real-world apps
  • Take the next step toward Neo4j Certification

Whether you're just beginning your graph journey or ready to architect AI-powered solutions, this is your opportunity to ask questions, get expert insight, and find your path to becoming a certified graph practitioner.
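As a minimal taste of the first two bullets above (an illustrative sketch, not material from the book or the AMA), here is how a Python app connects to Neo4j and runs a Cypher query; the URI, credentials, and the Person/Movie graph are assumptions:

```python
# Hedged example: querying Neo4j from Python with the official driver.
# Connection details and the schema are illustrative placeholders.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    result = session.run(
        "MATCH (p:Person)-[:ACTED_IN]->(m:Movie) "
        "RETURN p.name AS actor, m.title AS movie LIMIT 5"
    )
    for record in result:
        print(record["actor"], "->", record["movie"])

driver.close()
```

From here, LLM integration typically layers a framework such as LangChain's Neo4j support on top of the same driver connection, which is the territory the book covers.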

AMA w/ author Siddhant Agarwal, on Building Neo4j-Powered Applications with LLMs
