talk-data.com
Activities & events
| Title & Speakers | Event |
|---|---|
|
July 24 - Women in AI
2025-07-24 · 16:00
Hear talks from experts on cutting-edge topics in AI, ML, and computer vision! When: Jul 24, 2025, 9-11 AM Pacific. Where: Online; register for the Zoom link.

Exploring Vision-Language-Action (VLA) Models: From LLMs to Embodied AI
This talk will explore the evolution of foundation models, highlighting the shift from large language models (LLMs) to vision-language models (VLMs), and now to vision-language-action (VLA) models. We'll dive into the emerging field of robot instruction following—what it means, and how recent research is shaping its future. I will present insights from my 2024 work on natural-language-based robot instruction following and connect it to more recent advancements driving progress in this domain.
About the Speaker: Shreya Sharma is a Research Engineer at Reality Labs, Meta, where she works on photorealistic human avatars for AR/VR applications. She holds a bachelor’s degree in Computer Science from IIT Delhi and a master’s in Robotics from Carnegie Mellon University. Shreya is also a member of the inaugural 2023 cohort of the Quad Fellowship. Her research interests lie at the intersection of robotics and vision foundation models.

Farming with CLIP: Foundation Models for Biodiversity and Agriculture
Using open-source tools, we will explore the power and limitations of foundation models in agriculture and biodiversity applications. Leveraging the BIOTROVE dataset, the largest publicly accessible biodiversity dataset curated from iNaturalist, we will showcase real-world use cases powered by vision-language models trained on 40 million captioned images. We focus on zero-shot capabilities, taxonomy-aware evaluation, and data-centric curation workflows, and we will demonstrate how to visualize, filter, evaluate, and augment data at scale. This session includes practical walkthroughs on embedding visualization with CLIP, dataset slicing by taxonomic hierarchy, identification of model failure modes, and building fine-tuned pest and crop monitoring models (a minimal CLIP sketch follows this row). Attendees will gain insights into applying multi-modal foundation models to critical challenges in agriculture, such as ecosystem monitoring in farming.
About the Speaker: Paula Ramos has a PhD in Computer Vision and Machine Learning and more than 20 years of experience in the technology field. She has been developing novel integrated engineering technologies, mainly in computer vision, robotics, and machine learning applied to agriculture, since the early 2000s in Colombia. During her PhD and postdoc research, she deployed multiple low-cost smart edge and IoT computing technologies that can be operated by users such as farmers, without expertise in computer vision systems. The central objective of Paula’s research has been to develop intelligent systems/machines that can understand and recreate the visual world around us to solve real-world needs, such as those in the agricultural industry.

Multi-modal AI in Medical Edge and Client Device Computing
In this live demo, we explore the transformative potential of multi-modal AI in medical edge and client device computing, focusing on real-time inference on a local AI PC. Attendees will see how users can upload medical images, such as X-rays, and ask the AI model questions about them. Inference is executed locally on Intel's integrated GPU and NPU using OpenVINO, enabling developers without deep AI experience to create generative AI applications.
About the Speaker: Helena Klosterman is an AI Engineer at Intel, based in the Netherlands. Helena enables organizations to unlock the potential of AI with OpenVINO, Intel's AI inference runtime. She is passionate about democratizing AI, developer experience, and bridging the gap between complex AI technology and practical applications.

The Business of AI
The talk will focus on the importance of clearly defining a specific problem and use case, how to quantify the potential benefits of an AI solution in measurable outcomes, how to evaluate technical feasibility given the challenges and limitations of implementing an AI solution, and the future of enterprise AI.
About the Speaker: Milica Cvetkovic is an AI engineer and consultant driving the development and deployment of production-ready AI systems for diverse organizations. Her expertise spans custom machine learning, generative AI, and AI operationalization. With degrees in mathematics and statistics, she has a decade of experience in education and edtech, including curriculum design and machine learning instruction for technical and non-technical audiences. Prior to Google, Milica held a data scientist role in biotechnology and has a proven track record of advising startups, demonstrating a deep understanding of AI's practical application. |
July 24 - Women in AI
|
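The "Farming with CLIP" session above centers on zero-shot use of CLIP-family vision-language models. As a rough illustration of that workflow, here is a minimal sketch of CLIP zero-shot classification using the Hugging Face transformers API; the checkpoint, image path, and crop/pest labels are illustrative assumptions, not details from the talk.

```python
# Minimal CLIP zero-shot classification sketch (assumed details flagged below).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("field_photo.jpg")  # hypothetical input image
labels = [  # illustrative labels, not from the BIOTROVE taxonomy
    "a photo of a healthy maize leaf",
    "a photo of a maize leaf with pest damage",
    "a photo of a coffee cherry",
]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity logits, softmaxed into zero-shot label scores.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{p:.3f}  {label}")
```

The same forward pass also yields `outputs.image_embeds` and `outputs.text_embeds`, which could be projected with UMAP or t-SNE for the kind of embedding-visualization walkthrough the session describes.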
|
Building Production-Ready AI Agents With Couchbase and Nebius AI
2025-07-02 · 09:00
Join Couchbase and Nebius AI for an interactive session exploring how to design, build, and scale AI agents that are ready for production. Whether you're just getting started with AI agents or looking to enhance your current workflows, this event will walk you through the core concepts, critical components, and real-world best practices needed for success. When you RSVP, check back before the event for the link to join. What You’ll Learn:
Whether you're a developer, data architect, or innovation leader, this session will provide actionable insights and hands-on examples to accelerate your AI journey. Who Should Attend: AI/ML engineers, solution architects, data platform teams, and product leaders exploring intelligent automation and generative AI capabilities. Speakers:
|
Building Production-Ready AI Agents With Couchbase and Nebius AI
|
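The session above does not publish its code, but the core loop of a production agent is small. Below is a vendor-neutral, minimal sketch of the plan/act/observe cycle such sessions typically cover; the `llm()` stub and the single tool are hypothetical stand-ins, not Couchbase or Nebius APIs.

```python
# Minimal agent-loop sketch: the model proposes an action, the runtime
# executes a tool, and the observation is fed back into the conversation.
import json

def llm(messages):
    # Stand-in for a chat-completion call to your model provider.
    # A real agent would parse a structured tool call from the reply.
    return json.dumps({"tool": "search_docs", "arg": "agent memory patterns"})

TOOLS = {
    "search_docs": lambda arg: f"top result for {arg!r}",  # hypothetical tool
}

def run_agent(task, max_steps=3):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = json.loads(llm(messages))
        observation = TOOLS[action["tool"]](action["arg"])
        messages.append({"role": "tool", "content": observation})
        # A production loop would also check a stop condition from the model.
    return messages

print(run_agent("Summarize best practices for agent memory"))
```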
|
#226 Creating Custom LLMs with Vincent Granville, Founder, CEO & Chief AI Scientist at GenAItechLab.com
2024-07-15 · 10:00
Vincent Granville – Founder, CEO & Chief AI Scientist @ GenAItechLab.com
Richie – Host @ DataCamp
Despite GPT, Claude, Gemini, Llama, and the host of other LLMs we have access to, a variety of organizations are still exploring their options when it comes to custom LLMs. Logging in to ChatGPT is easy enough, and so is creating a 'custom' OpenAI GPT, but what does it take to create a truly custom LLM? When and why might this be useful, and will it be worth the effort? Vincent Granville is a pioneer in the AI and machine learning space: he is Co-Founder of Data Science Central, Founder of MLTechniques.com, a former VC-funded executive, author, and patent owner. Vincent’s corporate experience includes Visa, Wells Fargo, eBay, NBC, Microsoft, and CNET. He is also a former post-doc at Cambridge University and the National Institute of Statistical Sciences. Vincent has published in the Journal of Number Theory, the Journal of the Royal Statistical Society, and IEEE Transactions on Pattern Analysis and Machine Intelligence. He is the author of multiple books, including “Synthetic Data and Generative AI”. In the episode, Richie and Vincent explore why you might want to create a custom LLM, including issues with standard LLMs and the benefits of custom LLMs, the development and features of custom LLMs, architecture and technical details, corporate use cases, technical innovations, ethics and legal considerations, and much more.
Links Mentioned in the Show:
- Read Articles by Vincent
- Synthetic Data and Generative AI by Vincent Granville
- Connect with Vincent on LinkedIn
- [Course] Developing LLM Applications with LangChain
- Related Episode: The Power of Vector Databases and Semantic Search with Elan Dekel, VP of Product at Pinecone
- Rewatch sessions from RADAR: AI Edition
New to DataCamp?
- Learn on the go using the DataCamp mobile app
- Empower your business with world-class data and AI skills with DataCamp for business |
DataFramed |
|
AI Meetup (February): AI and Generative AI and LLMs
2024-02-22 · 18:00
*** RSVP: https://www.aicamp.ai/event/eventdetails/W2024022210 (Due to limited room capacity, you must pre-register at the link for admission). Welcome to the monthly in-person AI meetup in London. Join us for deep-dive tech talks on AI, GenAI, LLMs, and machine learning, food/drink, and networking with speakers and fellow developers.

Agenda:
* 6:00pm~7:00pm: Check-in, food/drink, and networking
* 7:00pm~9:00pm: Tech talks and Q&A
* 9:00pm: Open discussion and mixer

Tech Talk: Deploy self-hosted open-source AI solutions
Speaker: Dmitri Evseev @ Arbitration City
Abstract: I will share practical insights from my journey from law firm partner to AI startup founder, focusing on deploying self-hosted, open-source AI solutions in the legal sector and beyond. I will discuss the benefits of self-hosting over third-party APIs, the challenges of implementing these systems for production use, and methods to optimise GPU usage with open-source tools. The talk will also cover approaches to integrating containerised architectures and encryption for secure, scalable AI deployment, aiming to assist the LLMOps community and others exploring self-hosted AI and retrieval-augmented generation (RAG). (A minimal self-hosted inference sketch follows this row.)

Tech Talk: Falcon OS - An open source LLM Operating System
Speaker: Heiko Hotz (Google)
Abstract: In this talk I will introduce the Falcon OS project, a collaboration with the Technology Innovation Institute and Weights & Biases. Falcon OS is a new operating-system project centered around the open-source Falcon 40B LLM. It aims to simplify complex tasks through natural language, bridging the gap between users and computers. This talk will explore its potential to transform AI applications and what it takes for an LLM to be able to reason and act, a key capability for such a system.

Tech Talk: Navigating LLM Deployment: Tips, Tricks, and Techniques
Speaker: Meryem Arik (TitanML)
Abstract: Self-hosted language models are going to power the next generation of applications in critical industries like financial services, healthcare, and defence. Self-hosting LLMs, as opposed to using API-based models, comes with its own host of challenges: as well as needing to solve business problems, engineers need to wrestle with the intricacies of model inference, deployment, and infrastructure. In this talk we are going to discuss best practices in model optimisation, serving, and monitoring, with practical tips and real case studies.

Speakers/Topics: Stay tuned as we are updating speakers and schedules. If you have a keen interest in speaking to our community, we invite you to submit topics for consideration: Submit Topics

Sponsors: We are actively seeking sponsors to support the AI developer community, whether by offering venue space, providing food, or through cash sponsorship. Sponsors will have the chance to speak at the meetups, receive prominent recognition, and gain exposure to our extensive membership base of 10,000+ local or 300K+ developers worldwide.

Community on Slack/Discord:
- Event chat: chat and connect with speakers and attendees
- Sharing blogs, events, job openings, project collaborations
Join Slack (search and join the #london channel) | Join Discord |
AI Meetup (February): AI and Generative AI and LLMs
|
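Two of the talks above (self-hosted open-source AI, and LLM deployment tips) revolve around serving models behind your own endpoint rather than a third-party API. Here is a minimal sketch of the client side, assuming a self-hosted server that exposes an OpenAI-compatible chat API (as open-source stacks such as vLLM do); the URL and model id are hypothetical.

```python
# Minimal sketch: query a self-hosted, OpenAI-compatible model server.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # hypothetical self-hosted endpoint
    json={
        "model": "mistralai/Mistral-7B-Instruct-v0.2",  # illustrative model id
        "messages": [
            {"role": "user", "content": "Summarize this contract clause: ..."}
        ],
        "max_tokens": 256,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Because the wire format matches the OpenAI API, the same client code can be pointed at a hosted API or at your own GPU box, which is much of the appeal of the self-hosting approach the talks describe.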
|
Exploring the Power of LLMs and Generative AI
2023-08-24 · 15:30
Welcome to another PyData Stockholm meetup! 🌟 We're thrilled to kick off our post-summer meetup with a special focus on Large Language Models (LLMs), hosted by Google Cloud. LLMs have created a significant buzz in the AI community, captivating researchers and industry professionals alike. They can generate highly coherent and contextually relevant text in response to a prompt (or query), sparking immense excitement and exploration in applications ranging from content generation to virtual assistants. In this meetup, we will have two talks that give insights into LLMs in general and into how to personalize LLMs with a feature store. You'll also learn about Google's advancements in developing LLMs and how they can be harnessed by consumers and enterprises alike. Join us for an exciting event featuring two thought-provoking talks on LLMs and their applications! ❗Please note that you will need to register through the above link in order to confirm your seat at the event.

Agenda
17:30 - 18:00: Doors open
18:00 - 18:10: Welcome
18:10 - 18:40: Personalized LLMs with a Feature Store
18:40 - 19:10: Pizza & beers
19:10 - 19:40: Large Language Models and Generative AI at Google
19:40 - 21:00: Networking

---

Presentations

Personalized LLMs with a Feature Store
Jim Dowling - CEO & Co-Founder, Hopsworks
Large Language Models (LLMs) provide a model of the world through a model of language. In this talk, we will walk through how to personalize an LLM using prompt engineering with a feature store, which supplies personalized history and context information for the LLM (a minimal sketch of this pattern follows this row).
Speaker Bio: Jim Dowling is CEO of Hopsworks and an Associate Professor at KTH Royal Institute of Technology. He is lead architect of the open-source Hopsworks Feature Store platform, and the organizer of the annual Feature Store Summit and the Feature Store for ML community at featurestore.org.

Large Language Models and Generative AI at Google
Zoe Tang - Customer Engineer / GenAI Specialist, Google Cloud Sweden
Zoe will talk about Google's journey in developing Large Language Models and what is offered to consumers and enterprises: how large language models can be used across multiple modalities to solve different types of problems, and how they can be used in Google Cloud.
Speaker Bio: Zoe is an AI + cloud enthusiast who believes by heart that AI and ML can fundamentally change the way we live. She has 10+ years of experience in the IT industry and works at Google Cloud as a GenAI Specialist, primarily focusing on GenAI engagements for Sweden and Nordics customers. In her daily work, she meets with organizations to discuss how GenAI can be implemented to help them accelerate their business.

---

About the event
Date: August 24th, 17:30 - 21:00
Location: Google’s Office - Kungsbron 2, 111 22 Stockholm
Directions: 5-7 minutes' walk from T-Centralen or Hötorget stations.
Tickets: Sign-up required. Anyone who is not on the list will not get in.
Capacity: Space is limited to 90 participants. If you are signed up but unable to attend, please let us know.
Food and drinks: Pizza and drinks will be provided.
Questions: Please contact the meetup organizers.

---

Code of Conduct
The NumFOCUS Code of Conduct applies to this event; please familiarize yourself with it before attending. If you have any questions or concerns regarding the Code of Conduct, please contact the organizers. |
Exploring the Power of LLMs and Generative AI
|
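Jim Dowling's talk above describes personalizing an LLM by injecting feature-store context into the prompt. Below is a minimal sketch of that pattern; the `get_feature_vector` helper is a hypothetical stand-in for a real feature-store lookup (e.g. a Hopsworks feature view), and the feature values are invented for illustration.

```python
# Minimal sketch: prompt personalization from a feature store.
def get_feature_vector(user_id: str) -> dict:
    # Hypothetical stand-in for a feature-store lookup keyed by user id.
    return {"name": "Alex", "recent_courses": ["SQL basics"], "tier": "pro"}

def personalized_prompt(user_id: str, question: str) -> str:
    feats = get_feature_vector(user_id)
    # Fold the user's stored history and attributes into the prompt context.
    context = (f"User {feats['name']} ({feats['tier']} tier) recently took: "
               f"{', '.join(feats['recent_courses'])}.")
    return f"{context}\nAnswer their question accordingly.\nQ: {question}"

print(personalized_prompt("u123", "What should I learn next?"))
```

The resulting string would then be sent to the LLM as usual; the feature store's job is simply to make fresh, per-user context available at prompt-construction time.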
|
Berlin AWS Group Meetup - August 2023
2023-08-15 · 16:30
Dear community, we are pleased to announce the next meetup on August 15th, hosted by Babbel GmbH. This time you will have an even bigger chance to connect with your peers, as Babbel has provided a large space and we can host more participants, so don’t miss the chance!

The evening:
18:30 - Warm-up and networking chat
18:45 - Welcome talk

19:00 - 19:40 - Olalekan Elesin // Zero to One with LLMs on AWS
This talk explores the use of Large Language Models (LLMs) and Amazon Web Services (AWS) to unlock new possibilities in generative AI. The session covers a basic introduction to LLMs and their applications, introduces AWS ML services such as Amazon SageMaker JumpStart and AWS's latest offering, Amazon Bedrock, and finally walks through how to deploy LLMs on Amazon SageMaker. Attendees will gain a working understanding of deploying LLMs on AWS and how to get started immediately (a minimal JumpStart sketch follows this row).
About: Olalekan has a decade of experience building data and AI products across 2 continents and 5 markets. He created AI Platform 1.0 at Scout24 and leads data platform and product teams at HRS Group. Lekan is also an AWS Machine Learning Community Hero in Germany and maintains open-source projects in his free time.

19:40 - 20:00 - Short break with snacks and drinks

20:00 - 20:40 - Mahavir Teraiya // Deconstructing the Data Mesh
The concept of a Data Mesh has gained significant attention in recent years as a paradigm shift in data architecture. This talk deconstructs the Data Mesh, exploring its fundamental principles, benefits, and challenges. We will delve into decentralized data ownership and domain-oriented architecture, discussing how these concepts enable scalability and flexibility in data management. Attendees will gain a comprehensive understanding of the Data Mesh and its implications for modern data-driven organizations.
About: Mahavir is a Solutions Architect at AWS, specializing in collaborating with digital-native businesses to help them continuously innovate using the power of the cloud.

20:40 - 21:20 - Kimberly Schmitt et al. // Collaborative Engineering-Driven Data Product Development on AWS
Delivering data as a product is an outcome that many companies work to realize. Kimberly and her team are working towards making this a reality through their creation of a new B2B-based data architecture. They will share the identified requirements and challenges, their incremental delivery process, and their efforts to craft this fully programmable product on AWS services. The team continues to develop the data and the product itself, but does so with the greater confidence and oversight that come from their assumption of increased responsibility and the courage to take some risks along the way.
About: Babbel professionals Kimberly Schmitt and Yaniv Hamo, along with Omar Moussa from Netlight, will give the presentation.

21:20 - 21:30 - Closing announcements

Additional Information:
Would you like to host an AWS UG meetup at your company? Register here
Would you like to speak at an AWS UG meetup? Submit your talk here |
Berlin AWS Group Meetup - August 2023
|
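Olalekan's talk above walks through deploying LLMs with Amazon SageMaker JumpStart. A minimal sketch of that deployment path using the SageMaker Python SDK's JumpStartModel follows; the model id and instance type are illustrative assumptions, so check the JumpStart catalog and your account quotas before running, since deploying creates a billable endpoint.

```python
# Minimal sketch: deploy an open LLM via SageMaker JumpStart.
# Assumes AWS credentials and a SageMaker execution role are configured.
from sagemaker.jumpstart.model import JumpStartModel

# Illustrative JumpStart model id; browse the catalog for current ids.
model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",  # illustrative GPU instance type
)

# Text-generation containers typically accept an {"inputs": ...} payload.
response = predictor.predict({"inputs": "Explain zero-to-one with LLMs on AWS."})
print(response)

predictor.delete_endpoint()  # tear down to avoid idle endpoint charges
```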