talk-data.com

Topic: computer vision

Activity trend: 2020-Q1 to 2026-Q1

Activities

60 activities · Newest first

Building a price comparison platform requires solving multiple ML challenges at scale. This talk covers a year-long production project combining LLMs, graph algorithms, and computer vision.

We'll explore:

- Orchestrating complex ML workflows with Vertex AI Pipelines
- Using Gemini to classify products, extract attributes, and generate titles/descriptions
- Connecting product variants across retailers with graph algorithms
- Deduplicating images using computer vision

You'll learn practical lessons from deploying these systems in production, including trade-offs and challenges encountered along the way.
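
As a concrete illustration of the variant-linking step, the sketch below (not the speaker's actual pipeline) treats cross-retailer matches as edges in a graph and recovers canonical products as connected components via union-find; the retailer IDs and match list are hypothetical.

```python
# Illustrative sketch: linking product variants across retailers as
# connected components of a match graph. In practice, edges would come
# from upstream matchers (LLM-extracted attributes, image similarity).
from collections import defaultdict

def connected_components(edges):
    """Union-find over product IDs; each component is one canonical product."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)  # union the two components

    groups = defaultdict(list)
    for node in parent:
        groups[find(node)].append(node)
    return list(groups.values())

# Hypothetical matches between retailer-specific product IDs.
matches = [("amz:123", "ebay:987"), ("ebay:987", "wmt:555"), ("amz:777", "tgt:42")]
# Two clusters: {amz:123, ebay:987, wmt:555} and {amz:777, tgt:42}
print(connected_components(matches))
```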

Join Sheamus for an in-depth webinar on the exciting intersection of artificial intelligence and robotics. This session will provide a foundational understanding of how AI is revolutionizing the field of robotics, moving beyond traditional, pre-programmed systems to create intelligent, autonomous machines. Sheamus will explore the core concepts of AI that are most relevant to robotics, including machine learning, computer vision, and natural language processing. The webinar will cover practical applications and case studies, from self-navigating drones to collaborative industrial robots. Attendees will gain insight into the challenges and opportunities in this rapidly evolving field, and learn about the key technologies and skills needed to design and build the next generation of intelligent robots. Whether you are a student, an engineer, or simply curious about the future of automation, this session will provide a comprehensive and accessible introduction.

An in-depth webinar on how AI is transforming robotics, moving beyond pre-programmed systems to autonomous machines. This session covers core AI concepts—machine learning, computer vision, and natural language processing—with practical applications and case studies from self-navigating drones to collaborative industrial robots.

Join Sheamus McGovern for an in-depth webinar on the intersection of artificial intelligence and robotics. This session introduces foundational AI concepts relevant to robotics (machine learning, computer vision, and natural language processing) and explores practical applications and case studies, including self-navigating drones and collaborative industrial robots. Attendees will gain insights into challenges, opportunities, and the key technologies and skills needed to design and build the next generation of intelligent robots.

Computer vision is becoming a key enabler of smart manufacturing, from quality inspection to robotic automation. Yet traditional development of AI models often requires large datasets, specialized expertise, and significant time. In this talk, we explore how no-code computer vision platforms are changing this landscape, allowing engineers, operators, and domain experts to build, train, and deploy models without deep AI backgrounds. We'll look at real examples from manufacturing and robotics to show how faster iteration, simpler data workflows, and scalable deployment can move automation projects from concept to production.

Anomaly detection is one of computer vision's most exciting and essential challenges today, from spotting subtle defects in manufacturing to identifying edge cases in model behavior. In this session, we'll do a hands-on walkthrough using the MVTec AD dataset, showcasing real-world workflows for data curation, exploration, and model evaluation. We'll also explore the power of embedding visualizations and similarity searches to uncover hidden patterns and surface anomalies that often go unnoticed.

This session is packed with actionable strategies to help you make sense of your data and build more robust, reliable models. Join us as we connect the dots between data, models, and real-world deployment—alongside other experts driving innovation in anomaly detection.
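
To make the embedding-and-similarity idea concrete, here is a minimal sketch of k-nearest-neighbor anomaly scoring: embed images with an off-the-shelf backbone, then flag samples that sit far from their nearest known-good neighbors. The backbone (ResNet-18), preprocessing, and k are illustrative assumptions, not the presenters' tooling.

```python
# Embedding-based anomaly scoring sketch: distance to the k nearest
# defect-free reference images serves as an anomaly score.
import torch
from PIL import Image
from sklearn.neighbors import NearestNeighbors
from torchvision import models, transforms

# Feature extractor: ResNet-18 with the classification head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(paths):
    with torch.no_grad():
        batch = torch.stack([prep(Image.open(p).convert("RGB")) for p in paths])
        return backbone(batch).numpy()

def anomaly_scores(good_paths, query_paths, k=5):
    """good_paths: defect-free references; query_paths: images to score."""
    index = NearestNeighbors(n_neighbors=k).fit(embed(good_paths))
    dists, _ = index.kneighbors(embed(query_paths))
    return dists.mean(axis=1)  # higher = less like any known-good image
```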

This talk explores how we generate high-performance computer vision datasets from CAD—without real-world images or manual labeling. We’ll walk through our synthetic data pipeline, including CPU-optimized defect simulation, material variation, and lighting workflows that scale to thousands of renders per part. While Blender plays a role, our focus is on how industrial data (like STEP files) and procedural generation unlock fast, flexible training sets for manufacturing QA, even on modest hardware. If you're working at the edge of 3D, automation, and vision AI—this is for you!
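
For a flavor of what such procedural variation can look like, below is a rough Blender Python (bpy) sketch of a lighting-randomization render loop; the light name, value ranges, and output path are hypothetical, and the pipeline described in the talk is considerably more involved.

```python
# Illustrative domain-randomization loop using Blender's Python API (bpy).
# Assumes a scene with a light object named "Key" and a camera already set up.
import math
import random
import bpy

light = bpy.data.objects["Key"]  # hypothetical light name
for i in range(100):
    # Randomize lighting intensity and direction for each render.
    light.data.energy = random.uniform(200.0, 1500.0)
    light.rotation_euler = (
        random.uniform(0.0, math.pi / 3),
        0.0,
        random.uniform(0.0, 2 * math.pi),
    )
    bpy.context.scene.render.filepath = f"//renders/part_{i:04d}.png"
    bpy.ops.render.render(write_still=True)
```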

Many face daily challenges due to disabilities; AI promises a more accessible and inclusive world. This talk explores how AI enhances digital accessibility through technologies like AI-powered screen readers, computer vision for navigation, and AI-driven personalization for cognitive disabilities. We'll demonstrate how Google's MediaPipe in particular empowers us to significantly improve accessibility solutions. By also making AI itself accessible, we can truly revolutionize inclusion, enabling full participation for millions and fostering a more equitable society.
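
As one concrete building block, the sketch below uses MediaPipe's Python solutions API to detect hand landmarks that a gesture-driven accessibility interface could map to commands; the input frame and the gesture mapping are hypothetical, not from the talk.

```python
# Hand-landmark detection with MediaPipe as a gesture-input building block.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=2)

image = cv2.imread("frame.jpg")  # hypothetical input frame
results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_hand_landmarks:
    for hand in results.multi_hand_landmarks:
        # Landmark 8 is the index fingertip; its normalized position could
        # drive a pointer or be matched against gesture templates.
        tip = hand.landmark[8]
        print(f"index fingertip at ({tip.x:.2f}, {tip.y:.2f})")
```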

In hospitals, direct patient observation is limited: nurses spend only 37% of their shift engaged in patient care, and physicians average just 10 visits per hospital stay. LookDeep Health's AI-driven platform enables continuous, passive monitoring of individual patients and has been deployed "in the wild" for nearly 3 years. They recently published a study validating this system, titled "Continuous Patient Monitoring with AI." This talk is a technical dive into that paper, focusing on the intersection of AI and real-world application.

Abstract: Our innate ability to reconstruct the 3D world around us from our eyes alone is a fundamental part of human perception. For computers, however, this task remained a significant challenge — until the advent of Neural Radiance Fields (NeRFs). Upon their introduction, NeRFs marked a paradigm shift in the field of novel view synthesis, demonstrating huge improvements in visual realism and geometric accuracy over prior work. The subsequent proliferation of NeRF variants has only expanded their capabilities, unlocking larger scenes, achieving even higher visual fidelity, and accelerating both training and inference. Nevertheless, NeRF is no longer the tool of choice for 3D reconstruction. Why? Join a researcher from the front lines as we explore NeRF's foundations, dissect its strengths and weaknesses, see how the field has evolved, and consider the future of novel view synthesis.
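
For background, NeRF's core rendering step composites densities and colors sampled along each camera ray. A minimal numpy sketch of that discrete volume-rendering rule, following the compositing scheme from the original NeRF paper (variable names are illustrative, not from any particular codebase):

```python
# NeRF-style discrete volume rendering along a single ray.
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """sigmas: (N,) densities; colors: (N, 3) RGB; deltas: (N,) sample spacings."""
    alphas = 1.0 - np.exp(-sigmas * deltas)  # opacity of each ray segment
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                      # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)  # expected color of the ray

# Toy usage: 64 random samples along one ray.
rng = np.random.default_rng(0)
print(composite_ray(rng.uniform(0, 5, 64), rng.uniform(0, 1, (64, 3)), np.full(64, 0.05)))
```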