Are you an AI engineer, data scientist, machine learning researcher, or aspiring innovator in the world of artificial intelligence? AIConnect is a dynamic online AI networking event designed to foster real-time conversations, career growth, and collaboration opportunities — all within a vibrant, self-led virtual space.

Whether you're exploring job roles in AI, hoping to demo your latest project, or searching for your next co-developer or data scientist partner, AIConnect offers a one-of-a-kind way to connect with the global machine learning community — anytime, from anywhere.

🎥 Watch how it works in our explainer video: 👉 https://www.youtube.com/watch?v=bH3uNrkUTuw

💡 Why Join AIConnect?

This is not just another virtual event. It’s a collaboration platform for AI specialists where you can grow your professional network, showcase your AI/ML portfolio, exchange ideas, and chat directly with fellow participants, recruiters, and service providers. Designed for virtual AI career fair seekers and builders alike, the platform supports live 1-on-1 video or text chats and offers open access to diverse community-led channels tailored to your interests.

💬 Brainstorming & Discussion Channels Include:

  • General – Your go-to hub for announcements, orientation, and casual community-wide conversation.
  • Intros – Introduce yourself, your research, your startup, or just share your journey in AI.
  • Networking – Showcase your AI/ML projects, portfolios, startup pitches, or job aspirations.
  • Help Wanted – Looking for team members, mentorship, freelance gigs, or offering support? Post here.
  • Industry Room Tech – Dive into the latest developments in AI frameworks, ML pipelines, data tools, and tech trends.

Whether you’re a beginner or a pro, a researcher or a recruiter — this space empowers growth through project demos, career discussions, and research conversations.

🧠 What You’ll Gain:

  • Direct access to a global AI talent and hiring ecosystem
  • Exposure to cutting-edge projects via the AI project showcase event
  • Connections with peers and thought leaders across AI, ML, and data science
  • Career insights and opportunities from fellow professionals and employers
  • A flexible environment to network, brainstorm, and build meaningful collaborations
  • Instant 1-on-1 video or text chats for deeper engagement

📍 Where: Join virtually at 👉 https://events.tao.ai/pod/analytics.club/jknf0o9m3kix/source--me
🕔 When: Sessions run from 5:00 PM to 7:00 PM local time, recurring monthly on Fridays. Mark your calendar and come prepared to connect!

👥 Who Should Attend:

  • AI engineers, ML developers, and data scientists
  • Startup founders and entrepreneurs in the AI/tech space
  • Recruiters, hiring managers, and project collaborators
  • Students, academics, and early-career professionals
  • Anyone passionate about AI, machine learning, and applied tech innovation

📩 Questions or Suggestions? We’d love to hear from you. Reach us anytime here: 👉 https://noworkerleftbehind.org/event_support

🔖 Hashtags:

#AIConnect #AINetworkingEventOnline #AICareerFairVirtual #MachineLearningCommunityMeetup #AIProjectShowcaseEvent #CollaborationPlatformForAISpecialists #MLJobs #GlobalAINetworking #TechNetworkingEvent #AIProfessionalsUnite

AIConnect: Global Virtual Job & Career Networking Event for AI Specialists

Are you working in artificial intelligence, data science, or machine learning? Whether you're a developer, researcher, startup founder, or aspiring tech enthusiast — AIDataTech Connect is your go-to online AI networking event, designed to foster career growth, real-time collaboration, and innovation exchange in a dynamic, community-driven environment.

This event is a global virtual AI community meetup hosted on a flexible, Slack-style platform that supports group discussions and 1-on-1 video or text chats, making it perfect for professionals looking to connect, collaborate, and build with others in the AI ecosystem.

🎥 Watch how it works: 👉 https://www.youtube.com/watch?v=NRUTXUOFKm4

💬 What to Expect: At AIDataTech Connect, attendees are encouraged to explore, share, and engage across multiple interactive channels designed to support learning, networking, and opportunity discovery:

  • General – A central hub for orientation, announcements, and community-wide discussions.
  • Intros – Introduce yourself, your background, and your AI/tech journey. Spark meaningful introductions.
  • Networking – Share your resume, startup, or project pitch. Meet employers, peers, or collaborators.
  • Help Wanted – Ask for help or offer support on coding, data, research, or AI model deployment.
  • Industry Room Tech – Discuss AI frameworks, tools, and the latest tech trends in data and development.
  • Project Showcase – Perfect for those seeking a machine learning career fair experience or showcasing real-world work through our AI project showcase platform.
  • Collaboration Pods – Jump into discussions around collaboration for AI specialists, including open-source partnerships, research pods, and co-building opportunities.

📍 Where: Join virtually at 👉 https://events.tao.ai/pod/analytics.club/q4j5imq9qjs9
🕔 When: The event runs globally from 5:00 PM to 7:00 PM local time and is designed for participation across time zones.

👥 Who Should Join:

  • AI professionals, ML engineers, and data scientists
  • Tech founders, entrepreneurs, and product leaders
  • Students, job seekers, and aspiring AI talent
  • Recruiters, employers, and hiring managers in AI/tech
  • Researchers, open-source contributors, and peer mentors

🧠 Why Attend:

  • Engage in a truly global virtual tech networking event
  • Discover AI jobs, freelance gigs, or partnership opportunities
  • Connect with experts and peers in AI, ML, and data
  • Explore AI applications and get feedback on your models or tools
  • Network directly via 1-on-1 video or text chats
  • Collaborate on new ideas, research papers, or open-source projects

❓ Questions or Suggestions? We’re here to help. Drop us a note at: 👉 https://noworkerleftbehind.org/event_support

🔖 Hashtags:

#AIDataTechConnect #AINetworkingEventOnline #VirtualAICommunityMeetup #MachineLearningCareerFair #AIProjectShowcasePlatform #CollaborationForAISpecialists #DataScienceMeetup #MLJobs #AIResearchNetworking #GlobalTechNetworking #TechCollaborationEvent

AIDataTech Connect: Global Virtual Networking Event for AI, Data & Tech Pros

Provisional date: the official event date will be announced soon!

📌 Abstract

Vibe coding captures something real: momentum. The ability to move from an idea to a working prototype at surprising speed, guided by intent, context, and AI assistance. But in an enterprise environment, speed alone isn’t enough — software must also be secure, compliant, observable, and built to last.

We'll see how teams can build applications in a single, end-to-end development flow — from business requirements and early prototypes to engineering, testing, delivery, and production — without breaking context or introducing late-stage rework.

At the center of this flow is the software catalog: a living system of record that connects services, APIs, data, ownership, dependencies, and standards. More than documentation, the catalog becomes the shared interface for developers, platform teams, and AI agents to understand the system and act consistently, within enterprise rules and guardrails.
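
To make the idea concrete, here is a minimal sketch of the kind of record such a catalog might hold, written as a Python dataclass. The field names and the example entry are illustrative assumptions for this description, not Mia-Platform's actual schema.

```python
# A generic sketch of a software-catalog record: one entry that both humans
# and AI agents can consult for ownership, dependencies, and standards.
# Field names and the example values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    name: str                        # e.g. "orders-service"
    kind: str                        # "service" | "api" | "dataset"
    owner: str                       # owning team: the accountability anchor
    dependencies: list[str] = field(default_factory=list)
    apis: list[str] = field(default_factory=list)
    standards: list[str] = field(default_factory=list)  # guardrails to enforce

# An agent (or a developer) reads the same record before acting:
orders = CatalogEntry(
    name="orders-service",
    kind="service",
    owner="checkout-team",
    dependencies=["payments-service"],
    apis=["orders-v1"],
    standards=["log-retention-30d", "pii-encryption"],
)
```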

The goal isn’t to limit creativity, but to make flow sustainable — turning AI-driven momentum into software that’s ready for production.

🔍 Key topics

  • From vibe coding to AI-assisted delivery in production
  • The software catalog as the system of record for enterprise context
  • AI agents operating with context continuity within guardrails

⏲️ Agenda

  • 18:30* Welcome
  • 18:45 Talk
  • 19:30 Q&A
  • 19:45 Closing remarks, networking, and see you at the next Meetup!

*You will receive detailed information on how to access the building as soon as available.

🎙️ Speaker

Giulio Roggero, CTO @ Mia-Platform

With 25 years of experience in software engineering and more than 10 business initiatives launched as a serial entrepreneur, Giulio is today co-founder and CTO at Mia-Platform, the Internal Developer Platform named a Gartner Cool Vendor for Software Engineering Technologies and featured in the Cloud Application Platforms Magic Quadrant.

His principal focus areas are cloud native, platform engineering, data fabric, and omnichannel experience. He likes to paint Blood Bowl miniatures, build Lego, build and drive RC cars, and learn piano.

Vibe Coding the Enterprise: From Flow State to Focused Delivery

Join our virtual Meetup to hear talks from experts on cutting-edge topics at the intersection of Visual AI and video use cases.

Time and Location

Feb 11, 2026, 9–11 AM Pacific, online. Register for the Zoom!

VIDEOP2R: Video Understanding from Perception to Reasoning

Reinforcement fine-tuning (RFT), a two-stage framework consisting of supervised fine-tuning (SFT) and reinforcement learning (RL), has shown promising results in improving the reasoning ability of large language models (LLMs). Yet extending RFT to large video language models (LVLMs) remains challenging. We propose VideoP2R, a novel process-aware video RFT framework that enhances video reasoning by modeling perception and reasoning as distinct processes. In the SFT stage, we develop a three-step pipeline to generate VideoP2R-CoT-162K, a high-quality, process-aware chain-of-thought (CoT) dataset for perception and reasoning.

In the RL stage, we introduce a novel process-aware group relative policy optimization (PA-GRPO) algorithm that supplies separate rewards for perception and reasoning. Extensive experiments show that VideoP2R achieves state-of-the-art (SotA) performance on six out of seven video reasoning and understanding benchmarks. Ablation studies further confirm the effectiveness of our process-aware modeling and PA-GRPO and demonstrate that the model's perception output is information-sufficient for downstream reasoning.
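
As a rough illustration of the group-relative idea the abstract builds on, here is a minimal sketch of computing separate group-normalized advantages for perception and reasoning rewards. The split into two reward streams and their per-process normalization are assumptions inferred from the abstract; the paper's exact PA-GRPO formulation may differ.

```python
# Sketch of process-aware group-relative advantages, assuming the standard
# GRPO normalization (reward minus group mean, divided by group std) applied
# separately to perception and reasoning reward streams.
import numpy as np

def group_relative_advantages(rewards: np.ndarray) -> np.ndarray:
    """GRPO-style advantage: normalize rewards within a group of rollouts."""
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

def pa_grpo_advantages(percep_rewards, reason_rewards):
    """Assumed process-aware variant: score each rollout's perception and
    reasoning segments separately, normalizing each stream within the group,
    so each process gets its own credit assignment."""
    percep_adv = group_relative_advantages(np.asarray(percep_rewards, float))
    reason_adv = group_relative_advantages(np.asarray(reason_rewards, float))
    return percep_adv, reason_adv  # applied to the tokens of each process

# Example: a group of 4 rollouts for one video-question pair
p_adv, r_adv = pa_grpo_advantages([1.0, 0.0, 1.0, 0.5], [0.0, 0.0, 1.0, 0.5])
```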

About the Speaker

Yifan Jiang is a third-year Ph.D. student at the Information Sciences Institute at the University of Southern California (USC-ISI), advised by Dr. Jay Pujara, focusing on natural language processing, commonsense reasoning, and multimodal large language models.

Layer-Aware Video Composition via Split-then-Merge

Split-then-Merge (StM) is a novel generative framework that overcomes data scarcity in video composition by splitting unlabeled videos into separate foreground and background layers for self-supervised learning. Using a transformation-aware training pipeline with multi-layer fusion, the model learns to realistically compose dynamic subjects into diverse scenes without relying on expensive annotated datasets. This presentation will cover the problem of video composition and the details of StM, an approach that tackles the problem from a generative AI perspective. We will conclude by demonstrating how StM works and how it outperforms state-of-the-art methods in both quantitative benchmarks and qualitative evaluations.
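
To convey the split-then-merge idea, here is a minimal sketch of one self-supervised training step under assumed details: `segment_foreground`, `random_transform`, and `compose_model` are hypothetical placeholders (not the paper's API), and the L1 reconstruction loss is an illustrative choice.

```python
# Sketch of a split-then-merge self-supervised step: split a video into
# foreground/background layers, perturb the foreground, and train a model
# to recompose the original. Callables are hypothetical placeholders.
import torch

def stm_training_step(video, segment_foreground, random_transform,
                      compose_model, optimizer):
    # 1) Split: estimate a foreground mask and separate the two layers.
    mask = segment_foreground(video)          # (T, 1, H, W), values in [0, 1]
    foreground = video * mask
    background = video * (1 - mask)

    # 2) Perturb: transform the foreground so the model cannot shortcut by
    #    copying pixels (the "transformation-aware" part of the pipeline).
    fg_aug = random_transform(foreground)

    # 3) Merge: recompose the original video from the perturbed foreground
    #    and the background layer; supervise against the unlabeled input.
    recomposed = compose_model(fg_aug, background)
    loss = torch.nn.functional.l1_loss(recomposed, video)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```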

About the Speaker

Ozgur Kara is a fourth-year Computer Science PhD student at the University of Illinois Urbana-Champaign (UIUC), advised by Founder Professor James M. Rehg. His research builds the next generation of video AI by tackling three core challenges: efficiency, controllability, and safety.

Video Reasoning for Worker Safety

Ensuring worker safety in industrial environments requires more than object detection or motion tracking; it demands a genuine understanding of human actions, context, and risk. This talk demonstrates how NVIDIA Cosmos Reason, a multimodal video-reasoning model, interprets workplace scenarios with sophisticated temporal and semantic awareness, identifying nuanced safe and unsafe behaviors that conventional vision systems frequently overlook.

By integrating Cosmos Reason with FiftyOne, users achieve both automated safety assessments and transparent, interpretable explanations revealing why specific actions are deemed hazardous. Using a curated worker-safety dataset of authentic factory-floor footage, we show how video reasoning enhances audits, training, and compliance workflows while minimizing dependence on extensive labeled datasets. The resulting system demonstrates the potential of explainable multimodal AI to enable safer, more informed decision-making across manufacturing, logistics, construction, healthcare, and other sectors where understanding human behavior is essential.
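
For a sense of what such an integration could look like, here is a minimal sketch of attaching a video-reasoning model's verdicts and explanations to a FiftyOne dataset. The `run_cosmos_reason` stub is a hypothetical stand-in for the actual model call; the FiftyOne calls themselves are the library's standard API.

```python
# Sketch: store per-clip safety verdicts and free-text rationales on
# FiftyOne samples, then filter to the flagged clips for human review.
import fiftyone as fo
from fiftyone import ViewField as F

def run_cosmos_reason(path):
    # Hypothetical stub standing in for the video-reasoning model call.
    return "unsafe", "Worker reaches over a running conveyor without a guard."

dataset = fo.Dataset("worker-safety-demo")

for filepath in ["/videos/clip_001.mp4", "/videos/clip_002.mp4"]:
    sample = fo.Sample(filepath=filepath)
    verdict, explanation = run_cosmos_reason(filepath)

    sample["safety"] = fo.Classification(label=verdict)
    sample["explanation"] = explanation  # interpretable rationale for auditors
    dataset.add_sample(sample)

# Review only the clips flagged unsafe in the FiftyOne App
unsafe_view = dataset.match(F("safety.label") == "unsafe")
session = fo.launch_app(unsafe_view)
```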

About the Speaker

Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia.

Video Intelligence Is Going Agentic

Video content has become ubiquitous in our digital world, yet the tools for working with video have remained largely unchanged for decades. This talk explores how the convergence of foundation models and agent architectures is fundamentally transforming video interaction and creation. We'll examine how video-native foundation models, multimodal interfaces, and agent transparency are reshaping enterprise media workflows through a deep dive into Jockey, a pioneering video agent system.

About the Speaker

James Le currently leads the developer experience function at TwelveLabs, a startup building foundation models for video understanding. He previously worked in the MLOps space and ran a blog/podcast on the Data & AI infrastructure ecosystem.

Feb 11 - Visual AI for Video Use Cases

Join our virtual Meetup to hear talks from experts on cutting-edge topics at the intersection of Visual AI and video use cases.

Time and Location

Feb 11, 2026 9 - 11 AM Pacific Online. Register for the Zoom!

VIDEOP2R: Video Understanding from Perception to Reasoning

Reinforcement fine-tuning (RFT), a two-stage framework consisting of supervised fine-tuning (SFT) and reinforcement learning (RL) has shown promising results on improving reasoning ability of large language models (LLMs). Yet extending RFT to large video language models (LVLMs) remains challenging. We propose VideoP2R, a novel process-aware video RFT framework that enhances video reasoning by modeling perception and reasoning as distinct processes. In the SFT stage, we develop a three-step pipeline to generate VideoP2R-CoT-162K, a high-quality, process-aware chain-of-thought (CoT) dataset for perception and reasoning.

In the RL stage, we introduce a novel process-aware group relative policy optimization (PA-GRPO) algorithm that supplies separate rewards for perception and reasoning. Extensive experiments show that VideoP2R achieves state-of-the-art (SotA) performance on six out of seven video reasoning and understanding benchmarks. Ablation studies further confirm the effectiveness of our process-aware modeling and PA-GRPO and demonstrate that model's perception output is information-sufficient for downstream reasoning.

About the Speaker

Yifan Jiang is a third-year Ph.D. student in the Information Science Institute at the University of Southern California (USC-ISI), advised by Dr. Jay Pujara, focusing on natural language processing, commonsense reasoning and multimodality large language models.

Layer-Aware Video Composition via Split-then-Merge

Split-then-Merge (StM) is a novel generative framework that overcomes data scarcity in video composition by splitting unlabeled videos into separate foreground and background layers for self-supervised learning. By utilizing a transformation-aware training pipeline with multi-layer fusion, the model learns to realistically compose dynamic subjects into diverse scenes without relying on expensive annotated datasets. This presentation will cover the problem of video composition and the details of StM, an approach looking at this problem from a generative AI perspective. We will conclude by demonstrating how StM is working, and outperforming state-of-the-art methods in both quantitative benchmarks and qualitative evaluations.

About the Speaker

Ozgur Kara is a 4th year Computer Science PhD student at the University of Illinois Urbana-Champaign (UIUC), advised by Founder Professor James M. Rehg. His research builds the next generation of video AI by tackling three core challenges: efficiency, controllability, and safety.

Video Reasoning for Worker Safety

Ensuring worker safety in industrial environments requires more than object detection or motion tracking; it demands a genuine understanding of human actions, context, and risk. This talk demonstrates how NVIDIA Cosmos Reason, a multimodal video-reasoning model, interprets workplace scenarios with sophisticated temporal and semantic awareness, identifying nuanced safe and unsafe behaviors that conventional vision systems frequently overlook.

By integrating Cosmos Reason with FiftyOne, users achieve both automated safety assessments and transparent, interpretable explanations revealing why specific actions are deemed hazardous. Using a curated worker-safety dataset of authentic factory-floor footage, we show how video reasoning enhances audits, training, and compliance workflows while minimizing dependence on extensive labeled datasets. The resulting system demonstrates the potential of explainable multimodal AI to enable safer, more informed decision-making across manufacturing, logistics, construction, healthcare, and other sectors where understanding human behavior is essential.

About the Speaker

Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia.

Video Intelligence Is Going Agentic

Video content has become ubiquitous in our digital world, yet the tools for working with video have remained largely unchanged for decades. This talk explores how the convergence of foundation models and agent architectures is fundamentally transforming video interaction and creation. We'll examine how video-native foundation models, multimodal interfaces, and agent transparency are reshaping enterprise media workflows through a deep dive into Jockey, a pioneering video agent system.

About the Speaker

James Le currently leads the developer experience function at TwelveLabs - a startup building foundation models for video understanding. He previously operated in the MLOps space and ran a blog/podcast on the Data & AI infrastructure ecosystem.

Feb 11 - Visual AI for Video Use Cases

Join our virtual Meetup to hear talks from experts on cutting-edge topics at the intersection of Visual AI and video use cases.

Time and Location

Feb 11, 2026 9 - 11 AM Pacific Online. Register for the Zoom!

VIDEOP2R: Video Understanding from Perception to Reasoning

Reinforcement fine-tuning (RFT), a two-stage framework consisting of supervised fine-tuning (SFT) and reinforcement learning (RL) has shown promising results on improving reasoning ability of large language models (LLMs). Yet extending RFT to large video language models (LVLMs) remains challenging. We propose VideoP2R, a novel process-aware video RFT framework that enhances video reasoning by modeling perception and reasoning as distinct processes. In the SFT stage, we develop a three-step pipeline to generate VideoP2R-CoT-162K, a high-quality, process-aware chain-of-thought (CoT) dataset for perception and reasoning.

In the RL stage, we introduce a novel process-aware group relative policy optimization (PA-GRPO) algorithm that supplies separate rewards for perception and reasoning. Extensive experiments show that VideoP2R achieves state-of-the-art (SotA) performance on six out of seven video reasoning and understanding benchmarks. Ablation studies further confirm the effectiveness of our process-aware modeling and PA-GRPO and demonstrate that model's perception output is information-sufficient for downstream reasoning.

About the Speaker

Yifan Jiang is a third-year Ph.D. student in the Information Science Institute at the University of Southern California (USC-ISI), advised by Dr. Jay Pujara, focusing on natural language processing, commonsense reasoning and multimodality large language models.

Layer-Aware Video Composition via Split-then-Merge

Split-then-Merge (StM) is a novel generative framework that overcomes data scarcity in video composition by splitting unlabeled videos into separate foreground and background layers for self-supervised learning. By utilizing a transformation-aware training pipeline with multi-layer fusion, the model learns to realistically compose dynamic subjects into diverse scenes without relying on expensive annotated datasets. This presentation will cover the problem of video composition and the details of StM, an approach looking at this problem from a generative AI perspective. We will conclude by demonstrating how StM is working, and outperforming state-of-the-art methods in both quantitative benchmarks and qualitative evaluations.

About the Speaker

Ozgur Kara is a 4th year Computer Science PhD student at the University of Illinois Urbana-Champaign (UIUC), advised by Founder Professor James M. Rehg. His research builds the next generation of video AI by tackling three core challenges: efficiency, controllability, and safety.

Video Reasoning for Worker Safety

Ensuring worker safety in industrial environments requires more than object detection or motion tracking; it demands a genuine understanding of human actions, context, and risk. This talk demonstrates how NVIDIA Cosmos Reason, a multimodal video-reasoning model, interprets workplace scenarios with sophisticated temporal and semantic awareness, identifying nuanced safe and unsafe behaviors that conventional vision systems frequently overlook.

By integrating Cosmos Reason with FiftyOne, users achieve both automated safety assessments and transparent, interpretable explanations revealing why specific actions are deemed hazardous. Using a curated worker-safety dataset of authentic factory-floor footage, we show how video reasoning enhances audits, training, and compliance workflows while minimizing dependence on extensive labeled datasets. The resulting system demonstrates the potential of explainable multimodal AI to enable safer, more informed decision-making across manufacturing, logistics, construction, healthcare, and other sectors where understanding human behavior is essential.

About the Speaker

Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia.

Video Intelligence Is Going Agentic

Video content has become ubiquitous in our digital world, yet the tools for working with video have remained largely unchanged for decades. This talk explores how the convergence of foundation models and agent architectures is fundamentally transforming video interaction and creation. We'll examine how video-native foundation models, multimodal interfaces, and agent transparency are reshaping enterprise media workflows through a deep dive into Jockey, a pioneering video agent system.

About the Speaker

James Le currently leads the developer experience function at TwelveLabs - a startup building foundation models for video understanding. He previously operated in the MLOps space and ran a blog/podcast on the Data & AI infrastructure ecosystem.

Feb 11 - Visual AI for Video Use Cases

Join our virtual Meetup to hear talks from experts on cutting-edge topics at the intersection of Visual AI and video use cases.

Time and Location

Feb 11, 2026 9 - 11 AM Pacific Online. Register for the Zoom!

VIDEOP2R: Video Understanding from Perception to Reasoning

Reinforcement fine-tuning (RFT), a two-stage framework consisting of supervised fine-tuning (SFT) and reinforcement learning (RL) has shown promising results on improving reasoning ability of large language models (LLMs). Yet extending RFT to large video language models (LVLMs) remains challenging. We propose VideoP2R, a novel process-aware video RFT framework that enhances video reasoning by modeling perception and reasoning as distinct processes. In the SFT stage, we develop a three-step pipeline to generate VideoP2R-CoT-162K, a high-quality, process-aware chain-of-thought (CoT) dataset for perception and reasoning.

In the RL stage, we introduce a novel process-aware group relative policy optimization (PA-GRPO) algorithm that supplies separate rewards for perception and reasoning. Extensive experiments show that VideoP2R achieves state-of-the-art (SotA) performance on six out of seven video reasoning and understanding benchmarks. Ablation studies further confirm the effectiveness of our process-aware modeling and PA-GRPO and demonstrate that model's perception output is information-sufficient for downstream reasoning.

About the Speaker

Yifan Jiang is a third-year Ph.D. student in the Information Science Institute at the University of Southern California (USC-ISI), advised by Dr. Jay Pujara, focusing on natural language processing, commonsense reasoning and multimodality large language models.

Layer-Aware Video Composition via Split-then-Merge

Split-then-Merge (StM) is a novel generative framework that overcomes data scarcity in video composition by splitting unlabeled videos into separate foreground and background layers for self-supervised learning. By utilizing a transformation-aware training pipeline with multi-layer fusion, the model learns to realistically compose dynamic subjects into diverse scenes without relying on expensive annotated datasets. This presentation will cover the problem of video composition and the details of StM, an approach looking at this problem from a generative AI perspective. We will conclude by demonstrating how StM is working, and outperforming state-of-the-art methods in both quantitative benchmarks and qualitative evaluations.

About the Speaker

Ozgur Kara is a 4th year Computer Science PhD student at the University of Illinois Urbana-Champaign (UIUC), advised by Founder Professor James M. Rehg. His research builds the next generation of video AI by tackling three core challenges: efficiency, controllability, and safety.

Video Reasoning for Worker Safety

Ensuring worker safety in industrial environments requires more than object detection or motion tracking; it demands a genuine understanding of human actions, context, and risk. This talk demonstrates how NVIDIA Cosmos Reason, a multimodal video-reasoning model, interprets workplace scenarios with sophisticated temporal and semantic awareness, identifying nuanced safe and unsafe behaviors that conventional vision systems frequently overlook.

By integrating Cosmos Reason with FiftyOne, users achieve both automated safety assessments and transparent, interpretable explanations revealing why specific actions are deemed hazardous. Using a curated worker-safety dataset of authentic factory-floor footage, we show how video reasoning enhances audits, training, and compliance workflows while minimizing dependence on extensive labeled datasets. The resulting system demonstrates the potential of explainable multimodal AI to enable safer, more informed decision-making across manufacturing, logistics, construction, healthcare, and other sectors where understanding human behavior is essential.

About the Speaker

Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia.

Video Intelligence Is Going Agentic

Video content has become ubiquitous in our digital world, yet the tools for working with video have remained largely unchanged for decades. This talk explores how the convergence of foundation models and agent architectures is fundamentally transforming video interaction and creation. We'll examine how video-native foundation models, multimodal interfaces, and agent transparency are reshaping enterprise media workflows through a deep dive into Jockey, a pioneering video agent system.

About the Speaker

James Le currently leads the developer experience function at TwelveLabs - a startup building foundation models for video understanding. He previously operated in the MLOps space and ran a blog/podcast on the Data & AI infrastructure ecosystem.

Feb 11 - Visual AI for Video Use Cases

Alright Bristol, we’re finally here.

We’re buzzing to launch the Databricks Bristol Meetup and get the Bristol Databricks community together. Huge thanks to iO Associates for being our host and cheerleader in Bristol, and to Advancing Analytics and Databricks for backing the group and helping us build something that’s genuinely useful for practitioners.

Expect a relaxed evening of good people, practical Databricks chat, and proper networking (the kind where you actually meet people, not just collect LinkedIn connections you’ll never speak to again). Whether you’re deep in Spark, living your best Lakehouse life, or just Databricks-curious, you’re very welcome.

What to expect

  • A friendly, community-first meetup (no sales, no nonsense)
  • A quick Databricks “what’s new” segment to kick things off
  • A keynote from Databricks
  • Drinks, pizza, and plenty of time to chat with other data folks

Agenda

  • 5.30pm – Arrival + networking drinks
  • 6.00pm – Introduction from Simon Whiteley, followed by What’s new in Databricks
  • 6.30pm – Keynote from Holly Smith (Databricks Staff Developer Advocate)
  • 7.00pm – Drinks + pizza + networking

Venue

Hosted at iO Associates, Bristol (St Bartholomew’s House, Bristol BS1 2NH). Spaces will be limited, so please RSVP. If you can’t make it, do update your RSVP so someone else can grab your spot.

Who should come?

  • Data engineers, analysts, scientists, platform folks, architects
  • People building on Databricks (or evaluating it)
  • Anyone into modern data + AI and learning from others doing it for real
Bristol Databricks Meetup: Feb 2026