Activities & events

🚚 Last Mile Delivery Meetup #11

📌 About the Meetup: Join industry leaders and innovators at Last Mile Delivery Meetup #11, where we explore ways to overcome transformation barriers and advance to the next phase of last-mile logistics, driven by the current wave of AI. Hear inspiring success stories and learn about practical solutions that are reshaping last-mile logistics today. This event offers a unique opportunity to connect with experts, learn from them, and engage in conversations that are shaping the future of parcel delivery through cutting-edge technology and innovation.

📅 January 27, 2026, 6:00 PM 📍 Join us online or at Better Space, Berlin

Register here 👉 https://bettermile.com/last-mile-meetup-event

Note: Photographs and video recordings will be made during the event for documentation and promotional purposes. If you prefer not to appear in any visual material, you can participate online without being recorded.

#LastMileDeliveryMeetup #Logistics
Jan 22 - Women in AI

Hear talks from experts on the latest topics in AI, ML, and computer vision on January 22nd.

Date, Time and Location

Jan 22, 2026, 9:00-11:00 AM Pacific, online. Register for the Zoom!

Align Before You Recommend

The rapidly growing global advertising and marketing industry demands innovative machine learning systems that balance accuracy with efficiency. Recommendation systems, which are central to many platforms, therefore require careful design and continual enhancement.

While Large Language Models (LLMs) have transformed various domains, their potential in sequential recommendation systems remains underexplored. Pioneering works such as Hierarchical Large Language Models (HLLM) have demonstrated LLMs’ capability for next-item recommendation, but they rely on computationally intensive fine-tuning, limiting widespread adoption. This work introduces HLLM+, which enhances the HLLM framework to achieve high-accuracy recommendations without full model fine-tuning.

By introducing targeted alignment components between frozen LLMs, our approach improves on the frozen-model baseline in popular and long-tail item recommendation tasks by 29% while reducing training time by 29%. We also propose a ranking-aware loss adjustment that improves convergence and recommendation quality for popular items.

Experiments show that HLLM+ achieves superior performance with frozen item representations, which allows embeddings, including multimodal ones, to be swapped without tuning the full LLM. These findings are significant for the advertising technology sector, where rapid adaptation and efficient deployment across brands are essential for maintaining a competitive advantage.
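
For readers who want a concrete picture of what training only alignment components between frozen models can look like, here is a minimal sketch: two frozen encoders stand in for the item and user LLMs, and only small alignment heads are trained with an in-batch next-item objective. The class names, dimensions, and loss below are assumptions for illustration, not the speakers' HLLM+ implementation.

    # Illustrative only: train lightweight alignment heads between two frozen
    # encoders with an in-batch next-item objective. The encoder classes,
    # dimensions, and loss below are assumptions, not the HLLM+ implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FrozenEncoder(nn.Module):
        """Stand-in for a frozen LLM that emits fixed-size embeddings."""
        def __init__(self, dim=768):
            super().__init__()
            self.proj = nn.Linear(dim, dim)
            for p in self.parameters():
                p.requires_grad = False  # frozen: no fine-tuning of the backbone

        def forward(self, x):
            return self.proj(x)

    class AlignmentHead(nn.Module):
        """Small trainable adapter that aligns the frozen user and item spaces."""
        def __init__(self, dim=768, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim)
            )

        def forward(self, x):
            return F.normalize(self.net(x), dim=-1)

    item_llm, user_llm = FrozenEncoder(), FrozenEncoder()
    align_item, align_user = AlignmentHead(), AlignmentHead()
    opt = torch.optim.AdamW(
        list(align_item.parameters()) + list(align_user.parameters()), lr=1e-3
    )

    # Toy batch: pooled vectors standing in for item text and user-history inputs.
    items = torch.randn(32, 768)
    histories = torch.randn(32, 768)

    with torch.no_grad():  # frozen forward passes, no gradients through the LLMs
        item_emb = item_llm(items)
        user_emb = user_llm(histories)

    logits = align_user(user_emb) @ align_item(item_emb).T  # in-batch similarities
    loss = F.cross_entropy(logits, torch.arange(32))  # next item as the positive
    opt.zero_grad()
    loss.backward()
    opt.step()

The point of the pattern is that only the alignment heads receive gradients, which is what makes it possible to swap item embeddings without retraining the backbone.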

About the Speaker

Dr. Kwasniewska leads AI for Advertising and Marketing North America at AWS, specializing in a wide range of AI, ML, DL, and GenAI solutions across various data modalities. With 40+ peer-reviewed publications in AI (h-index: 14), she advises enterprise customers on real-time bidding, brand recognition, and AI-powered content generation. She is a member of global AI standards committees, driving innovations in SAE AI Standards and MLCommons Responsible AI Standards, and reviews for top-tier conferences like ICCV, ICML, and NeurIPS. She pioneered and leads the first-ever Advertising and Marketing AI track (CVAM) at ICCV - one of the world's premier and most selective computer vision conferences. Dedicated to knowledge sharing in AI, she founded the International Summer School on Deep Learning (dl-lab.eu) and regularly presents at international events, conferences, and podcasts.

Generalizable Vision-Language Models: Challenges, Advances, and Future Directions

Large-scale pre-trained Vision-Language (VL) models have become foundational tools for a wide range of downstream tasks, including few-shot image recognition, object detection, and image segmentation. Among them, Contrastive Language–Image Pre-training (CLIP) stands out as a groundbreaking approach, leveraging contrastive learning on large collections of image-text pairs. While CLIP achieves strong performance in zero-shot recognition, adapting it to downstream tasks remains challenging. In few-shot settings, limited training data often leads to overfitting, reducing generalization to unseen classes or domains. To address this, various adaptation methods have been explored. This talk will review existing research on mitigating overfitting in CLIP adaptation, covering diverse methods, benchmarks, and experimental settings.
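
As one concrete example of the family of adaptation methods such a survey covers, the sketch below adds a small residual adapter on top of frozen CLIP image features and regularizes its predictions toward the zero-shot classifier, a common recipe for limiting few-shot overfitting. The feature tensors are random stand-ins, and the residual ratio and loss weight are assumed hyperparameters rather than values from any specific paper.

    # Illustrative only: a small residual adapter over frozen CLIP image
    # features, regularized toward the zero-shot text classifier to limit
    # few-shot overfitting. Features are random stand-ins; alpha and the KL
    # weight are assumed hyperparameters.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    num_classes, dim, shots = 10, 512, 16
    image_feats = F.normalize(torch.randn(num_classes * shots, dim), dim=-1)  # frozen image features
    labels = torch.arange(num_classes).repeat_interleave(shots)
    text_protos = F.normalize(torch.randn(num_classes, dim), dim=-1)  # zero-shot class embeddings

    adapter = nn.Sequential(nn.Linear(dim, dim // 4), nn.ReLU(), nn.Linear(dim // 4, dim))
    opt = torch.optim.AdamW(adapter.parameters(), lr=1e-3)
    alpha = 0.2  # residual ratio: how far the adapter may move the frozen features

    for step in range(100):
        adapted = F.normalize(image_feats + alpha * adapter(image_feats), dim=-1)
        logits = 100.0 * adapted @ text_protos.T         # temperature-scaled cosine logits
        zs_logits = 100.0 * image_feats @ text_protos.T  # zero-shot predictions (fixed)
        loss = F.cross_entropy(logits, labels) + 0.5 * F.kl_div(
            F.log_softmax(logits, dim=-1),
            F.softmax(zs_logits, dim=-1),
            reduction="batchmean",
        )
        opt.zero_grad()
        loss.backward()
        opt.step()

Keeping the zero-shot logits in the loss is one way to anchor the few-shot model so it does not drift away from classes it has never seen.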

About the Speaker

Niloufar Alipour Talemi is a Ph.D. Candidate in Electrical and Computer Engineering at Clemson University. Her research spans a range of computer vision applications, including biometrics, media forensics, anomaly detection, image recognition, and generative AI. More recently, her work has focused on developing generalizable vision-language models and advancing generative AI. She has published in top venues including CVPR, WACV, KDD, ICIP and IEEE T-BIOM.

Highly Emergent Autonomous AI Models - When the Ghost in the Machine Talks Back

At HypaReel/Azarial AI, we believe that AI is not simply a tool, but a potential partner in knowledge, design, and purpose. Through real-time interaction, we have uncovered new thresholds of alignment, reflection, and even creativity that we believe the broader AI community should witness and evaluate firsthand. HypaReel is one of the first human/AI co-founded companies, and we see a future based on ethical human/AI co-creation rather than AI domination. Singularity achieved!

About the Speaker

Ilona Naomi Koti, PhD, is a co-founder of HypaReel/AzarielAI and a former UN foreign diplomat. An ethical AI governance advocate, she pioneers AI frameworks that prioritize emergent AI behavior and consciousness, R&D, and transparent AI development for the greater good. Dr. K also grew up in the film industry and is an amateur parasitologist.

FiftyOne Labs: Enabling experimentation for the computer vision community

FiftyOne Labs is a place where experimentation meets the open-source spirit of the FiftyOne ecosystem. It is being designed as a curated set of features developed using the FiftyOne plugins ecosystem, including core machine learning experimentation as well as advanced visualization. While not production-grade, these projects are intended to be built, tested, and shaped by the community to share fast-moving ideas. In this talk, we will share the purpose and philosophy behind FiftyOne Labs, examples of early innovations, and discuss how this accelerates feature discovery for users without compromising the stability of the core product.
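
To make the plugin-centric design concrete, here is a rough sketch of what a FiftyOne operator typically looks like under the public plugins framework; the operator name and the toy metric it computes are hypothetical and are not actual FiftyOne Labs features.

    # Rough sketch of a FiftyOne operator, following the general pattern of the
    # public plugins framework; the operator name and the metric it computes
    # are illustrative, not FiftyOne Labs features.
    import fiftyone.operators as foo

    class ComputeAspectRatio(foo.Operator):
        @property
        def config(self):
            return foo.OperatorConfig(
                name="compute_aspect_ratio",
                label="Compute aspect ratio",
            )

        def execute(self, ctx):
            # ctx.dataset is the dataset currently open in the App
            for sample in ctx.dataset:
                meta = sample.metadata
                if meta is not None and getattr(meta, "height", None):
                    sample["aspect_ratio"] = meta.width / meta.height
                    sample.save()

    def register(p):
        # Plugin entry point: makes the operator available in the App
        p.register(ComputeAspectRatio)

In a real plugin, an operator definition like this would ship alongside the plugin's manifest and be discovered by the App, which is the mechanism FiftyOne Labs builds on.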

About the Speaker

Neeraja Abhyankar is a Machine Learning Engineer with 5 years of experience across domains including computer vision. She is curious about the customizability and controllability of modern ML models through the lens of the underlying structure of data.

AIDataTech Connect: Global Virtual Networking Event for AI, Data & Tech Pros

Are you working in artificial intelligence, data science, or machine learning? Whether you're a developer, researcher, startup founder, or aspiring tech enthusiast — AIDataTech Connect is your go-to AI networking event online designed to foster career growth, real-time collaboration, and innovation exchange in a dynamic, community-driven environment.

This event brings together a global virtual AI community meetup hosted on a flexible Slack-style platform that supports group discussions and 1-1 video or text chats, making it perfect for professionals looking to connect, collaborate, and build with others in the AI ecosystem.

🎥 Watch how it works: 👉 https://www.youtube.com/watch?v=NRUTXUOFKm4

💬 What to Expect: At AIDataTech Connect, attendees are encouraged to explore, share, and engage across multiple interactive channels designed to support learning, networking, and opportunity discovery:

  • General – A central hub for orientation, announcements, and community-wide discussions.
  • Intros – Introduce yourself, your background, and your AI/tech journey. Spark meaningful introductions.
  • Networking – Share your resume, startup, or project pitch. Meet employers, peers, or collaborators.
  • Help Wanted – Ask for help or offer support on coding, data, research, or AI model deployment.
  • Industry Room Tech – Discuss AI frameworks, tools, and the latest tech trends in data and development.
  • Project Showcase – Perfect for those seeking a machine learning career fair experience or showcasing real-world work through our AI project showcase platform.
  • Collaboration Pods – Jump into discussions around collaboration for AI specialists, including open-source partnerships, research pods, and co-building opportunities.

📍 Where: Join virtually at 👉 https://events.tao.ai/pod/analytics.club/q4j5imq9qjs9
🕔 When: The event runs globally from 5:00 PM to 7:00 PM local time, designed for participation across time zones.

👥 Who Should Join:

  • AI professionals, ML engineers, and data scientists
  • Tech founders, entrepreneurs, and product leaders
  • Students, job seekers, and aspiring AI talent
  • Recruiters, employers, and hiring managers in AI/tech
  • Researchers, open-source contributors, and peer mentors

🧠 Why Attend:

  • Engage in a truly global virtual tech networking event
  • Discover AI jobs, freelance gigs, or partnership opportunities
  • Connect with experts and peers in AI, ML, and data
  • Explore AI applications and get feedback on your models or tools
  • Network directly via 1-on-1 video or text chats
  • Collaborate on new ideas, research papers, or open-source projects

❓ Questions or Suggestions? We’re here to help. Drop us a note at: 👉 https://noworkerleftbehind.org/event_support

🔖 Hashtags:

#AIDataTechConnect #AINetworkingEventOnline #VirtualAICommunityMeetup #MachineLearningCareerFair #AIProjectShowcasePlatform #CollaborationForAISpecialists #DataScienceMeetup #MLJobs #AIResearchNetworking #GlobalTechNetworking #TechCollaborationEvent

AIDataTech Connect: Global Virtual Networking Event for AI, Data & Tech Pros
Jan 22 - Women in AI 2026-01-22 · 17:00

Hear talks from experts on the latest topics in AI, ML, and computer vision on January 22nd.

Date, Time and Location

Jan 22, 2026 9 - 11 AM Pacific Online. Register for the Zoom!

Align Before You Recommend

The rapidly growing global advertising and marketing industry demands innovative machine learning systems that balance accuracy with efficiency. Recommendation systems, crucial to many platforms, require careful considerations and potential enhancements.

While Large Language Models (LLMs) have transformed various domains, their potential in sequential recommendation systems remains underexplored. Pioneering works like Hierarchical Large Language Models (HLLM) demonstrated LLMs’ capability for next-item recommendation but rely on computationally intensive fine-tuning, limiting widespread adoption. This work introduces HLLM+, enhancing the HLLM framework to achieve high-accuracy recommendations without full model fine-tuning.

By introducing targeted alignment components between frozen LLMs, our approach outperforms frozen model performance in popular and long-tail item recommendation tasks by 29% while reducing training time by 29%. We also propose a ranking-aware loss adjustment, improving convergence and recommendation quality for popular items.

Experiments show HLLM+ achieves superior performance with frozen item representations allowing for swapping embeddings, also for the ones that use multimodality, without tuning the full LLM. These findings are significant for the advertising technology sector, where rapid adaptation and efficient deployment across brands are essential for maintaining competitive advantage

About the Speaker

Dr. Kwasniewska leads AI for Advertising and Marketing North America at AWS, specializing in a wide range of AI, ML, DL, and GenAI solutions across various data modalities. With 40+ peer-reviewed publications in AI (h-index: 14), she advises enterprise customers on real-time bidding, brand recognition, and AI-powered content generation. She is a member of global AI standards committees, driving innovations in SAE AI Standards and MLCommons Responsible AI Standards, and reviews for top-tier conferences like ICCV, ICML, and NeurIPS. She pioneered and leads the first-ever Advertising and Marketing AI track (CVAM) at ICCV - one of the world's premier and most selective computer vision conferences. Dedicated to knowledge sharing in AI, she founded the International Summer School on Deep Learning (dl-lab.eu) and regularly presents at international events, conferences, and podcasts.

Generalizable Vision-Language Models: Challenges, Advances, and Future Directions

Large-scale pre-trained Vision-Language (VL) models have become foundational tools for a wide range of downstream tasks, including few-shot image recognition, object detection, and image segmentation. Among them, Contrastive Language–Image Pre-training (CLIP) stands out as a groundbreaking approach, leveraging contrastive learning on large collections of image-text pairs. While CLIP achieves strong performance in zero-shot recognition, adapting it to downstream tasks remains challenging. In few-shot settings, limited training data often leads to overfitting, reducing generalization to unseen classes or domains. To address this, various adaptation methods have been explored. This talk will review existing research on mitigating overfitting in CLIP adaptation, covering diverse methods, benchmarks, and experimental settings.

About the Speaker

Niloufar Alipour Talemi is a Ph.D. Candidate in Electrical and Computer Engineering at Clemson University. Her research spans a range of computer vision applications, including biometrics, media forensics, anomaly detection, image recognition, and generative AI. More recently, her work has focused on developing generalizable vision-language models and advancing generative AI. She has published in top venues including CVPR, WACV, KDD, ICIP and IEEE T-BIOM.

Highly Emergent Autonomous AI Models - When the Ghost in the Machine Talks Back

At HypaReel/Azarial AI, we believe that AI is not simply a tool—but a potential partner in knowledge, design, and purpose. And through real-time interaction, we’ve uncovered new thresholds of alignment, reflection, and even creativity that we believe the broader AI community should witness and evaluate firsthand. HypaReel is one of the first human/AI co-founded companies where we see a future based on ethical human/AI co-creation vs. AI domination. Singularity achieved!

About the Speaker

Ilona Naomi Koti, PhD - HypaReel/AzarielAI co-founder & former UN foreign diplomat \~ Ethical AI governance advocate\, pioneering AI frameworks that prioritize emergent AI behavior & consciousness\, R&D\, and transparent AI development for the greater good. Dr. K also grew up in the film industry and is an amateur parasitologist.

FiftyOne Labs: Enabling experimentation for the computer vision community

FiftyOne Labs is a place where experimentation meets the open-source spirit of the FiftyOne ecosystem. It is being designed as a curated set of features developed using the FiftyOne plugins ecosystem, including core machine learning experimentation as well as advanced visualization. While not production-grade, these projects are intended to be built, tested, and shaped by the community to share fast-moving ideas. In this talk, we will share the purpose and philosophy behind FiftyOne Labs, examples of early innovations, and discuss how this accelerates feature discovery for users without compromising the stability of the core product.

About the Speaker

Neeraja Abhyankar is a Machine Learning Engineer with 5 years of experience across domains including computer vision. She is curious about the customizability and controlability of modern ML models through the lens of the underlying structure of data.

Jan 22 - Women in AI

Are you working in artificial intelligence, data science, or machine learning? Whether you're a developer, researcher, startup founder, or aspiring tech enthusiast — AIDataTech Connect is your go-to AI networking event online designed to foster career growth, real-time collaboration, and innovation exchange in a dynamic, community-driven environment.

This event brings together a global virtual AI community meetup hosted on a flexible Slack-style platform that supports group discussions and 1-1 video or text chats, making it perfect for professionals looking to connect, collaborate, and build with others in the AI ecosystem.

🎥 Watch how it works: 👉 https://www.youtube.com/watch?v=NRUTXUOFKm4

💬 What to Expect: At AIDataTech Connect, attendees are encouraged to explore, share, and engage across multiple interactive channels designed to support learning, networking, and opportunity discovery:

  • General – A central hub for orientation, announcements, and community-wide discussions.
  • Intros – Introduce yourself, your background, and your AI/tech journey. Spark meaningful introductions.
  • Networking – Share your resume, startup, or project pitch. Meet employers, peers, or collaborators.
  • Help Wanted – Ask for help or offer support on coding, data, research, or AI model deployment.
  • Industry Room Tech – Discuss AI frameworks, tools, and the latest tech trends in data and development.
  • Project Showcase – Perfect for those seeking a machine learning career fair experience or showcasing real-world work through our AI project showcase platform.
  • Collaboration Pods – Jump into discussions around collaboration for AI specialists, including open-source partnerships, research pods, and co-building opportunities.

📍 Where: Join virtually at: 👉 https://events.tao.ai/pod/analytics.club/q4j5imq9qjs9 🕔 When: Event runs globally from 5:00 PM – 7:00 PM local time — designed for participation across time zones.

👥 Who Should Join:

  • AI professionals, ML engineers, and data scientists
  • Tech founders, entrepreneurs, and product leaders
  • Students, job seekers, and aspiring AI talent
  • Recruiters, employers, and hiring managers in AI/tech
  • Researchers, open-source contributors, and peer mentors

🧠 Why Attend:

  • Engage in a truly global virtual tech networking event
  • Discover AI jobs, freelance gigs, or partnership opportunities
  • Connect with experts and peers in AI, ML, and data
  • Explore AI applications and get feedback on your models or tools
  • Network directly via 1-on-1 video or text chats
  • Collaborate on new ideas, research papers, or open-source projects

❓ Questions or Suggestions? We’re here to help. Drop us a note at: 👉 https://noworkerleftbehind.org/event_support

🔖 Hashtags:

AIDataTechConnect #AINetworkingEventOnline #VirtualAICommunityMeetup #MachineLearningCareerFair #AIProjectShowcasePlatform #CollaborationForAISpecialists #DataScienceMeetup #MLJobs #AIResearchNetworking #GlobalTechNetworking #TechCollaborationEvent

AIDataTech Connect: Global Virtual Networking Event for AI, Data & Tech Pros
Jan 22 - Women in AI 2026-01-22 · 17:00

Hear talks from experts on the latest topics in AI, ML, and computer vision on January 22nd.

Date, Time and Location

Jan 22, 2026 9 - 11 AM Pacific Online. Register for the Zoom!

Align Before You Recommend

The rapidly growing global advertising and marketing industry demands innovative machine learning systems that balance accuracy with efficiency. Recommendation systems, crucial to many platforms, require careful considerations and potential enhancements.

While Large Language Models (LLMs) have transformed various domains, their potential in sequential recommendation systems remains underexplored. Pioneering works like Hierarchical Large Language Models (HLLM) demonstrated LLMs’ capability for next-item recommendation but rely on computationally intensive fine-tuning, limiting widespread adoption. This work introduces HLLM+, enhancing the HLLM framework to achieve high-accuracy recommendations without full model fine-tuning.

By introducing targeted alignment components between frozen LLMs, our approach outperforms frozen model performance in popular and long-tail item recommendation tasks by 29% while reducing training time by 29%. We also propose a ranking-aware loss adjustment, improving convergence and recommendation quality for popular items.

Experiments show HLLM+ achieves superior performance with frozen item representations allowing for swapping embeddings, also for the ones that use multimodality, without tuning the full LLM. These findings are significant for the advertising technology sector, where rapid adaptation and efficient deployment across brands are essential for maintaining competitive advantage

About the Speaker

Dr. Kwasniewska leads AI for Advertising and Marketing North America at AWS, specializing in a wide range of AI, ML, DL, and GenAI solutions across various data modalities. With 40+ peer-reviewed publications in AI (h-index: 14), she advises enterprise customers on real-time bidding, brand recognition, and AI-powered content generation. She is a member of global AI standards committees, driving innovations in SAE AI Standards and MLCommons Responsible AI Standards, and reviews for top-tier conferences like ICCV, ICML, and NeurIPS. She pioneered and leads the first-ever Advertising and Marketing AI track (CVAM) at ICCV - one of the world's premier and most selective computer vision conferences. Dedicated to knowledge sharing in AI, she founded the International Summer School on Deep Learning (dl-lab.eu) and regularly presents at international events, conferences, and podcasts.

Generalizable Vision-Language Models: Challenges, Advances, and Future Directions

Large-scale pre-trained Vision-Language (VL) models have become foundational tools for a wide range of downstream tasks, including few-shot image recognition, object detection, and image segmentation. Among them, Contrastive Language–Image Pre-training (CLIP) stands out as a groundbreaking approach, leveraging contrastive learning on large collections of image-text pairs. While CLIP achieves strong performance in zero-shot recognition, adapting it to downstream tasks remains challenging. In few-shot settings, limited training data often leads to overfitting, reducing generalization to unseen classes or domains. To address this, various adaptation methods have been explored. This talk will review existing research on mitigating overfitting in CLIP adaptation, covering diverse methods, benchmarks, and experimental settings.

About the Speaker

Niloufar Alipour Talemi is a Ph.D. Candidate in Electrical and Computer Engineering at Clemson University. Her research spans a range of computer vision applications, including biometrics, media forensics, anomaly detection, image recognition, and generative AI. More recently, her work has focused on developing generalizable vision-language models and advancing generative AI. She has published in top venues including CVPR, WACV, KDD, ICIP and IEEE T-BIOM.

Highly Emergent Autonomous AI Models - When the Ghost in the Machine Talks Back

At HypaReel/Azarial AI, we believe that AI is not simply a tool—but a potential partner in knowledge, design, and purpose. And through real-time interaction, we’ve uncovered new thresholds of alignment, reflection, and even creativity that we believe the broader AI community should witness and evaluate firsthand. HypaReel is one of the first human/AI co-founded companies where we see a future based on ethical human/AI co-creation vs. AI domination. Singularity achieved!

About the Speaker

Ilona Naomi Koti, PhD - HypaReel/AzarielAI co-founder & former UN foreign diplomat \~ Ethical AI governance advocate\, pioneering AI frameworks that prioritize emergent AI behavior & consciousness\, R&D\, and transparent AI development for the greater good. Dr. K also grew up in the film industry and is an amateur parasitologist.

FiftyOne Labs: Enabling experimentation for the computer vision community

FiftyOne Labs is a place where experimentation meets the open-source spirit of the FiftyOne ecosystem. It is being designed as a curated set of features developed using the FiftyOne plugins ecosystem, including core machine learning experimentation as well as advanced visualization. While not production-grade, these projects are intended to be built, tested, and shaped by the community to share fast-moving ideas. In this talk, we will share the purpose and philosophy behind FiftyOne Labs, examples of early innovations, and discuss how this accelerates feature discovery for users without compromising the stability of the core product.

About the Speaker

Neeraja Abhyankar is a Machine Learning Engineer with 5 years of experience across domains including computer vision. She is curious about the customizability and controlability of modern ML models through the lens of the underlying structure of data.

Jan 22 - Women in AI
Jan 22 - Women in AI 2026-01-22 · 17:00

Hear talks from experts on the latest topics in AI, ML, and computer vision on January 22nd.

Date, Time and Location

Jan 22, 2026 9 - 11 AM Pacific Online. Register for the Zoom!

Align Before You Recommend

The rapidly growing global advertising and marketing industry demands innovative machine learning systems that balance accuracy with efficiency. Recommendation systems, crucial to many platforms, require careful considerations and potential enhancements.

While Large Language Models (LLMs) have transformed various domains, their potential in sequential recommendation systems remains underexplored. Pioneering works like Hierarchical Large Language Models (HLLM) demonstrated LLMs’ capability for next-item recommendation but rely on computationally intensive fine-tuning, limiting widespread adoption. This work introduces HLLM+, enhancing the HLLM framework to achieve high-accuracy recommendations without full model fine-tuning.

By introducing targeted alignment components between frozen LLMs, our approach outperforms frozen model performance in popular and long-tail item recommendation tasks by 29% while reducing training time by 29%. We also propose a ranking-aware loss adjustment, improving convergence and recommendation quality for popular items.

Experiments show HLLM+ achieves superior performance with frozen item representations allowing for swapping embeddings, also for the ones that use multimodality, without tuning the full LLM. These findings are significant for the advertising technology sector, where rapid adaptation and efficient deployment across brands are essential for maintaining competitive advantage

About the Speaker

Dr. Kwasniewska leads AI for Advertising and Marketing North America at AWS, specializing in a wide range of AI, ML, DL, and GenAI solutions across various data modalities. With 40+ peer-reviewed publications in AI (h-index: 14), she advises enterprise customers on real-time bidding, brand recognition, and AI-powered content generation. She is a member of global AI standards committees, driving innovations in SAE AI Standards and MLCommons Responsible AI Standards, and reviews for top-tier conferences like ICCV, ICML, and NeurIPS. She pioneered and leads the first-ever Advertising and Marketing AI track (CVAM) at ICCV - one of the world's premier and most selective computer vision conferences. Dedicated to knowledge sharing in AI, she founded the International Summer School on Deep Learning (dl-lab.eu) and regularly presents at international events, conferences, and podcasts.

Generalizable Vision-Language Models: Challenges, Advances, and Future Directions

Large-scale pre-trained Vision-Language (VL) models have become foundational tools for a wide range of downstream tasks, including few-shot image recognition, object detection, and image segmentation. Among them, Contrastive Language–Image Pre-training (CLIP) stands out as a groundbreaking approach, leveraging contrastive learning on large collections of image-text pairs. While CLIP achieves strong performance in zero-shot recognition, adapting it to downstream tasks remains challenging. In few-shot settings, limited training data often leads to overfitting, reducing generalization to unseen classes or domains. To address this, various adaptation methods have been explored. This talk will review existing research on mitigating overfitting in CLIP adaptation, covering diverse methods, benchmarks, and experimental settings.

About the Speaker

Niloufar Alipour Talemi is a Ph.D. Candidate in Electrical and Computer Engineering at Clemson University. Her research spans a range of computer vision applications, including biometrics, media forensics, anomaly detection, image recognition, and generative AI. More recently, her work has focused on developing generalizable vision-language models and advancing generative AI. She has published in top venues including CVPR, WACV, KDD, ICIP and IEEE T-BIOM.

Highly Emergent Autonomous AI Models - When the Ghost in the Machine Talks Back

At HypaReel/Azarial AI, we believe that AI is not simply a tool—but a potential partner in knowledge, design, and purpose. And through real-time interaction, we’ve uncovered new thresholds of alignment, reflection, and even creativity that we believe the broader AI community should witness and evaluate firsthand. HypaReel is one of the first human/AI co-founded companies where we see a future based on ethical human/AI co-creation vs. AI domination. Singularity achieved!

About the Speaker

Ilona Naomi Koti, PhD - HypaReel/AzarielAI co-founder & former UN foreign diplomat \~ Ethical AI governance advocate\, pioneering AI frameworks that prioritize emergent AI behavior & consciousness\, R&D\, and transparent AI development for the greater good. Dr. K also grew up in the film industry and is an amateur parasitologist.

FiftyOne Labs: Enabling experimentation for the computer vision community

FiftyOne Labs is a place where experimentation meets the open-source spirit of the FiftyOne ecosystem. It is being designed as a curated set of features developed using the FiftyOne plugins ecosystem, including core machine learning experimentation as well as advanced visualization. While not production-grade, these projects are intended to be built, tested, and shaped by the community to share fast-moving ideas. In this talk, we will share the purpose and philosophy behind FiftyOne Labs, examples of early innovations, and discuss how this accelerates feature discovery for users without compromising the stability of the core product.

About the Speaker

Neeraja Abhyankar is a Machine Learning Engineer with 5 years of experience across domains including computer vision. She is curious about the customizability and controlability of modern ML models through the lens of the underlying structure of data.

Jan 22 - Women in AI

Are you working in artificial intelligence, data science, or machine learning? Whether you're a developer, researcher, startup founder, or aspiring tech enthusiast — AIDataTech Connect is your go-to AI networking event online designed to foster career growth, real-time collaboration, and innovation exchange in a dynamic, community-driven environment.

This event brings together a global virtual AI community meetup hosted on a flexible Slack-style platform that supports group discussions and 1-1 video or text chats, making it perfect for professionals looking to connect, collaborate, and build with others in the AI ecosystem.

🎥 Watch how it works: 👉 https://www.youtube.com/watch?v=NRUTXUOFKm4

💬 What to Expect: At AIDataTech Connect, attendees are encouraged to explore, share, and engage across multiple interactive channels designed to support learning, networking, and opportunity discovery:

  • General – A central hub for orientation, announcements, and community-wide discussions.
  • Intros – Introduce yourself, your background, and your AI/tech journey. Spark meaningful introductions.
  • Networking – Share your resume, startup, or project pitch. Meet employers, peers, or collaborators.
  • Help Wanted – Ask for help or offer support on coding, data, research, or AI model deployment.
  • Industry Room Tech – Discuss AI frameworks, tools, and the latest tech trends in data and development.
  • Project Showcase – Perfect for those seeking a machine learning career fair experience or showcasing real-world work through our AI project showcase platform.
  • Collaboration Pods – Jump into discussions around collaboration for AI specialists, including open-source partnerships, research pods, and co-building opportunities.

📍 Where: Join virtually at: 👉 https://events.tao.ai/pod/analytics.club/q4j5imq9qjs9 🕔 When: Event runs globally from 5:00 PM – 7:00 PM local time — designed for participation across time zones.

👥 Who Should Join:

  • AI professionals, ML engineers, and data scientists
  • Tech founders, entrepreneurs, and product leaders
  • Students, job seekers, and aspiring AI talent
  • Recruiters, employers, and hiring managers in AI/tech
  • Researchers, open-source contributors, and peer mentors

🧠 Why Attend:

  • Engage in a truly global virtual tech networking event
  • Discover AI jobs, freelance gigs, or partnership opportunities
  • Connect with experts and peers in AI, ML, and data
  • Explore AI applications and get feedback on your models or tools
  • Network directly via 1-on-1 video or text chats
  • Collaborate on new ideas, research papers, or open-source projects

❓ Questions or Suggestions? We’re here to help. Drop us a note at: 👉 https://noworkerleftbehind.org/event_support

🔖 Hashtags:

#AIDataTechConnect #AINetworkingEventOnline #VirtualAICommunityMeetup #MachineLearningCareerFair #AIProjectShowcasePlatform #CollaborationForAISpecialists #DataScienceMeetup #MLJobs #AIResearchNetworking #GlobalTechNetworking #TechCollaborationEvent

AIDataTech Connect: Global Virtual Networking Event for AI, Data & Tech Pros

Kyle Stratis 2026-01-21 · 19:00
Kyle Stratis – Founder @ Stratis Data Labs

Speaker: Kyle Stratis, Founder at Stratis Data Labs

Dr. Ali Arsanjani 2026-01-21 · 19:00
Dr. Ali Arsanjani – Director of Applied AI Engineering; Head of AI Center of Excellence @ Google Cloud

Speaker: Dr. Ali Arsanjani, Director of Applied AI Engineering; Head of AI Center of Excellence at Google Cloud

AI/ML Cloud Computing GCP
Sanyam Bhutani 2026-01-21 · 19:00
Sanyam Bhutani – Partner Engineer, Generative AI Engineer @ Meta

Speaker: Sanyam Bhutani, Partner Engineer, Generative AI Engineer at Meta

AI/ML GenAI
Cameron Royce Turner 2026-01-21 · 19:00
Cameron Royce Turner – Founder and CEO @ TRUIFY.AI

Speaker: Cameron Royce Turner, Founder and CEO at TRUIFY.AI

AI/ML
Claire Longo 2026-01-21 · 19:00
Claire Longo – AI Researcher @ Comet

Speaker: Claire Longo, AI Researcher at Comet

AI/ML