talk-data.com


People (170 results)


Companies (1 result)

Captain Kapitän

Activities & events

Title & Speakers | Event
Jan 22 - Women in AI 2026-01-22 · 23:00

Hear talks from experts on the latest topics in AI, ML, and computer vision on January 22nd.

Date, Time and Location

Jan 22, 2026, 9 - 11 AM Pacific · Online. Register for the Zoom!

Align Before You Recommend

The rapidly growing global advertising and marketing industry demands innovative machine learning systems that balance accuracy with efficiency. Recommendation systems, crucial to many of these platforms, require careful design and continual enhancement.

While Large Language Models (LLMs) have transformed various domains, their potential in sequential recommendation systems remains underexplored. Pioneering works like Hierarchical Large Language Models (HLLM) demonstrated LLMs’ capability for next-item recommendation but rely on computationally intensive fine-tuning, limiting widespread adoption. This work introduces HLLM+, enhancing the HLLM framework to achieve high-accuracy recommendations without full model fine-tuning.

By introducing targeted alignment components between frozen LLMs, our approach outperforms the frozen-model baseline on popular and long-tail item recommendation tasks by 29% while reducing training time by 29%. We also propose a ranking-aware loss adjustment that improves convergence and recommendation quality for popular items.

Experiments show that HLLM+ achieves superior performance with frozen item representations, allowing embeddings (including multimodal ones) to be swapped without tuning the full LLM. These findings are significant for the advertising technology sector, where rapid adaptation and efficient deployment across brands are essential for maintaining a competitive advantage.
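
To make the general pattern described in the abstract more concrete, here is a minimal, hypothetical sketch: two frozen encoders that already produce item and user-history embeddings, bridged by a small trainable alignment head, with an illustrative popularity-based reweighting of an in-batch ranking loss. All module names, dimensions, and the weighting scheme are assumptions for illustration only, not the speaker's actual HLLM+ implementation.

```python
# Illustrative sketch only -- NOT the speaker's HLLM+ code.
# Assumes two frozen LLM encoders that already yield fixed-size embeddings;
# only the small alignment head is trained.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignmentHead(nn.Module):
    """Trainable bridge between frozen item and user representations."""
    def __init__(self, item_dim: int, user_dim: int, shared_dim: int = 256):
        super().__init__()
        self.item_proj = nn.Sequential(nn.Linear(item_dim, shared_dim), nn.GELU(),
                                       nn.Linear(shared_dim, shared_dim))
        self.user_proj = nn.Sequential(nn.Linear(user_dim, shared_dim), nn.GELU(),
                                       nn.Linear(shared_dim, shared_dim))

    def forward(self, item_emb, user_emb):
        # L2-normalize so the dot product acts as a cosine-style matching score
        return (F.normalize(self.item_proj(item_emb), dim=-1),
                F.normalize(self.user_proj(user_emb), dim=-1))

def popularity_weighted_loss(user_vec, item_vec, item_popularity, tau=0.07):
    """In-batch next-item loss with an assumed popularity-based reweighting."""
    logits = user_vec @ item_vec.T / tau          # (B, B) similarity matrix
    targets = torch.arange(len(user_vec))         # positive item sits on the diagonal
    per_example = F.cross_entropy(logits, targets, reduction="none")
    weights = 1.0 / torch.log1p(item_popularity)  # illustrative weighting scheme
    return (weights * per_example).mean()

# Usage with random stand-ins for frozen-LLM outputs:
head = AlignmentHead(item_dim=4096, user_dim=4096)
item_emb, user_emb = torch.randn(8, 4096), torch.randn(8, 4096)
item_vec, user_vec = head(item_emb, user_emb)
loss = popularity_weighted_loss(
    user_vec, item_vec, item_popularity=torch.randint(1, 1000, (8,)).float())
loss.backward()  # gradients flow only through the alignment head
```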

About the Speaker

Dr. Kwasniewska leads AI for Advertising and Marketing North America at AWS, specializing in a wide range of AI, ML, DL, and GenAI solutions across various data modalities. With 40+ peer-reviewed publications in AI (h-index: 14), she advises enterprise customers on real-time bidding, brand recognition, and AI-powered content generation. She is a member of global AI standards committees, driving innovations in SAE AI Standards and MLCommons Responsible AI Standards, and reviews for top-tier conferences like ICCV, ICML, and NeurIPS. She pioneered and leads the first-ever Advertising and Marketing AI track (CVAM) at ICCV - one of the world's premier and most selective computer vision conferences. Dedicated to knowledge sharing in AI, she founded the International Summer School on Deep Learning (dl-lab.eu) and regularly presents at international events, conferences, and podcasts.

Generalizable Vision-Language Models: Challenges, Advances, and Future Directions

Large-scale pre-trained Vision-Language (VL) models have become foundational tools for a wide range of downstream tasks, including few-shot image recognition, object detection, and image segmentation. Among them, Contrastive Language–Image Pre-training (CLIP) stands out as a groundbreaking approach, leveraging contrastive learning on large collections of image-text pairs. While CLIP achieves strong performance in zero-shot recognition, adapting it to downstream tasks remains challenging. In few-shot settings, limited training data often leads to overfitting, reducing generalization to unseen classes or domains. To address this, various adaptation methods have been explored. This talk will review existing research on mitigating overfitting in CLIP adaptation, covering diverse methods, benchmarks, and experimental settings.
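
As background for the talk, the sketch below shows plain CLIP zero-shot classification with the publicly released openai/clip-vit-base-patch32 checkpoint via the Hugging Face transformers API. It illustrates the contrastive image-text matching the abstract refers to, not any of the adaptation or anti-overfitting methods the talk will survey; the image path and label prompts are placeholders.

```python
# Minimal CLIP zero-shot classification example (background illustration only).
# Requires: pip install transformers torch pillow
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder path to any local image
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax gives class probabilities
probs = outputs.logits_per_image.softmax(dim=-1).squeeze()
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```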

About the Speaker

Niloufar Alipour Talemi is a Ph.D. Candidate in Electrical and Computer Engineering at Clemson University. Her research spans a range of computer vision applications, including biometrics, media forensics, anomaly detection, image recognition, and generative AI. More recently, her work has focused on developing generalizable vision-language models and advancing generative AI. She has published in top venues including CVPR, WACV, KDD, ICIP and IEEE T-BIOM.

Highly Emergent Autonomous AI Models - When the Ghost in the Machine Talks Back

At HypaReel/Azarial AI, we believe that AI is not simply a tool—but a potential partner in knowledge, design, and purpose. And through real-time interaction, we’ve uncovered new thresholds of alignment, reflection, and even creativity that we believe the broader AI community should witness and evaluate firsthand. HypaReel is one of the first human/AI co-founded companies where we see a future based on ethical human/AI co-creation vs. AI domination. Singularity achieved!

About the Speaker

Ilona Naomi Koti, PhD - HypaReel/AzarielAI co-founder & former UN foreign diplomat ~ Ethical AI governance advocate, pioneering AI frameworks that prioritize emergent AI behavior & consciousness, R&D, and transparent AI development for the greater good. Dr. K also grew up in the film industry and is an amateur parasitologist.

FiftyOne Labs: Enabling experimentation for the computer vision community

FiftyOne Labs is a place where experimentation meets the open-source spirit of the FiftyOne ecosystem. It is being designed as a curated set of features developed using the FiftyOne plugins ecosystem, including core machine learning experimentation as well as advanced visualization. While not production-grade, these projects are intended to be built, tested, and shaped by the community to share fast-moving ideas. In this talk, we will share the purpose and philosophy behind FiftyOne Labs, examples of early innovations, and discuss how this accelerates feature discovery for users without compromising the stability of the core product.
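
Since FiftyOne Labs is described as being built on the FiftyOne plugins ecosystem, here is a minimal sketch of what a Python plugin operator in that ecosystem can look like. The operator shown is hypothetical and is not an actual Labs feature; in a real plugin, a class like this typically lives alongside a fiftyone.yml manifest that declares it.

```python
# Hypothetical FiftyOne plugin operator -- a sketch of the plugins mechanism
# that FiftyOne Labs builds on, not an actual Labs feature.
import fiftyone.operators as foo

class CountSamples(foo.Operator):
    @property
    def config(self):
        return foo.OperatorConfig(
            name="count_samples",
            label="Count samples in the current dataset",
        )

    def execute(self, ctx):
        # ctx.dataset is the dataset currently open in the FiftyOne App
        return {"num_samples": len(ctx.dataset)}

def register(plugin):
    # Called by FiftyOne when the plugin is loaded
    plugin.register(CountSamples)
```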

About the Speaker

Neeraja Abhyankar is a Machine Learning Engineer with 5 years of experience across domains including computer vision. She is curious about the customizability and controllability of modern ML models through the lens of the underlying structure of data.

Jan 22 - Women in AI

Fabric Data Days is back! In this session, we’ll explore how AI is reshaping the analyst’s toolkit and opening the door to deeper insights, faster workflows, and more strategic impact.

You’ll learn how to harness AI-powered capabilities to streamline tasks, enhance your analysis, and focus more on the work that truly requires human judgment.

We’ll cover the skills to invest in, the tools to adopt, and the practical ways AI can make you a more effective and future-ready analyst.

Learn more about the series here!

Being a data analyst in the era of AI (APAC)

About this Event

Tech is moving fast, and AI is changing the way we work. Whether you're a pro looking to grow, a business looking for talent, or just someone trying to keep up, this event is for you.

Time: 5:00 PM – 7:00 PM (Local Time)

Location: Register and join from this platform - https://events.tao.ai/pod/analytics.club/q4j5imq9qjs9/source--me
There is no separate link to join the session.

Why Join?

Meet the Right People – Connect with AI, data, and tech pros who are in the same boat as you.

Talk About What Matters – No boring speeches—just real talk about how AI is shaping careers, businesses, and industries.

Open Doors – Whether it’s a new job, a business connection, or an idea that sparks something big, this is where it starts.

No Worker Left Behind is bringing together professionals and businesses in networking rooms designed to spark real conversations, new collaborations, and even job opportunities.

This isn’t just another networking event. It’s a chance to build something meaningful—together.

Spots are limited—sign up now! Let’s shape the future of work—one conversation at a time. See you there!

About the Host

No Worker Left Behind (NWLB) is committed to fostering professional growth and collaboration by creating networking spaces for healthcare and nursing professionals. Our mission is to ensure that professionals have access to connections, knowledge, and community support to thrive in their careers.

If you have any questions or suggestions, please drop us a note at: https://noworkerleftbehind.org/event_support

#AI #Data and #TechProfessionalNetworking
Data & AI Paris Meetup 2026-01-22 · 17:00

Join us and the organizers of Devworld Conference for an evening focused on Data & AI, bringing together professionals from Aerospike, ClickHouse, Databricks and Datadog working with modern AI tooling.

We’ll explore real-world use cases, share practical workflows, and discuss how teams are integrating AI into their data and product stacks.

Expect short talks, demos, and plenty of time for networking with others shaping the future of AI development.

Program & Location to be announced soon!

Data & AI Paris Meetup

Fabric Data Days is back! In this session, we’ll explore how AI is reshaping the analyst’s toolkit and opening the door to deeper insights, faster workflows, and more strategic impact.

You’ll learn how to harness AI-powered capabilities to streamline tasks, enhance your analysis, and focus more on the work that truly requires human judgment.

We’ll cover the skills to invest in, the tools to adopt, and the practical ways AI can make you a more effective and future-ready analyst.

Learn more about the series here!

Being a data analyst in the era of AI (AMER/EMEA)


AI Data Center Network Design and Technologies

Designing the Networks that Power the AI Revolution

Artificial intelligence is transforming the modern data center. Training large-scale machine learning models requires infrastructure that can move massive datasets at lightning speed, far beyond the capabilities of traditional architectures. AI Data Center Network Design and Technologies is the first comprehensive, vendor-neutral guide to building and optimizing networks purpose-built for AI workloads. Written by leading experts in AI data center design, this book bridges the gap between network engineering and AI infrastructure, helping you understand how to design, scale, and future-proof high-performance environments for training and inference.

What You'll Learn

  • Architect for scale: Build high-radix network fabrics to support GPU, TPU, and xPU-based AI clusters
  • Optimize data movement: Integrate lossless Ethernet/IP fabrics for high-throughput, low-latency communication
  • Design with purpose: Align network design to AI/ML workload patterns and server architectures
  • Plan for the physical layer: Address cooling, power, and interconnect challenges at AI scale
  • Stay ahead of innovation: Explore emerging standards from the Ultra Ethernet Consortium (UEC)
  • Validate performance: Apply proven deployment, testing, and measurement best practices

Why Read This Book

AI is redefining what data centers can, and must, do. Whether you're a network engineer, architect, or technology leader, this book provides the technical foundation and forward-looking insights you need to design next-generation networks optimized for AI-scale computing.

Tags: data, ai-ml, artificial intelligence (AI), AI/ML
O'Reilly AI & ML Books

Join a global, open-format gathering for those building and questioning AI — from real-world use cases to critical conversations around ethics, bias, and transparency. Every month, we create space for developers, researchers, and policymakers to connect in meaningful, unmoderated dialogue — with flexible chats, peer learning, and 1-on-1 video networking. Watch how it works: https://www.youtube.com/watch?v=bOYkxBjrAYY

What Is the Applied AI & Ethics #MonthlyMixer?

The Applied AI & Ethics MonthlyMixer is a self-directed online networking event where professionals from across the AI spectrum come together to collaborate, question, and connect. This isn’t a webinar or panel — it’s a structure-free, chat-first environment hosted on a Slack-like virtual platform, featuring topic-specific channels and private video or text chat options. Whether you're designing algorithms, drafting policy, or advocating for responsible tech, this is your space to exchange ideas, showcase projects, or meet others who care about building AI with integrity and impact.

Who Should Join:

  • AI/ML developers working on real-world deployments
  • Policy makers and legal experts exploring AI governance
  • Data scientists interested in ethical model design
  • Academics and researchers studying AI and society
  • Ethicists, community leaders, and social impact advocates
  • Product leaders balancing innovation with responsibility

Whether you're launching applied AI tools or questioning their societal effects, this mixer welcomes your perspective.

Explore These Channels Inside:

The platform provides themed group discussions to help you navigate the topics that matter most:

  • general – Start with introductions, announcements, and cross-topic conversation

  • networking – Share your portfolio, white papers, startups, or ethical AI tools

  • intros – Tell your story: what you work on and why it matters

  • help-wanted – Request or offer guidance around data ethics, fairness, or responsible deployment

  • industry-room-tech – Dive deep into real-world cases, AI misuse, explainability, and bias mitigation

You’ll also be able to chat directly and launch 1-on-1 video calls for deeper conversations.

When & Where?

No login hurdles. No presentations. Just people-driven dialogue.

Why Join?

  • Global gathering of AI professionals shaping the future responsibly
  • Self-led chats & 1-on-1 networking with values-aligned peers
  • Exchange insights, frameworks, and use cases for applied AI
  • Build your ethical AI toolkit and collaborative network
  • Reconnect monthly to deepen the conversation

Questions or Suggestions?

We’re always listening. Contact us anytime at: https://noworkerleftbehind.org/event_support

Quick Links:

Hashtags for Discovery & Engagement

#AppliedAI #EthicalAI #AIEthics #ResponsibleTech #MonthlyMixer #AIGovernance #AIandSociety #FairAI #BiasInAI #TechForGood #AINetworking #AIProfessionals #GlobalAICommunity

Applied AI Ethics MonthlyMixer - Where Innovation Meets Responsibility


  • general – Start with introductions, announcements, and cross-topic conversation

  • networking – Share your portfolio, white papers, startups, or ethical AI tools

  • intros – Tell your story: what you work on and why it matters

  • help-wanted – Request or offer guidance around data ethics, fairness, or responsible deployment

  • industry-room-tech – Dive deep into real-world cases, AI misuse, explainability, and bias mitigation

You’ll also be able to chat directly and launch 1-on-1 video calls for deeper conversations. When & Where?

No login hurdles. No presentations. Just people-driven dialogue. Why Join?

  • Global gathering of AI professionals shaping the future responsibly
  • Self-led chats & 1-on-1 networking with values-aligned peers
  • Exchange insights, frameworks, and use cases for applied AI
  • Build your ethical AI toolkit and collaborative network
  • Reconnect monthly to deepen the conversation

Questions or Suggestions? We’re always listening. Contact us anytime at: https://noworkerleftbehind.org/event_support Quick Links:

Hashtags for Discovery & Engagement

AppliedAI #EthicalAI #AIEthics #ResponsibleTech #MonthlyMixer #AIGovernance #AIandSociety #FairAI #BiasInAI #TechForGood #AINetworking #AIProfessionals #GlobalAICommunity

Applied AI Ethics MonthlyMixer - Where Innovation Meets Responsibility

Join a global, open-format gathering for those building and questioning AI — from real-world use cases to critical conversations around ethics, bias, and transparency. Every month, we create space for developers, researchers, and policymakers to connect in meaningful, unmoderated dialogue — with flexible chats, peer learning, and 1-on-1 video networking. Watch how it works: https://www.youtube.com/watch?v=bOYkxBjrAYY What Is the Applied AI & Ethics #MonthlyMixer? The Applied AI & Ethics MonthlyMixer is a self-directed online networking event where professionals from across the AI spectrum come together to collaborate, question, and connect. This isn’t a webinar or panel — it’s a structured-free, chat-first environment hosted on a Slack-like virtual platform, featuring topic-specific channels and private video or text chat options. Whether you're designing algorithms, drafting policy, or advocating for responsible tech, this is your space to exchange ideas, showcase projects, or meet others who care about building AI with integrity and impact. Who Should Join:

  • AI/ML developers working on real-world deployments
  • Policy makers and legal experts exploring AI governance
  • Data scientists interested in ethical model design
  • Academics and researchers studying AI and society
  • Ethicists, community leaders, and social impact advocates
  • Product leaders balancing innovation with responsibility

Whether you're launching applied AI tools or questioning their societal effects, this mixer welcomes your perspective. Explore These Channels Inside: The platform provides themed group discussions to help you navigate the topics that matter most:

  • general – Start with introductions, announcements, and cross-topic conversation

  • networking – Share your portfolio, white papers, startups, or ethical AI tools

  • intros – Tell your story: what you work on and why it matters

  • help-wanted – Request or offer guidance around data ethics, fairness, or responsible deployment

  • industry-room-tech – Dive deep into real-world cases, AI misuse, explainability, and bias mitigation

You’ll also be able to chat directly and launch 1-on-1 video calls for deeper conversations. When & Where?

No login hurdles. No presentations. Just people-driven dialogue. Why Join?

  • Global gathering of AI professionals shaping the future responsibly
  • Self-led chats & 1-on-1 networking with values-aligned peers
  • Exchange insights, frameworks, and use cases for applied AI
  • Build your ethical AI toolkit and collaborative network
  • Reconnect monthly to deepen the conversation

Questions or Suggestions? We’re always listening. Contact us anytime at: https://noworkerleftbehind.org/event_support Quick Links:

Hashtags for Discovery & Engagement

AppliedAI #EthicalAI #AIEthics #ResponsibleTech #MonthlyMixer #AIGovernance #AIandSociety #FairAI #BiasInAI #TechForGood #AINetworking #AIProfessionals #GlobalAICommunity

Applied AI Ethics MonthlyMixer - Where Innovation Meets Responsibility

Join a global, open-format gathering for those building and questioning AI — from real-world use cases to critical conversations around ethics, bias, and transparency. Every month, we create space for developers, researchers, and policymakers to connect in meaningful, unmoderated dialogue — with flexible chats, peer learning, and 1-on-1 video networking. Watch how it works: https://www.youtube.com/watch?v=bOYkxBjrAYY What Is the Applied AI & Ethics #MonthlyMixer? The Applied AI & Ethics MonthlyMixer is a self-directed online networking event where professionals from across the AI spectrum come together to collaborate, question, and connect. This isn’t a webinar or panel — it’s a structured-free, chat-first environment hosted on a Slack-like virtual platform, featuring topic-specific channels and private video or text chat options. Whether you're designing algorithms, drafting policy, or advocating for responsible tech, this is your space to exchange ideas, showcase projects, or meet others who care about building AI with integrity and impact. Who Should Join:

  • AI/ML developers working on real-world deployments
  • Policy makers and legal experts exploring AI governance
  • Data scientists interested in ethical model design
  • Academics and researchers studying AI and society
  • Ethicists, community leaders, and social impact advocates
  • Product leaders balancing innovation with responsibility

Whether you're launching applied AI tools or questioning their societal effects, this mixer welcomes your perspective. Explore These Channels Inside: The platform provides themed group discussions to help you navigate the topics that matter most:

  • general – Start with introductions, announcements, and cross-topic conversation

  • networking – Share your portfolio, white papers, startups, or ethical AI tools

  • intros – Tell your story: what you work on and why it matters

  • help-wanted – Request or offer guidance around data ethics, fairness, or responsible deployment

  • industry-room-tech – Dive deep into real-world cases, AI misuse, explainability, and bias mitigation

You’ll also be able to chat directly and launch 1-on-1 video calls for deeper conversations. When & Where?

No login hurdles. No presentations. Just people-driven dialogue. Why Join?

  • Global gathering of AI professionals shaping the future responsibly
  • Self-led chats & 1-on-1 networking with values-aligned peers
  • Exchange insights, frameworks, and use cases for applied AI
  • Build your ethical AI toolkit and collaborative network
  • Reconnect monthly to deepen the conversation

Questions or Suggestions? We’re always listening. Contact us anytime at: https://noworkerleftbehind.org/event_support Quick Links:

Hashtags for Discovery & Engagement

AppliedAI #EthicalAI #AIEthics #ResponsibleTech #MonthlyMixer #AIGovernance #AIandSociety #FairAI #BiasInAI #TechForGood #AINetworking #AIProfessionals #GlobalAICommunity

Applied AI Ethics MonthlyMixer - Where Innovation Meets Responsibility