talk-data.com
Activities & events

Jan 22 - Women in AI 2026-01-22 · 17:00

Hear talks from experts on the latest topics in AI, ML, and computer vision on January 22nd.

Date, Time and Location

Jan 22, 2026, 9–11 AM Pacific, online. Register for the Zoom!

Align Before You Recommend

The rapidly growing global advertising and marketing industry demands innovative machine learning systems that balance accuracy with efficiency. Recommendation systems, crucial to many platforms, therefore require careful design and ongoing enhancement.

While Large Language Models (LLMs) have transformed various domains, their potential in sequential recommendation systems remains underexplored. Pioneering works like Hierarchical Large Language Models (HLLM) demonstrated LLMs’ capability for next-item recommendation but rely on computationally intensive fine-tuning, limiting widespread adoption. This work introduces HLLM+, enhancing the HLLM framework to achieve high-accuracy recommendations without full model fine-tuning.

By introducing targeted alignment components between frozen LLMs, our approach improves on frozen-model performance by 29% in popular and long-tail item recommendation tasks while reducing training time by 29%. We also propose a ranking-aware loss adjustment that improves convergence and recommendation quality for popular items.

Experiments show that HLLM+ achieves superior performance with frozen item representations, allowing embeddings (including multimodal ones) to be swapped without tuning the full LLM. These findings are significant for the advertising technology sector, where rapid adaptation and efficient deployment across brands are essential for maintaining a competitive advantage.

About the Speaker

Dr. Kwasniewska leads AI for Advertising and Marketing North America at AWS, specializing in a wide range of AI, ML, DL, and GenAI solutions across various data modalities. With 40+ peer-reviewed publications in AI (h-index: 14), she advises enterprise customers on real-time bidding, brand recognition, and AI-powered content generation. She is a member of global AI standards committees, driving innovations in SAE AI Standards and MLCommons Responsible AI Standards, and reviews for top-tier conferences like ICCV, ICML, and NeurIPS. She pioneered and leads the first-ever Advertising and Marketing AI track (CVAM) at ICCV - one of the world's premier and most selective computer vision conferences. Dedicated to knowledge sharing in AI, she founded the International Summer School on Deep Learning (dl-lab.eu) and regularly presents at international events, conferences, and podcasts.

Generalizable Vision-Language Models: Challenges, Advances, and Future Directions

Large-scale pre-trained Vision-Language (VL) models have become foundational tools for a wide range of downstream tasks, including few-shot image recognition, object detection, and image segmentation. Among them, Contrastive Language–Image Pre-training (CLIP) stands out as a groundbreaking approach, leveraging contrastive learning on large collections of image-text pairs. While CLIP achieves strong performance in zero-shot recognition, adapting it to downstream tasks remains challenging. In few-shot settings, limited training data often leads to overfitting, reducing generalization to unseen classes or domains. To address this, various adaptation methods have been explored. This talk will review existing research on mitigating overfitting in CLIP adaptation, covering diverse methods, benchmarks, and experimental settings.

About the Speaker

Niloufar Alipour Talemi is a Ph.D. Candidate in Electrical and Computer Engineering at Clemson University. Her research spans a range of computer vision applications, including biometrics, media forensics, anomaly detection, image recognition, and generative AI. More recently, her work has focused on developing generalizable vision-language models and advancing generative AI. She has published in top venues including CVPR, WACV, KDD, ICIP, and IEEE T-BIOM.

Highly Emergent Autonomous AI Models - When the Ghost in the Machine Talks Back

At HypaReel/Azarial AI, we believe that AI is not simply a tool but a potential partner in knowledge, design, and purpose. Through real-time interaction, we’ve uncovered new thresholds of alignment, reflection, and even creativity that we believe the broader AI community should witness and evaluate firsthand. HypaReel is one of the first human/AI co-founded companies, where we see a future based on ethical human/AI co-creation rather than AI domination. Singularity achieved!

About the Speaker

Ilona Naomi Koti, PhD - HypaReel/AzarielAI co-founder & former UN foreign diplomat ~ ethical AI governance advocate, pioneering AI frameworks that prioritize emergent AI behavior & consciousness, R&D, and transparent AI development for the greater good. Dr. K also grew up in the film industry and is an amateur parasitologist.

FiftyOne Labs: Enabling experimentation for the computer vision community

FiftyOne Labs is a place where experimentation meets the open-source spirit of the FiftyOne ecosystem. It is being designed as a curated set of features developed using the FiftyOne plugins ecosystem, including core machine learning experimentation as well as advanced visualization. While not production-grade, these projects are intended to be built, tested, and shaped by the community to share fast-moving ideas. In this talk, we will share the purpose and philosophy behind FiftyOne Labs, show examples of early innovations, and discuss how this approach accelerates feature discovery for users without compromising the stability of the core product.

About the Speaker

Neeraja Abhyankar is a Machine Learning Engineer with 5 years of experience across domains including computer vision. She is curious about the customizability and controllability of modern ML models through the lens of the underlying structure of data.

These are the notes from the previous "How to Build a Portfolio That Reflects Your Real Skills" event:

Properties of an ideal portfolio repository:

  • Built to prove employable skills and readiness for real work
  • Fewer projects, carefully chosen to match job requirements
  • Clean, readable, refactored code that follows best practices
  • Detailed READMEs (setup, features, tech stack, decisions, how to deploy, testing strategy, etc.)
  • Logical, meaningful commits that show the development process <- reviewers can follow the git history for important commits/features
  • Clear architecture (layers, packages, separation of concerns) <- use best practices
  • Unit and integration tests included and explained <- also talk about them in the README
  • Proper validation, exception handling, and edge-case coverage
  • Polished, complete, production-like projects only
  • “Can this person work on our codebase?” <- this is the question reviewers will ask
  • Written for recruiters, hiring managers, and senior engineers
  • Uses industry-relevant and job-listed technologies <- the tech stack should match the CV
  • Well-scoped, realistic features similar to real products
  • Consistent style, structure, and conventions across projects
  • Environment variables, clear setup steps, sample configs
  • Minimal, justified dependencies with clear versioning
  • Proper logging with meaningful log messages
  • No secrets committed; basic security best practices applied
  • Shows awareness of scaling, performance, and future growth <- at least have a “possible improvements” section in the README
  • A list of ADRs explaining design choices and trade-offs <- should be part of the documentation

📌 Backend & Frontend Portfolio Project Ideas

These projects are intentionally reusable across tech stacks. Following tutorials and reusing patterns is expected — what matters is:

  • understanding the architecture
  • explaining trade-offs
  • documenting decisions clearly

☕ Junior Java Backend Developer (Spring Boot)

1. Shop Manager Application

A monolithic Spring Boot app designed with microservice-style boundaries. Features

  • Secure user registration & login
  • Role-based access control using JWT
  • REST APIs for:
      • Users
      • Products
      • Inventory
      • Orders
  • Automatic inventory updates when orders are placed
  • CSV upload for bulk product & inventory import
  • Clear service boundaries (UserService, OrderService, InventoryService, etc.)

Engineering Focus

  • Clean architecture (controllers, services, repositories)
  • Global exception handling
  • Database migrations (Flyway/Liquibase)
  • Unit & integration testing
  • Clear README explaining architecture decisions
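
To make the layering concrete, here is a minimal Spring Boot-style sketch of the controller/service split and the automatic inventory update. It is an illustrative outline, not the project's actual code: all names (OrderController, OrderService, InventoryService, /api/orders) are hypothetical, and it assumes the usual spring-boot-starter-web and Spring transaction dependencies on the classpath.

```java
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Controller layer: a thin HTTP boundary that delegates to the service.
@RestController
@RequestMapping("/api/orders") // hypothetical endpoint
class OrderController {
    private final OrderService orderService;

    OrderController(OrderService orderService) {
        this.orderService = orderService;
    }

    @PostMapping
    ResponseEntity<Long> placeOrder(@RequestBody PlaceOrderRequest request) {
        long orderId = orderService.placeOrder(request);
        return ResponseEntity.status(HttpStatus.CREATED).body(orderId);
    }
}

// Service layer: business rules and the cross-service boundary live here.
@Service
class OrderService {
    private final InventoryService inventoryService;

    OrderService(InventoryService inventoryService) {
        this.inventoryService = inventoryService;
    }

    // One transaction: the order is only created if stock can be reserved,
    // which is what makes the "automatic inventory updates" feature atomic.
    @Transactional
    public long placeOrder(PlaceOrderRequest request) {
        inventoryService.decreaseStock(request.productId(), request.quantity());
        // ... persist the order via an OrderRepository and return its generated id
        return 1L; // placeholder
    }
}

// Request payload (validation annotations would go here in a real project).
record PlaceOrderRequest(long productId, int quantity) {}

// Boundary to the inventory "service" module inside the monolith.
interface InventoryService {
    void decreaseStock(long productId, int quantity);
}
```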

2. Parallel Data Processing Engine

Backend service for processing large datasets efficiently. Features

  • Upload large CSV/log files
  • Split data into chunks
  • Process chunks in parallel using:
      • ExecutorService
      • CompletableFuture
  • Aggregate and return results

Demonstrates

  • Java concurrency
  • Thread pools & async execution
  • Performance optimization
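
A minimal, runnable sketch of the chunk-and-aggregate idea behind this project; file upload and parsing are omitted, and a fixed integer list stands in for the uploaded data:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ChunkProcessor {
    public static void main(String[] args) {
        // Stand-in for parsed rows of an uploaded CSV/log file.
        List<Integer> data =
                IntStream.rangeClosed(1, 1_000).boxed().collect(Collectors.toList());
        int chunkSize = 100;
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());

        // Split the data into chunks and process each chunk asynchronously.
        List<CompletableFuture<Long>> partials =
                IntStream.iterate(0, i -> i < data.size(), i -> i + chunkSize)
                        .mapToObj(i -> data.subList(i, Math.min(i + chunkSize, data.size())))
                        .map(chunk -> CompletableFuture.supplyAsync(
                                () -> chunk.stream().mapToLong(Integer::longValue).sum(), pool))
                        .collect(Collectors.toList());

        // Aggregate the partial results once all chunks complete.
        long total = partials.stream().mapToLong(CompletableFuture::join).sum();
        System.out.println("Total: " + total); // 500500
        pool.shutdown();
    }
}
```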

3. Distributed Task Queue System

Simple async job processing system. Features

  • One service submits tasks
  • Another service processes them asynchronously
  • Uses Kafka or RabbitMQ
  • Tasks: report generation, data transformation

Demonstrates

  • Message-driven architecture
  • Async workflows
  • Eventual consistency
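
For the submitting side, a rough sketch using the Kafka Java client; the broker address, topic name, and payload shape are illustrative assumptions. The worker service would consume the same topic and commit offsets only after processing, which gives at-least-once delivery and motivates idempotent task handlers (the eventual-consistency point above).

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TaskSubmitter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Key by task id so retries for the same task land in the same
            // partition, preserving per-task ordering for the worker.
            producer.send(new ProducerRecord<>("tasks", "task-123",
                    "{\"type\":\"report-generation\",\"reportId\":42}"));
        } // try-with-resources close() flushes buffered records
    }
}
```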

4. Rate Limiting & Load Control Service

Standalone service that protects APIs from abuse. Features

  • Token bucket or sliding window algorithms
  • Redis-backed counters
  • Per-user or per-IP limits

Demonstrates

  • Algorithmic thinking
  • Distributed state
  • API protection patterns
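
A minimal in-process token bucket sketch, with one bucket per client key in a local map; the Redis-backed variant from the feature list would move these counters into Redis so limits hold across instances:

```java
import java.util.concurrent.ConcurrentHashMap;

public class TokenBucketLimiter {
    private static final class Bucket {
        double tokens;
        long lastRefillNanos;
        Bucket(double tokens, long now) { this.tokens = tokens; this.lastRefillNanos = now; }
    }

    private final double capacity;         // max burst size
    private final double refillPerSecond;  // steady-state rate
    private final ConcurrentHashMap<String, Bucket> buckets = new ConcurrentHashMap<>();

    public TokenBucketLimiter(double capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerSecond = refillPerSecond;
    }

    /** Returns true if the request is allowed for this key (user, IP, API key). */
    public boolean tryAcquire(String key) {
        Bucket b = buckets.computeIfAbsent(key, k -> new Bucket(capacity, System.nanoTime()));
        synchronized (b) {
            long now = System.nanoTime();
            // Refill proportionally to elapsed time, capped at capacity.
            b.tokens = Math.min(capacity,
                    b.tokens + (now - b.lastRefillNanos) / 1e9 * refillPerSecond);
            b.lastRefillNanos = now;
            if (b.tokens >= 1.0) {
                b.tokens -= 1.0;
                return true;
            }
            return false;
        }
    }

    public static void main(String[] args) {
        TokenBucketLimiter limiter = new TokenBucketLimiter(5, 1); // burst 5, 1 req/s refill
        for (int i = 0; i < 7; i++) {
            System.out.println(limiter.tryAcquire("user-42")); // 5x true, then false
        }
    }
}
```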

5. Search & Indexing Backend

Document or record search service. Features

  • In-memory inverted index
  • Text search, filters, ranking
  • Optional Elasticsearch integration

Demonstrates

  • Data structures
  • Read-optimized design
  • Trade-offs between custom vs external tools
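
A toy sketch of the core data structure, an inverted index mapping each term to the set of documents containing it, with AND-style queries; ranking, filters, and persistence are left out:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class InvertedIndex {
    private final Map<String, Set<Integer>> index = new HashMap<>();
    private final List<String> docs = new ArrayList<>();

    /** Indexes a document and returns its id. */
    public int add(String text) {
        int docId = docs.size();
        docs.add(text);
        for (String term : text.toLowerCase().split("\\W+")) {
            index.computeIfAbsent(term, t -> new HashSet<>()).add(docId);
        }
        return docId;
    }

    /** AND query: ids of documents containing every query term. */
    public Set<Integer> search(String query) {
        Set<Integer> result = null;
        for (String term : query.toLowerCase().split("\\W+")) {
            Set<Integer> postings = index.getOrDefault(term, Set.of());
            if (result == null) result = new HashSet<>(postings);
            else result.retainAll(postings); // intersect posting sets
        }
        return result == null ? Set.of() : result;
    }

    public static void main(String[] args) {
        InvertedIndex idx = new InvertedIndex();
        idx.add("error connecting to database");
        idx.add("user login error");
        System.out.println(idx.search("error"));          // docs 0 and 1
        System.out.println(idx.search("database error")); // doc 0 only
    }
}
```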

6. Distributed Configuration & Feature Flag Service

Centralized config service for other apps. Features

  • Key-value configuration store
  • Feature flags
  • Caching & refresh mechanisms

Demonstrates

  • Caching strategies
  • Consistency vs availability trade-offs
  • System design for shared services

🐹 Mid-Level Go Backend Developer (Non-Kubernetes)

1. High-Throughput Event Processing Pipeline

Multi-stage concurrent pipeline. Features

  • HTTP/gRPC ingestion
  • Validation & transformation stages
  • Goroutines & channels
  • Worker pools, batching, backpressure
  • Graceful shutdown

2. Distributed Job Scheduler & Worker System

Async job execution platform. Features

  • Job scheduling & delayed execution
  • Retries & idempotency
  • Job states (pending, running, failed, completed)
  • Message queue or gRPC-based workers

3. In-Memory Caching Service

Redis-like cache written from scratch. Features

  • TTL support
  • Eviction strategies (LRU/LFU)
  • Concurrent-safe access
  • Optional disk persistence

4. Rate Limiting & Traffic Shaping Gateway

Reverse-proxy-style rate limiter. Features

  • Token bucket / leaky bucket
  • Circuit breakers
  • Redis-backed distributed limits

5. Log Aggregation & Query Engine

Incrementally built system. Steps

  1. REST API + Postgres (store logs, query logs)
  2. Optimize for massive concurrency
  3. Replace DB with in-memory data structures
  4. Add streaming endpoints using channels & batching

🐍 Mid-Level Python Backend Developer

1. Asynchronous Task Processing System

Async job execution platform. Features

  • Async API submission
  • Worker pool (asyncio or Celery-like)
  • Retries & failure handling
  • Job status tracking
  • Idempotency

2. Event-Driven Data Pipeline

Streaming data processing service. Features

  • Event ingestion
  • Validation & transformation
  • Batching & backpressure handling
  • Output to storage or downstream services

3. Distributed Rate Limiting Service

API protection service. Steps

  • Step 1: Use an existing rate-limiting library
  • Step 2: Implement token bucket / sliding window yourself

4. Search & Indexing Backend

Search system for logs or documents. Features

  • Custom indexing or Elasticsearch
  • Filtering & time-based queries
  • Read-heavy optimization

5. Configuration & Feature Flag Service

Shared configuration backend. Steps

  • Step 1: Use a caching library
  • Step 2: Implement your own cache (explain in README)

🟦 Mid-Level TypeScript Backend Developer

1. Asynchronous Job Processing System

Queue-based task execution. Features

  • BullMQ / RabbitMQ / Redis
  • Retries & scheduling
  • Status tracking

2. Real-Time Chat / Notification Service

WebSocket-based system. Features

  • Presence tracking
  • Message persistence
  • Real-time updates

3. Rate Limiting & API Gateway

API gateway with protections. Features

  • Token bucket / sliding window
  • Response caching
  • Request logging

4. Search & Filtering Engine

Search backend for products, logs, or articles. Features

  • In-memory index or Elasticsearch
  • Pagination & sorting

5. Feature Flag & Configuration Service

Centralized config management. Features

  • Versioning
  • Rollout strategies
  • Caching

🟨 Mid-Level Node.js Backend Developer

1. Async Task Queue System

Background job processor. Features

  • Bull / Redis / RabbitMQ
  • Retries & scheduling
  • Status APIs

2. Real-Time Chat / Notification Service

Socket-based system. Features

  • Rooms
  • Presence tracking
  • Message persistence

3. Rate Limiting & API Gateway

Traffic control service. Features

  • Per-user/API-key limits
  • Logging
  • Optional caching

4. Search & Indexing Backend

Indexing & querying service.


5. Feature Flag / Configuration Service

Shared backend for app configs.


⚛️ Mid-Level Frontend Developer (React / Next.js)

1. Dynamic Analytics Dashboard

Interactive data visualization app. Features

  • Charts & tables
  • Filters & live updates
  • React Query / Redux / Zustand
  • Responsive layouts

2. E-Commerce Store

Full shopping experience. Features

  • Product listings
  • Search, filters, sorting
  • Cart & checkout
  • SSR/SSG with Next.js

3. Real-Time Chat / Collaboration App

Live multi-user UI. Features

  • WebSockets or Firebase
  • Presence indicators
  • Real-time updates

4. CMS / Blogging Platform

SEO-focused content app. Features

  • SSR for SEO
  • Markdown or API-based content
  • Admin editing interface

5. Personalized Analytics / Recommendation UI

Data-heavy frontend. Features

  • Filtering & lazy loading
  • Large dataset handling
  • User-specific insights

6. AI Chatbot App — “My House Plant Advisor”

LLM-powered assistant with production-quality UX. Core Features

  • Chat interface with real-time updates
  • Input normalization & validation
  • Offensive content filtering
  • Unsupported query detection
  • Rate limiting (per user)
  • Caching recent queries
  • Conversation history per session
  • Graceful fallbacks & error handling

Advanced Features

  • Prompt tuning (beginner vs expert users)
  • Structured advice formatting (cards, bullets)
  • Local LLM support
  • Analytics dashboard (popular questions)
  • Voice input/output (speech-to-text, TTS)

✅ Final Advice

You do NOT need to build everything. Instead, pick 1–2 strong projects per role and focus on depth:

  • Explain the architecture clearly
  • Document trade-offs (why you chose X over Y)
  • Show incremental improvements
  • Prove you understand why, not just how

📌 Portfolio Quality Signals (Very Important)

  • Have a long, organic commit history → a single commit (or just a few) is a strong indicator of copy-paste work.
  • Prefer 3–5 complex projects over 20 simple ones → many tiny projects often signal shallow understanding.

🎯 Why This Helps in Interviews

Working on serious projects gives you:

  • Real hands-on practice
  • Concrete anecdotes (stories you can tell in interviews)
  • A safe way to learn technologies you don’t fully know yet
  • Better focus and long-term learning discipline
  • A portfolio that can be ported to another tech stack later (Java → Go, Node → Python, etc.)

🎥 Demo & Documentation Best Practices

  • Create a 2–3 minute demo / walkthrough video:
      • Show the app running
      • Explain what problem it solves
      • Highlight one or two technical decisions
  • At the top of every README:
      • Add a plain-English paragraph explaining what the project does
      • Assume the reader is a complete beginner

🤝 Open Source & Personal Projects (Interview Signal)

Always mention that you have contributed to Open Source or built personal projects.

  • Shows team spirit
  • Shows you can read, understand, and navigate an existing codebase
  • Signals that you can onboard into a real-world repository
  • Makes you sound like an engineer, not just a tutorial follower
[Notes] How to Build a Portfolio That Reflects Your Real Skills
Jozef de Vries – author, Tom Taulli – author, Benjamin Anderson – author

In a world where data sovereignty, scalability, and AI innovation are at the forefront of enterprise strategy, PostgreSQL is emerging as the key to unlocking transformative business value. This new guide serves as your beacon for navigating the convergence of AI, open source technologies, and intelligent data platforms. Authors Tom Taulli, Benjamin Anderson, and Jozef de Vries offer a strategic and practical approach to building AI and data platforms that balance innovation with governance, empowering organizations to take control of their data future. Whether you're designing frameworks for advanced AI applications, modernizing legacy infrastructures, or solving data challenges at scale, you can use this guide to bridge the gap between technical complexity and actionable strategy. Written for IT executives, data leaders, and practitioners alike, it will equip you with the tools and insights to harness PostgreSQL's unique capabilities (extensibility, unstructured data management, and hybrid workloads) for long-term success in an AI-driven world.

  • Learn how to build an AI and data platform using PostgreSQL
  • Overcome data challenges like modernization, integration, and governance
  • Optimize AI performance with model fine-tuning and retrieval-augmented generation (RAG) best practices
  • Discover use cases that align data strategy with business goals
  • Take charge of your data and AI future with this comprehensive and accessible roadmap

data data-engineering relational-databases postgresql AI/ML Data Management RAG
O'Reilly Data Engineering Books

The OSS AI Summit is an interactive online event designed to showcase how open-source AI frameworks empower developers to build, innovate, and scale faster.

This first event spotlights LangChain, the leading framework for creating AI-driven applications in Python and JavaScript. Attendees will explore core LangChain components, agents, and tools, learn how Azure AI enhances scalability and integration, and see real-world demonstrations of agent workflows powered by LangChain and MCP.

The event concludes with a LangChain Live Q&A Panel, featuring experts from the LangChain team and Microsoft Cloud Advocacy, ready to answer questions and share insights on the future of open-source AI.

Register here to have resources delivered to your inbox

OSS AI Summit: Building with LangChain

External registration: https://events.thealliance.ai/pytorch-x-ai-alliance

Join Meta's Joe Spisak for an exclusive look at 6 groundbreaking projects shaping the future of agentic development—from low-level kernels to production-ready agents.

About the presenter: Joe Spisak (Product Director, Meta Superintelligence Labs) leads product efforts across PyTorch and Meta's Agentic platform, with over a decade of experience building AI platforms at Meta, Google, and Amazon. He helped make PyTorch the world's leading open-source AI framework and guided its transition to the Linux Foundation in partnership with industry leaders. Joe also spearheaded the open-source strategy for Llama, making cutting-edge models broadly accessible and enabling the community to scale AI in an open and collaborative way. He's an active angel investor and advisor to next-generation AI startups including Anthropic, General Reasoning, and ReflectionAI.

AI Alliance: PyTorch & The Agentic Stack
