talk-data.com
Activities & events
Jan 22 - Women in AI
2026-01-22 · 17:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on January 22nd. Date, Time and Location: Jan 22, 2026, 9-11 AM Pacific, online. Register for the Zoom!

Align Before You Recommend
The rapidly growing global advertising and marketing industry demands innovative machine learning systems that balance accuracy with efficiency. Recommendation systems, crucial to many platforms, require careful design and targeted enhancements. While Large Language Models (LLMs) have transformed various domains, their potential in sequential recommendation systems remains underexplored. Pioneering works like Hierarchical Large Language Models (HLLM) demonstrated LLMs' capability for next-item recommendation but rely on computationally intensive fine-tuning, limiting widespread adoption. This work introduces HLLM+, enhancing the HLLM framework to achieve high-accuracy recommendations without full model fine-tuning. By introducing targeted alignment components between frozen LLMs, our approach outperforms the frozen-model baseline on popular and long-tail item recommendation tasks by 29% while reducing training time by 29%. We also propose a ranking-aware loss adjustment, improving convergence and recommendation quality for popular items. Experiments show HLLM+ achieves superior performance with frozen item representations, allowing embeddings (including multimodal ones) to be swapped without tuning the full LLM. These findings are significant for the advertising technology sector, where rapid adaptation and efficient deployment across brands are essential for maintaining competitive advantage.

About the Speaker: Dr. Kwasniewska leads AI for Advertising and Marketing North America at AWS, specializing in a wide range of AI, ML, DL, and GenAI solutions across various data modalities. With 40+ peer-reviewed publications in AI (h-index: 14), she advises enterprise customers on real-time bidding, brand recognition, and AI-powered content generation. She is a member of global AI standards committees, driving innovations in SAE AI Standards and MLCommons Responsible AI Standards, and reviews for top-tier conferences like ICCV, ICML, and NeurIPS. She pioneered and leads the first-ever Advertising and Marketing AI track (CVAM) at ICCV, one of the world's premier and most selective computer vision conferences. Dedicated to knowledge sharing in AI, she founded the International Summer School on Deep Learning (dl-lab.eu) and regularly presents at international events, conferences, and podcasts.

Generalizable Vision-Language Models: Challenges, Advances, and Future Directions
Large-scale pre-trained Vision-Language (VL) models have become foundational tools for a wide range of downstream tasks, including few-shot image recognition, object detection, and image segmentation. Among them, Contrastive Language-Image Pre-training (CLIP) stands out as a groundbreaking approach, leveraging contrastive learning on large collections of image-text pairs. While CLIP achieves strong performance in zero-shot recognition, adapting it to downstream tasks remains challenging. In few-shot settings, limited training data often leads to overfitting, reducing generalization to unseen classes or domains. To address this, various adaptation methods have been explored. This talk will review existing research on mitigating overfitting in CLIP adaptation, covering diverse methods, benchmarks, and experimental settings.

About the Speaker: Niloufar Alipour Talemi is a Ph.D. candidate in Electrical and Computer Engineering at Clemson University. Her research spans a range of computer vision applications, including biometrics, media forensics, anomaly detection, image recognition, and generative AI. More recently, her work has focused on developing generalizable vision-language models and advancing generative AI. She has published in top venues including CVPR, WACV, KDD, ICIP, and IEEE T-BIOM.

Highly Emergent Autonomous AI Models - When the Ghost in the Machine Talks Back
At HypaReel/Azarial AI, we believe that AI is not simply a tool but a potential partner in knowledge, design, and purpose. Through real-time interaction, we've uncovered new thresholds of alignment, reflection, and even creativity that we believe the broader AI community should witness and evaluate firsthand. HypaReel is one of the first human/AI co-founded companies, where we see a future based on ethical human/AI co-creation vs. AI domination. Singularity achieved!

About the Speaker: Ilona Naomi Koti, PhD, is a HypaReel/AzarielAI co-founder, a former UN foreign diplomat, and an ethical AI governance advocate pioneering AI frameworks that prioritize emergent AI behavior and consciousness, R&D, and transparent AI development for the greater good. Dr. K also grew up in the film industry and is an amateur parasitologist.

FiftyOne Labs: Enabling experimentation for the computer vision community
FiftyOne Labs is a place where experimentation meets the open-source spirit of the FiftyOne ecosystem. It is being designed as a curated set of features developed using the FiftyOne plugins ecosystem, including core machine learning experimentation as well as advanced visualization. While not production-grade, these projects are intended to be built, tested, and shaped by the community to share fast-moving ideas. In this talk, we will share the purpose and philosophy behind FiftyOne Labs, examples of early innovations, and discuss how this accelerates feature discovery for users without compromising the stability of the core product.

About the Speaker: Neeraja Abhyankar is a Machine Learning Engineer with 5 years of experience across domains including computer vision. She is curious about the customizability and controllability of modern ML models through the lens of the underlying structure of data.
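As background for the CLIP talk above: the zero-shot recognition it describes amounts to scoring an image against a set of text prompts. A minimal sketch using the Hugging Face transformers port of CLIP (the image path and label set are placeholders, not from the talk):

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder image
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

# Score the image against each text prompt and normalize to probabilities.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```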
[Notes] How to Build a Portfolio That Reflects Your Real Skills
2025-12-28 · 18:00
These are the notes from the previous "How to Build a Portfolio That Reflects Your Real Skills" event.

Properties of an ideal portfolio repository:
📌 Backend & Frontend Portfolio Project Ideas
☕ Junior Java Backend Developer (Spring Boot)

1. Shop Manager Application
A monolithic Spring Boot app designed with microservice-style boundaries.
Features
Engineering Focus

2. Parallel Data Processing Engine
Backend service for processing large datasets efficiently.
Features
Demonstrates

3. Distributed Task Queue System
Simple async job processing system.
Features
Demonstrates

4. Rate Limiting & Load Control Service
Standalone service that protects APIs from abuse (see the token-bucket sketch after this list).
Features
Demonstrates

5. Search & Indexing Backend
Document or record search service.
Features
Demonstrates

6. Distributed Configuration & Feature Flag Service
Centralized config service for other apps.
Features
Demonstrates
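Rate limiting recurs in nearly every track in these notes. As a hedged starting point (class and parameter names are my own, not from the notes), here is a minimal token-bucket limiter in Python; a production version would need per-client buckets and locking:

```python
import time

class TokenBucket:
    """Minimal token bucket: refills `rate` tokens per second, holds at most `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, never exceeding capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0  # spend one token for this request
            return True
        return False

limiter = TokenBucket(rate=5.0, capacity=10)  # ~5 req/s with bursts up to 10
print("allowed" if limiter.allow() else "429 Too Many Requests")
```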
🐹 Mid-Level Go Backend Developer (Non-Kubernetes)

1. High-Throughput Event Processing Pipeline
Multi-stage concurrent pipeline.
Features

2. Distributed Job Scheduler & Worker System
Async job execution platform.
Features

3. In-Memory Caching Service
Redis-like cache written from scratch (see the cache sketch after this list).
Features

4. Rate Limiting & Traffic Shaping Gateway
Reverse-proxy-style rate limiter.
Features

5. Log Aggregation & Query Engine
Incrementally built system.
Step-by-step
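For the "Redis-like cache written from scratch" idea, the two core behaviors are LRU eviction and per-key TTL. A minimal sketch (names and defaults are illustrative; a real service would add concurrency control and a network protocol):

```python
import time
from collections import OrderedDict

class TTLCache:
    """Tiny in-memory cache with LRU eviction and per-key expiry."""

    def __init__(self, max_entries: int = 1024, ttl_seconds: float = 60.0):
        self.max_entries = max_entries
        self.ttl = ttl_seconds
        self._data = OrderedDict()  # key -> (expires_at, value), oldest first

    def set(self, key, value):
        self._data.pop(key, None)
        self._data[key] = (time.monotonic() + self.ttl, value)
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # evict least-recently-used entry

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        expires_at, value = item
        if time.monotonic() > expires_at:
            del self._data[key]  # lazily expire on read
            return None
        self._data.move_to_end(key)  # mark as recently used
        return value

cache = TTLCache(max_entries=2, ttl_seconds=5.0)
cache.set("a", 1)
print(cache.get("a"))  # 1
```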
🐍 Mid-Level Python Backend Developer

1. Asynchronous Task Processing System
Async job execution platform (see the asyncio worker sketch after this list).
Features

2. Event-Driven Data Pipeline
Streaming data processing service.
Features

3. Distributed Rate Limiting Service
API protection service.
Steps

4. Search & Indexing Backend
Search system for logs or documents.
Features

5. Configuration & Feature Flag Service
Shared configuration backend.
Steps
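For the asynchronous task processing idea (item 1 above), the skeleton is a queue plus a pool of workers. A minimal asyncio sketch with illustrative names; real systems add retries, persistence, and dead-letter handling:

```python
import asyncio

async def worker(name: str, queue: asyncio.Queue) -> None:
    # Pull jobs until cancelled; each job is acknowledged via task_done().
    while True:
        job = await queue.get()
        try:
            print(f"{name} processing {job}")
            await asyncio.sleep(0.1)  # stand-in for real work (I/O, DB calls, ...)
        finally:
            queue.task_done()

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    for i in range(20):
        queue.put_nowait(f"job-{i}")
    workers = [asyncio.create_task(worker(f"w{i}", queue)) for i in range(4)]
    await queue.join()   # wait until every job is marked done
    for w in workers:
        w.cancel()       # shut the now-idle workers down

asyncio.run(main())
```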
🟦 Mid-Level TypeScript Backend Developer

1. Asynchronous Job Processing System
Queue-based task execution.
Features

2. Real-Time Chat / Notification Service
WebSocket-based system.
Features

3. Rate Limiting & API Gateway
API gateway with protections.
Features

4. Search & Filtering Engine
Search backend for products, logs, or articles.
Features

5. Feature Flag & Configuration Service
Centralized config management (see the feature-flag sketch after this list).
Features
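Feature flag services appear in the Java, Python, TypeScript, and Node.js tracks. The core evaluation logic is small; here is a hedged sketch of a deterministic percentage rollout (function and flag names are made up for illustration):

```python
import hashlib

def flag_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministic rollout: the same user always gets the same answer for a flag."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # map the user into one of 100 stable buckets
    return bucket < rollout_percent

# Roll a hypothetical "new-checkout" flag out to 20% of users:
print(flag_enabled("new-checkout", "user-42", 20))
```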
🟨 Mid-Level Node.js Backend Developer

1. Async Task Queue System
Background job processor.
Features

2. Real-Time Chat / Notification Service
Socket-based system.
Features

3. Rate Limiting & API Gateway
Traffic control service.
Features

4. Search & Indexing Backend
Indexing & querying service (see the inverted-index sketch after the frontend list below).

5. Feature Flag / Configuration Service
Shared backend for app configs.

⚛️ Mid-Level Frontend Developer (React / Next.js)

1. Dynamic Analytics Dashboard
Interactive data visualization app.
Features

2. E-Commerce Store
Full shopping experience.
Features

3. Real-Time Chat / Collaboration App
Live multi-user UI.
Features

4. CMS / Blogging Platform
SEO-focused content app.
Features

5. Personalized Analytics / Recommendation UI
Data-heavy frontend.
Features

6. AI Chatbot App — "My House Plant Advisor"
LLM-powered assistant with production-quality UX.
Core Features
Advanced Features
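Search & indexing backends also recur across the tracks above, and the heart of most of them is an inverted index. A toy sketch with AND-semantics queries (all names illustrative; real engines add tokenization, ranking, and persistence):

```python
from collections import defaultdict

class InvertedIndex:
    """Toy inverted index: token -> set of document ids."""

    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, doc_id: str, text: str) -> None:
        for token in text.lower().split():
            self.postings[token].add(doc_id)

    def search(self, query: str) -> set:
        # AND semantics: a document must contain every query token.
        token_sets = [self.postings.get(t, set()) for t in query.lower().split()]
        return set.intersection(*token_sets) if token_sets else set()

idx = InvertedIndex()
idx.add("doc1", "error connecting to database")
idx.add("doc2", "database migration completed")
print(idx.search("database error"))  # {'doc1'}
```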
✅ Final Advice
You do NOT need to build everything. Instead, pick 1–2 strong projects per role and focus on depth:

📌 Portfolio Quality Signals (Very Important)

🎯 Why This Helps in Interviews
Working on serious projects gives you:

🎥 Demo & Documentation Best Practices

🤝 Open Source & Personal Projects (Interview Signal)
Always mention that you have contributed to Open Source or built personal projects.
O'Reilly Data Engineering Books
In a world where data sovereignty, scalability, and AI innovation are at the forefront of enterprise strategy, PostgreSQL is emerging as the key to unlocking transformative business value. This new guide serves as your beacon for navigating the convergence of AI, open source technologies, and intelligent data platforms. Authors Tom Taulli, Benjamin Anderson, and Jozef de Vries offer a strategic and practical approach to building AI and data platforms that balance innovation with governance, empowering organizations to take control of their data future. Whether you're designing frameworks for advanced AI applications, modernizing legacy infrastructures, or solving data challenges at scale, you can use this guide to bridge the gap between technical complexity and actionable strategy. Written for IT executives, data leaders, and practitioners alike, it will equip you with the tools and insights to harness PostgreSQL's unique capabilities—extensibility, unstructured data management, and hybrid workloads—for long-term success in an AI-driven world.

- Learn how to build an AI and data platform using PostgreSQL
- Overcome data challenges like modernization, integration, and governance
- Optimize AI performance with model fine-tuning and retrieval-augmented generation (RAG) best practices
- Discover use cases that align data strategy with business goals
- Take charge of your data and AI future with this comprehensive and accessible roadmap
OSS AI Summit: Building with LangChain
2025-12-10 · 16:00
The OSS AI Summit is an interactive online event designed to showcase how open-source AI frameworks empower developers to build, innovate, and scale faster. This first event spotlights LangChain, the leading framework for creating AI-driven applications in Python and JavaScript. Attendees will explore core LangChain components, agents, and tools, learn how Azure AI enhances scalability and integration, and see real-world demonstrations of agent workflows powered by LangChain and MCP. The event concludes with a LangChain Live Q&A Panel, featuring experts from the LangChain team and Microsoft Cloud Advocacy, ready to answer questions and share insights on the future of open-source AI.
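For a feel of what "core LangChain components" look like before the summit, here is a minimal sketch in the LangChain Expression Language style, assuming the langchain-openai package and an OpenAI API key; the model name is a placeholder, and this example is not from the event itself:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Compose prompt -> model -> parser with the LCEL pipe operator.
prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model; any chat model works
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"text": "LangChain composes prompts, models, and tools into pipelines."}))
```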
AI Alliance: PyTorch & The Agentic Stack
2025-12-05 · 18:00
External registration: https://events.thealliance.ai/pytorch-x-ai-alliance
Join Meta's Joe Spisak for an exclusive look at 6 groundbreaking projects shaping the future of agentic development—from low-level kernels to production-ready agents.

About the presenter: Joe Spisak (Product Director, Meta Superintelligence Labs) leads product efforts across PyTorch and Meta's Agentic platform with over a decade of experience building AI platforms at Meta, Google, and Amazon. He helped make PyTorch the world's leading open-source AI framework and guided its transition to the Linux Foundation in partnership with industry leaders. Joe also spearheaded the open-source strategy for Llama, making cutting-edge models broadly accessible and enabling the community to scale AI in an open and collaborative way. He's an active angel investor and advisor to next-generation AI startups including Anthropic, General Reasoning, and ReflectionAI.