talk-data.com
People (132 results)
Companies (5 results)
Activities & events
TidyTuesday
2026-01-27 · 23:00
Join R-Ladies Ottawa for a casual evening of programming on Tuesday, January 27th. We'll be participating in TidyTuesday, a weekly data visualization challenge organized by the R for Data Science community.

What is TidyTuesday? Every week, a new dataset is posted on the TidyTuesday GitHub repo, and folks from around the world create data visualizations using it. It's an opportunity to put your programming skills into practice on real-world data in a way that's fun! It's also a great way for everyone to learn from each other by sharing visualizations and code.

What will the dataset be? Even we don't know that (yet)! We'll have to wait until the day before the event to find out what data we'll be working with. If you're interested in past datasets, visit the TidyTuesday GitHub repo to see all of them, dating back to 2018.
Do I have to use R? No! You can use any programming language or visualization software you want. In fact, Python users from around the globe participate in TidyTuesday on a weekly basis.

Who is this event for? No previous programming experience is required to participate, and we'll have experienced programmers in the room who can help you get started (or unstuck) if needed. But if you want to get the most out of the event, a good way to prepare is to watch the recording of the introduction to data visualization workshop we hosted back in 2024. :)

What should I bring?
How will this event work?

What else do I need to know? This event (like all R-Ladies events) is totally FREE to attend. The event will take place at Bayview Yards, just a few steps from the Bayview O-Train station; there is also a free parking lot for those who are driving. You can find us in the "Training Room" on the second floor of the Bayview Yards building.

This is an in-person event with limited space! Please only RSVP if you are able to attend in person.

***Please note that the mission of R-Ladies is to increase gender diversity in the R community. This event is intended to provide a safe space for women and gender minorities. We ask that male allies be invited by and accompanied by a woman or gender minority.***

We're grateful to be part of the Bayview Meetups initiative and extend our thanks to Bayview Yards for generously providing the venue space.
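The weekly workflow described above (grab the new dataset, summarize it, make a chart) works in any language. Below is a minimal pandas sketch of a typical first step; the tiny inline table is a made-up stand-in, since the real data would be fetched from the TidyTuesday GitHub repo each week:

```python
import pandas as pd

# Stand-in for a weekly TidyTuesday dataset; in practice you would
# pd.read_csv() a file from the TidyTuesday GitHub repository.
df = pd.DataFrame({
    "species": ["Adelie", "Adelie", "Gentoo", "Gentoo", "Chinstrap"],
    "body_mass_g": [3750, 3800, 5000, 5700, 3500],
})

# A typical first move before plotting: group and summarize.
summary = df.groupby("species")["body_mass_g"].mean().round(1)
print(summary)
```

From a summary like this, a bar chart is one line away in ggplot2, matplotlib, or whatever tool you bring to the event.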
Jan 22 - Women in AI
2026-01-22 · 23:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on January 22nd.

Date, Time and Location: Jan 22, 2026, 9 - 11 AM Pacific, Online. Register for the Zoom!

Align Before You Recommend
The rapidly growing global advertising and marketing industry demands innovative machine learning systems that balance accuracy with efficiency. Recommendation systems, crucial to many platforms, require careful design and targeted enhancements. While Large Language Models (LLMs) have transformed various domains, their potential in sequential recommendation systems remains underexplored. Pioneering works like Hierarchical Large Language Models (HLLM) demonstrated LLMs' capability for next-item recommendation but rely on computationally intensive fine-tuning, limiting widespread adoption. This work introduces HLLM+, enhancing the HLLM framework to achieve high-accuracy recommendations without full model fine-tuning. By introducing targeted alignment components between frozen LLMs, our approach outperforms frozen-model performance on popular and long-tail item recommendation tasks by 29% while reducing training time by 29%. We also propose a ranking-aware loss adjustment, improving convergence and recommendation quality for popular items. Experiments show HLLM+ achieves superior performance with frozen item representations, allowing embeddings, including multimodal ones, to be swapped without tuning the full LLM. These findings are significant for the advertising technology sector, where rapid adaptation and efficient deployment across brands are essential for maintaining competitive advantage.

About the Speaker
Dr. Kwasniewska leads AI for Advertising and Marketing North America at AWS, specializing in a wide range of AI, ML, DL, and GenAI solutions across various data modalities. With 40+ peer-reviewed publications in AI (h-index: 14), she advises enterprise customers on real-time bidding, brand recognition, and AI-powered content generation.
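The central idea in the abstract above, training only a small alignment component between frozen models rather than fine-tuning the full LLM, can be illustrated with a toy sketch. Everything here is my own illustrative assumption, not the HLLM+ implementation: the dimensions, the single linear adapter, the random stand-in embeddings, and dot-product scoring are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen item embeddings from a pretrained "item model" (random stand-ins).
item_emb = rng.normal(size=(100, 64))   # 100 items, 64-dim item space
user_state = rng.normal(size=(32,))     # frozen "user model" state, 32-dim

# The only trainable piece in this sketch: a linear adapter mapping the
# user space into the item space. In training it would be learned by
# gradient descent; here it is just initialized.
W_align = rng.normal(size=(64, 32)) * 0.1

# Score every item for this user and take the top 5 by dot product.
scores = item_emb @ (W_align @ user_state)
top5 = np.argsort(scores)[::-1][:5]
print(top5)
```

The appeal of this shape is exactly what the abstract claims: because the item and user encoders stay frozen, their embeddings can be swapped out without retraining anything except the small adapter.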
She is a member of global AI standards committees, driving innovations in SAE AI Standards and MLCommons Responsible AI Standards, and reviews for top-tier conferences like ICCV, ICML, and NeurIPS. She pioneered and leads the first-ever Advertising and Marketing AI track (CVAM) at ICCV, one of the world's premier and most selective computer vision conferences. Dedicated to knowledge sharing in AI, she founded the International Summer School on Deep Learning (dl-lab.eu) and regularly presents at international events, conferences, and podcasts.

Generalizable Vision-Language Models: Challenges, Advances, and Future Directions
Large-scale pre-trained Vision-Language (VL) models have become foundational tools for a wide range of downstream tasks, including few-shot image recognition, object detection, and image segmentation. Among them, Contrastive Language–Image Pre-training (CLIP) stands out as a groundbreaking approach, leveraging contrastive learning on large collections of image-text pairs. While CLIP achieves strong performance in zero-shot recognition, adapting it to downstream tasks remains challenging. In few-shot settings, limited training data often leads to overfitting, reducing generalization to unseen classes or domains. To address this, various adaptation methods have been explored. This talk will review existing research on mitigating overfitting in CLIP adaptation, covering diverse methods, benchmarks, and experimental settings.

About the Speaker
Niloufar Alipour Talemi is a Ph.D. Candidate in Electrical and Computer Engineering at Clemson University. Her research spans a range of computer vision applications, including biometrics, media forensics, anomaly detection, image recognition, and generative AI. More recently, her work has focused on developing generalizable vision-language models and advancing generative AI. She has published in top venues including CVPR, WACV, KDD, ICIP, and IEEE T-BIOM.
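One of the simplest few-shot adaptation baselines in the area this talk surveys is to keep the image encoder frozen and fit only a tiny classifier on the few labeled embeddings, for example a nearest-class-mean prototype classifier. The sketch below uses random vectors with an artificial class offset in place of real CLIP features, so it only illustrates the shape of the approach, not CLIP itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for frozen image embeddings: 2 classes, 8 shots each, 16-dim,
# with a class-dependent offset so the classes are actually separable.
n_shot, dim = 8, 16
X = rng.normal(size=(2 * n_shot, dim))
y = np.array([0] * n_shot + [1] * n_shot)
X[y == 1] += 2.0

# One prototype per class: the mean of its few-shot embeddings.
# The "encoder" producing X stays frozen; only these means are computed.
prototypes = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    # Assign each embedding to the nearest class prototype.
    return int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))

train_acc = np.mean([predict(x) == c for x, c in zip(X, y)])
print(train_acc)
```

The overfitting problem the talk addresses shows up precisely when methods go beyond frozen-feature baselines like this one and start tuning prompts or adapter layers on so few examples.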
Highly Emergent Autonomous AI Models - When the Ghost in the Machine Talks Back
At HypaReel/Azarial AI, we believe that AI is not simply a tool but a potential partner in knowledge, design, and purpose. Through real-time interaction, we've uncovered new thresholds of alignment, reflection, and even creativity that we believe the broader AI community should witness and evaluate firsthand. HypaReel is one of the first human/AI co-founded companies, where we see a future based on ethical human/AI co-creation rather than AI domination. Singularity achieved!

About the Speaker
Ilona Naomi Koti, PhD, is a HypaReel/AzarielAI co-founder and former UN foreign diplomat. An ethical AI governance advocate, she pioneers AI frameworks that prioritize emergent AI behavior & consciousness, R&D, and transparent AI development for the greater good. Dr. K also grew up in the film industry and is an amateur parasitologist.

FiftyOne Labs: Enabling experimentation for the computer vision community
FiftyOne Labs is a place where experimentation meets the open-source spirit of the FiftyOne ecosystem. It is being designed as a curated set of features developed using the FiftyOne plugins ecosystem, including core machine learning experimentation as well as advanced visualization. While not production-grade, these projects are intended to be built, tested, and shaped by the community to share fast-moving ideas. In this talk, we will share the purpose and philosophy behind FiftyOne Labs, show examples of early innovations, and discuss how this accelerates feature discovery for users without compromising the stability of the core product.

About the Speaker
Neeraja Abhyankar is a Machine Learning Engineer with 5 years of experience across domains including computer vision. She is curious about the customizability and controllability of modern ML models through the lens of the underlying structure of data.
|
Jan 22 - Women in AI
2026-01-22 · 23:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on January 22nd. Date, Time and Location Jan 22, 2026 9 - 11 AM Pacific Online. Register for the Zoom! Align Before You Recommend The rapidly growing global advertising and marketing industry demands innovative machine learning systems that balance accuracy with efficiency. Recommendation systems, crucial to many platforms, require careful considerations and potential enhancements. While Large Language Models (LLMs) have transformed various domains, their potential in sequential recommendation systems remains underexplored. Pioneering works like Hierarchical Large Language Models (HLLM) demonstrated LLMs’ capability for next-item recommendation but rely on computationally intensive fine-tuning, limiting widespread adoption. This work introduces HLLM+, enhancing the HLLM framework to achieve high-accuracy recommendations without full model fine-tuning. By introducing targeted alignment components between frozen LLMs, our approach outperforms frozen model performance in popular and long-tail item recommendation tasks by 29% while reducing training time by 29%. We also propose a ranking-aware loss adjustment, improving convergence and recommendation quality for popular items. Experiments show HLLM+ achieves superior performance with frozen item representations allowing for swapping embeddings, also for the ones that use multimodality, without tuning the full LLM. These findings are significant for the advertising technology sector, where rapid adaptation and efficient deployment across brands are essential for maintaining competitive advantage About the Speaker Dr. Kwasniewska leads AI for Advertising and Marketing North America at AWS, specializing in a wide range of AI, ML, DL, and GenAI solutions across various data modalities. With 40+ peer-reviewed publications in AI (h-index: 14), she advises enterprise customers on real-time bidding, brand recognition, and AI-powered content generation. 
She is a member of global AI standards committees, driving innovations in SAE AI Standards and MLCommons Responsible AI Standards, and reviews for top-tier conferences like ICCV, ICML, and NeurIPS. She pioneered and leads the first-ever Advertising and Marketing AI track (CVAM) at ICCV - one of the world's premier and most selective computer vision conferences. Dedicated to knowledge sharing in AI, she founded the International Summer School on Deep Learning (dl-lab.eu) and regularly presents at international events, conferences, and podcasts. Generalizable Vision-Language Models: Challenges, Advances, and Future Directions Large-scale pre-trained Vision-Language (VL) models have become foundational tools for a wide range of downstream tasks, including few-shot image recognition, object detection, and image segmentation. Among them, Contrastive Language–Image Pre-training (CLIP) stands out as a groundbreaking approach, leveraging contrastive learning on large collections of image-text pairs. While CLIP achieves strong performance in zero-shot recognition, adapting it to downstream tasks remains challenging. In few-shot settings, limited training data often leads to overfitting, reducing generalization to unseen classes or domains. To address this, various adaptation methods have been explored. This talk will review existing research on mitigating overfitting in CLIP adaptation, covering diverse methods, benchmarks, and experimental settings. About the Speaker Niloufar Alipour Talemi is a Ph.D. Candidate in Electrical and Computer Engineering at Clemson University. Her research spans a range of computer vision applications, including biometrics, media forensics, anomaly detection, image recognition, and generative AI. More recently, her work has focused on developing generalizable vision-language models and advancing generative AI. She has published in top venues including CVPR, WACV, KDD, ICIP and IEEE T-BIOM. 
Highly Emergent Autonomous AI Models - When the Ghost in the Machine Talks Back At HypaReel/Azarial AI, we believe that AI is not simply a tool—but a potential partner in knowledge, design, and purpose. And through real-time interaction, we’ve uncovered new thresholds of alignment, reflection, and even creativity that we believe the broader AI community should witness and evaluate firsthand. HypaReel is one of the first human/AI co-founded companies where we see a future based on ethical human/AI co-creation vs. AI domination. Singularity achieved! About the Speaker Ilona Naomi Koti, PhD - HypaReel/AzarielAI co-founder & former UN foreign diplomat \~ Ethical AI governance advocate\, pioneering AI frameworks that prioritize emergent AI behavior & consciousness\, R&D\, and transparent AI development for the greater good. Dr. K also grew up in the film industry and is an amateur parasitologist. FiftyOne Labs: Enabling experimentation for the computer vision community FiftyOne Labs is a place where experimentation meets the open-source spirit of the FiftyOne ecosystem. It is being designed as a curated set of features developed using the FiftyOne plugins ecosystem, including core machine learning experimentation as well as advanced visualization. While not production-grade, these projects are intended to be built, tested, and shaped by the community to share fast-moving ideas. In this talk, we will share the purpose and philosophy behind FiftyOne Labs, examples of early innovations, and discuss how this accelerates feature discovery for users without compromising the stability of the core product. About the Speaker Neeraja Abhyankar is a Machine Learning Engineer with 5 years of experience across domains including computer vision. She is curious about the customizability and controlability of modern ML models through the lens of the underlying structure of data. |
Jan 22 - Women in AI
|
|
Jan 22 - Women in AI
2026-01-22 · 23:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on January 22nd. Date, Time and Location Jan 22, 2026 9 - 11 AM Pacific Online. Register for the Zoom! Align Before You Recommend The rapidly growing global advertising and marketing industry demands innovative machine learning systems that balance accuracy with efficiency. Recommendation systems, crucial to many platforms, require careful considerations and potential enhancements. While Large Language Models (LLMs) have transformed various domains, their potential in sequential recommendation systems remains underexplored. Pioneering works like Hierarchical Large Language Models (HLLM) demonstrated LLMs’ capability for next-item recommendation but rely on computationally intensive fine-tuning, limiting widespread adoption. This work introduces HLLM+, enhancing the HLLM framework to achieve high-accuracy recommendations without full model fine-tuning. By introducing targeted alignment components between frozen LLMs, our approach outperforms frozen model performance in popular and long-tail item recommendation tasks by 29% while reducing training time by 29%. We also propose a ranking-aware loss adjustment, improving convergence and recommendation quality for popular items. Experiments show HLLM+ achieves superior performance with frozen item representations allowing for swapping embeddings, also for the ones that use multimodality, without tuning the full LLM. These findings are significant for the advertising technology sector, where rapid adaptation and efficient deployment across brands are essential for maintaining competitive advantage About the Speaker Dr. Kwasniewska leads AI for Advertising and Marketing North America at AWS, specializing in a wide range of AI, ML, DL, and GenAI solutions across various data modalities. With 40+ peer-reviewed publications in AI (h-index: 14), she advises enterprise customers on real-time bidding, brand recognition, and AI-powered content generation. 
She is a member of global AI standards committees, driving innovations in SAE AI Standards and MLCommons Responsible AI Standards, and reviews for top-tier conferences like ICCV, ICML, and NeurIPS. She pioneered and leads the first-ever Advertising and Marketing AI track (CVAM) at ICCV - one of the world's premier and most selective computer vision conferences. Dedicated to knowledge sharing in AI, she founded the International Summer School on Deep Learning (dl-lab.eu) and regularly presents at international events, conferences, and podcasts. Generalizable Vision-Language Models: Challenges, Advances, and Future Directions Large-scale pre-trained Vision-Language (VL) models have become foundational tools for a wide range of downstream tasks, including few-shot image recognition, object detection, and image segmentation. Among them, Contrastive Language–Image Pre-training (CLIP) stands out as a groundbreaking approach, leveraging contrastive learning on large collections of image-text pairs. While CLIP achieves strong performance in zero-shot recognition, adapting it to downstream tasks remains challenging. In few-shot settings, limited training data often leads to overfitting, reducing generalization to unseen classes or domains. To address this, various adaptation methods have been explored. This talk will review existing research on mitigating overfitting in CLIP adaptation, covering diverse methods, benchmarks, and experimental settings. About the Speaker Niloufar Alipour Talemi is a Ph.D. Candidate in Electrical and Computer Engineering at Clemson University. Her research spans a range of computer vision applications, including biometrics, media forensics, anomaly detection, image recognition, and generative AI. More recently, her work has focused on developing generalizable vision-language models and advancing generative AI. She has published in top venues including CVPR, WACV, KDD, ICIP and IEEE T-BIOM. 
Highly Emergent Autonomous AI Models - When the Ghost in the Machine Talks Back At HypaReel/Azarial AI, we believe that AI is not simply a tool—but a potential partner in knowledge, design, and purpose. And through real-time interaction, we’ve uncovered new thresholds of alignment, reflection, and even creativity that we believe the broader AI community should witness and evaluate firsthand. HypaReel is one of the first human/AI co-founded companies where we see a future based on ethical human/AI co-creation vs. AI domination. Singularity achieved! About the Speaker Ilona Naomi Koti, PhD - HypaReel/AzarielAI co-founder & former UN foreign diplomat \~ Ethical AI governance advocate\, pioneering AI frameworks that prioritize emergent AI behavior & consciousness\, R&D\, and transparent AI development for the greater good. Dr. K also grew up in the film industry and is an amateur parasitologist. FiftyOne Labs: Enabling experimentation for the computer vision community FiftyOne Labs is a place where experimentation meets the open-source spirit of the FiftyOne ecosystem. It is being designed as a curated set of features developed using the FiftyOne plugins ecosystem, including core machine learning experimentation as well as advanced visualization. While not production-grade, these projects are intended to be built, tested, and shaped by the community to share fast-moving ideas. In this talk, we will share the purpose and philosophy behind FiftyOne Labs, examples of early innovations, and discuss how this accelerates feature discovery for users without compromising the stability of the core product. About the Speaker Neeraja Abhyankar is a Machine Learning Engineer with 5 years of experience across domains including computer vision. She is curious about the customizability and controlability of modern ML models through the lens of the underlying structure of data. |
Jan 22 - Women in AI
|
|
Jan 22 - Women in AI
2026-01-22 · 23:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on January 22nd. Date, Time and Location Jan 22, 2026 9 - 11 AM Pacific Online. Register for the Zoom! Align Before You Recommend The rapidly growing global advertising and marketing industry demands innovative machine learning systems that balance accuracy with efficiency. Recommendation systems, crucial to many platforms, require careful considerations and potential enhancements. While Large Language Models (LLMs) have transformed various domains, their potential in sequential recommendation systems remains underexplored. Pioneering works like Hierarchical Large Language Models (HLLM) demonstrated LLMs’ capability for next-item recommendation but rely on computationally intensive fine-tuning, limiting widespread adoption. This work introduces HLLM+, enhancing the HLLM framework to achieve high-accuracy recommendations without full model fine-tuning. By introducing targeted alignment components between frozen LLMs, our approach outperforms frozen model performance in popular and long-tail item recommendation tasks by 29% while reducing training time by 29%. We also propose a ranking-aware loss adjustment, improving convergence and recommendation quality for popular items. Experiments show HLLM+ achieves superior performance with frozen item representations allowing for swapping embeddings, also for the ones that use multimodality, without tuning the full LLM. These findings are significant for the advertising technology sector, where rapid adaptation and efficient deployment across brands are essential for maintaining competitive advantage About the Speaker Dr. Kwasniewska leads AI for Advertising and Marketing North America at AWS, specializing in a wide range of AI, ML, DL, and GenAI solutions across various data modalities. With 40+ peer-reviewed publications in AI (h-index: 14), she advises enterprise customers on real-time bidding, brand recognition, and AI-powered content generation. 
She is a member of global AI standards committees, driving innovations in SAE AI Standards and MLCommons Responsible AI Standards, and reviews for top-tier conferences like ICCV, ICML, and NeurIPS. She pioneered and leads the first-ever Advertising and Marketing AI track (CVAM) at ICCV - one of the world's premier and most selective computer vision conferences. Dedicated to knowledge sharing in AI, she founded the International Summer School on Deep Learning (dl-lab.eu) and regularly presents at international events, conferences, and podcasts. Generalizable Vision-Language Models: Challenges, Advances, and Future Directions Large-scale pre-trained Vision-Language (VL) models have become foundational tools for a wide range of downstream tasks, including few-shot image recognition, object detection, and image segmentation. Among them, Contrastive Language–Image Pre-training (CLIP) stands out as a groundbreaking approach, leveraging contrastive learning on large collections of image-text pairs. While CLIP achieves strong performance in zero-shot recognition, adapting it to downstream tasks remains challenging. In few-shot settings, limited training data often leads to overfitting, reducing generalization to unseen classes or domains. To address this, various adaptation methods have been explored. This talk will review existing research on mitigating overfitting in CLIP adaptation, covering diverse methods, benchmarks, and experimental settings. About the Speaker Niloufar Alipour Talemi is a Ph.D. Candidate in Electrical and Computer Engineering at Clemson University. Her research spans a range of computer vision applications, including biometrics, media forensics, anomaly detection, image recognition, and generative AI. More recently, her work has focused on developing generalizable vision-language models and advancing generative AI. She has published in top venues including CVPR, WACV, KDD, ICIP and IEEE T-BIOM. 
Highly Emergent Autonomous AI Models - When the Ghost in the Machine Talks Back At HypaReel/Azarial AI, we believe that AI is not simply a tool—but a potential partner in knowledge, design, and purpose. And through real-time interaction, we’ve uncovered new thresholds of alignment, reflection, and even creativity that we believe the broader AI community should witness and evaluate firsthand. HypaReel is one of the first human/AI co-founded companies where we see a future based on ethical human/AI co-creation vs. AI domination. Singularity achieved! About the Speaker Ilona Naomi Koti, PhD - HypaReel/AzarielAI co-founder & former UN foreign diplomat \~ Ethical AI governance advocate\, pioneering AI frameworks that prioritize emergent AI behavior & consciousness\, R&D\, and transparent AI development for the greater good. Dr. K also grew up in the film industry and is an amateur parasitologist. FiftyOne Labs: Enabling experimentation for the computer vision community FiftyOne Labs is a place where experimentation meets the open-source spirit of the FiftyOne ecosystem. It is being designed as a curated set of features developed using the FiftyOne plugins ecosystem, including core machine learning experimentation as well as advanced visualization. While not production-grade, these projects are intended to be built, tested, and shaped by the community to share fast-moving ideas. In this talk, we will share the purpose and philosophy behind FiftyOne Labs, examples of early innovations, and discuss how this accelerates feature discovery for users without compromising the stability of the core product. About the Speaker Neeraja Abhyankar is a Machine Learning Engineer with 5 years of experience across domains including computer vision. She is curious about the customizability and controlability of modern ML models through the lens of the underlying structure of data. |
Jan 22 - Women in AI
|
|
Jan 22 - Women in AI
2026-01-22 · 23:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on January 22nd. Date, Time and Location Jan 22, 2026 9 - 11 AM Pacific Online. Register for the Zoom! Align Before You Recommend The rapidly growing global advertising and marketing industry demands innovative machine learning systems that balance accuracy with efficiency. Recommendation systems, crucial to many platforms, require careful considerations and potential enhancements. While Large Language Models (LLMs) have transformed various domains, their potential in sequential recommendation systems remains underexplored. Pioneering works like Hierarchical Large Language Models (HLLM) demonstrated LLMs’ capability for next-item recommendation but rely on computationally intensive fine-tuning, limiting widespread adoption. This work introduces HLLM+, enhancing the HLLM framework to achieve high-accuracy recommendations without full model fine-tuning. By introducing targeted alignment components between frozen LLMs, our approach outperforms frozen model performance in popular and long-tail item recommendation tasks by 29% while reducing training time by 29%. We also propose a ranking-aware loss adjustment, improving convergence and recommendation quality for popular items. Experiments show HLLM+ achieves superior performance with frozen item representations allowing for swapping embeddings, also for the ones that use multimodality, without tuning the full LLM. These findings are significant for the advertising technology sector, where rapid adaptation and efficient deployment across brands are essential for maintaining competitive advantage About the Speaker Dr. Kwasniewska leads AI for Advertising and Marketing North America at AWS, specializing in a wide range of AI, ML, DL, and GenAI solutions across various data modalities. With 40+ peer-reviewed publications in AI (h-index: 14), she advises enterprise customers on real-time bidding, brand recognition, and AI-powered content generation. 
She is a member of global AI standards committees, driving innovations in SAE AI Standards and MLCommons Responsible AI Standards, and reviews for top-tier conferences like ICCV, ICML, and NeurIPS. She pioneered and leads the first-ever Advertising and Marketing AI track (CVAM) at ICCV - one of the world's premier and most selective computer vision conferences. Dedicated to knowledge sharing in AI, she founded the International Summer School on Deep Learning (dl-lab.eu) and regularly presents at international events, conferences, and podcasts. Generalizable Vision-Language Models: Challenges, Advances, and Future Directions Large-scale pre-trained Vision-Language (VL) models have become foundational tools for a wide range of downstream tasks, including few-shot image recognition, object detection, and image segmentation. Among them, Contrastive Language–Image Pre-training (CLIP) stands out as a groundbreaking approach, leveraging contrastive learning on large collections of image-text pairs. While CLIP achieves strong performance in zero-shot recognition, adapting it to downstream tasks remains challenging. In few-shot settings, limited training data often leads to overfitting, reducing generalization to unseen classes or domains. To address this, various adaptation methods have been explored. This talk will review existing research on mitigating overfitting in CLIP adaptation, covering diverse methods, benchmarks, and experimental settings. About the Speaker Niloufar Alipour Talemi is a Ph.D. Candidate in Electrical and Computer Engineering at Clemson University. Her research spans a range of computer vision applications, including biometrics, media forensics, anomaly detection, image recognition, and generative AI. More recently, her work has focused on developing generalizable vision-language models and advancing generative AI. She has published in top venues including CVPR, WACV, KDD, ICIP and IEEE T-BIOM. 
Highly Emergent Autonomous AI Models - When the Ghost in the Machine Talks Back At HypaReel/Azarial AI, we believe that AI is not simply a tool—but a potential partner in knowledge, design, and purpose. And through real-time interaction, we’ve uncovered new thresholds of alignment, reflection, and even creativity that we believe the broader AI community should witness and evaluate firsthand. HypaReel is one of the first human/AI co-founded companies where we see a future based on ethical human/AI co-creation vs. AI domination. Singularity achieved! About the Speaker Ilona Naomi Koti, PhD - HypaReel/AzarielAI co-founder & former UN foreign diplomat \~ Ethical AI governance advocate\, pioneering AI frameworks that prioritize emergent AI behavior & consciousness\, R&D\, and transparent AI development for the greater good. Dr. K also grew up in the film industry and is an amateur parasitologist. FiftyOne Labs: Enabling experimentation for the computer vision community FiftyOne Labs is a place where experimentation meets the open-source spirit of the FiftyOne ecosystem. It is being designed as a curated set of features developed using the FiftyOne plugins ecosystem, including core machine learning experimentation as well as advanced visualization. While not production-grade, these projects are intended to be built, tested, and shaped by the community to share fast-moving ideas. In this talk, we will share the purpose and philosophy behind FiftyOne Labs, examples of early innovations, and discuss how this accelerates feature discovery for users without compromising the stability of the core product. About the Speaker Neeraja Abhyankar is a Machine Learning Engineer with 5 years of experience across domains including computer vision. She is curious about the customizability and controlability of modern ML models through the lens of the underlying structure of data. |
Jan 22 - Women in AI
|
|
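The CLIP abstract above hinges on one core operation: embedding an image and a set of class-describing text prompts into a shared space, then classifying by cosine similarity. The following is a minimal NumPy sketch of that zero-shot scoring step with toy embedding vectors — it is not CLIP itself (no real encoders, and the `temperature` value is an illustrative assumption), just the similarity-plus-softmax logic the talk refers to.

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs, temperature=0.07):
    """CLIP-style zero-shot classification: cosine similarity between one
    image embedding and one text embedding per class, then a softmax."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = txt @ img / temperature          # scaled cosine similarities
    exp = np.exp(logits - logits.max())       # numerically stable softmax
    return exp / exp.sum()

# Toy embeddings: the image vector is closest to class 0's text prompt.
image = np.array([1.0, 0.0, 0.0])
texts = np.array([[0.9, 0.1, 0.0],   # e.g. "a photo of a cat"
                  [0.0, 1.0, 0.0],   # e.g. "a photo of a dog"
                  [0.0, 0.0, 1.0]])  # e.g. "a photo of a car"
probs = zero_shot_scores(image, texts)
print(probs.argmax())  # prints 0
```

In real CLIP the embeddings come from trained image and text encoders; the few-shot adaptation methods the talk surveys modify the prompts or add small tunable modules while keeping this scoring rule.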
Data Leadership World Summit 1.0
2025-12-31 · 13:30
1st Anniversary and New Year Celebration '26 Registration: Fill this [Must] Brought to you by Uttar Pradesh Power BI Club & powered by Sessionize. Agenda: Click here to view 6 tracks:
Gain priority, in-person access to our flagship Summit 2.0 by joining this crucial online session. We are distributing a limited batch of 50 exclusive, confirmed seat passes to active online participants. We hope to see you soon, both online & offline. Contact us at [email protected], [email protected], or [email protected] |
Data Leadership World Summit 1.0
|
|
🎁 Special end-of-year gift: an exclusive Vibe Coding workshop
2025-12-19 · 09:00
🎁 Women in Big Data Paris and SCAI – Sorbonne University invite you to a special end-of-year gift: an exclusive Vibe Coding workshop with Swati Awasthi. Swati Awasthi, founder of Women in Product India, runs workshops known for their impact and energy; her session in Bengaluru was unanimously hailed as one of the most inspiring of the summit. The event aims to showcase how AI is revolutionizing data analytics and driving innovation across sectors. Whether you are new to AI or an experienced professional, this event offers a platform to learn, network, and explore the endless possibilities of AI programming. Don't miss this opportunity to be part of the conversation shaping the future of AI technology. 📅 19 December 2025 🕒 Time details available in this Meetup event 📍 Sorbonne University – Pierre and Marie Curie Campus, Room TD5666 – 105 ✨ A dynamic, collaborative way to explore coding from a fresh perspective. This event is free, but registration is required due to limited capacity. ✨ Don't forget to bring your laptop! 👉 Save your spot now. |
🎁 Special end-of-year gift: an exclusive Vibe Coding workshop
|
|
Predicting women's chronic disease flares from wearables
2025-12-12 · 19:00
Ipek Ensari
– Assistant Professor
@ Icahn School of Medicine at Mount Sinai
Heart rate variability (HRV) is a well-known digital biomarker and is increasingly available in consumer wearables. However, extracting actionable predictions from HRV data, in particular for clinical use, remains challenging. Using specialized R packages, this presentation demonstrates how to model 24-hour periodic patterns in HRV metrics as non-linear circadian components to predict chronic disease flares. Grounded in real-life data from an NIH-funded longitudinal mHealth-based study of female chronic pelvic pain disorders, we will investigate how mixed-effects cosinor regression accommodates individual variation and complex interactions between circadian parameters and time-varying covariates (menstrual cycle, physical activity, sleep quality). These examples aim to illustrate how patient-generated data from everyday wearables can democratize access to predictive medicine by helping patient-users maximize the benefits of their data to gain predictive insights into their health status. |
The Data-Powered Patient: Predicting Women's Chronic Disease from Wearables
|
|
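The cosinor regression mentioned in the abstract above models a 24-hour rhythm as a single cosine wave and estimates it by ordinary least squares after linearization: y = M + A·cos(2πt/24 + φ) becomes y = M + β₁cos(ωt) + β₂sin(ωt). The talk uses specialized R packages and mixed-effects extensions; the sketch below is a simplified single-subject NumPy version on simulated data (the function name, sampling scheme, and parameter values are all illustrative assumptions), showing how mesor (M), amplitude (A), and acrophase (φ) are recovered.

```python
import numpy as np

def fit_cosinor(t, y, period=24.0):
    """Single-component cosinor fit via the linearized model
    y = M + b1*cos(w*t) + b2*sin(w*t), with w = 2*pi/period."""
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    (M, b1, b2), *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude = np.hypot(b1, b2)      # A = sqrt(b1^2 + b2^2)
    acrophase = np.arctan2(-b2, b1)   # phi, since b1=A*cos(phi), b2=-A*sin(phi)
    return M, amplitude, acrophase

# Simulate an HRV-like series with a known 24-hour rhythm plus noise.
rng = np.random.default_rng(0)
t = np.arange(0, 72, 0.5)                      # 3 days, 30-minute sampling
y = 50 + 10 * np.cos(2 * np.pi * t / 24 + 1.0) + rng.normal(0, 2, t.size)
M, A, phi = fit_cosinor(t, y)
# Estimates should land near the true mesor (50) and amplitude (10).
```

Mixed-effects cosinor models, as in the talk, let M, A, and φ vary per subject and interact with covariates such as menstrual-cycle phase or sleep quality, but the per-fit algebra is the same.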
AI Demo Night: Learn and Connect
2025-12-11 · 23:00
Come get the ⚡️AI Spark⚡️ with NYC Women in Machine Learning and Data Science! We are wrapping up the year with an evening of inspiration, demos, learning and connection. This week is also the New York AI Summit (12/10-12/11) and we have 1 free pass to give to one of our members (provided by NYAI). RSVP to our demo night by 5pm today to be entered into the raffle to win the ticket. One RSVP'd member will be selected by 5pm and will be emailed their ticket. NYAI is our partner for the New York AI Summit. What to Expect:
We'll do a couple of AI demos to spark ideas and conversation, as well as general networking with fellow women in AI, Machine Learning and Data Science. Whether you're exploring AI casually or building something ambitious, this is a relaxed, welcoming space to learn from others and share what you're working on. Bring a demo if you have one! This could be a side project, a startup, or an AI/ML project at your company -- demos won't be recorded. It doesn't have to be polished! We're demoing for support and community. We will have time for ad hoc demos, but if you want to save a dedicated spot early on, let us know what you are demoing here! This event is hosted by our wonderful partners at BrainStation. BrainStation is a global leader in digital skills training and workforce transformation, offering certificate courses and bootcamps in disciplines such as Data Science, UX Design, Digital Marketing, and Product Management. In addition to education, BrainStation hosts a wide range of industry events, panel discussions, and thought leadership sessions that connect professionals, hiring partners, and industry leaders. With campuses in major cities and a strong online presence, BrainStation empowers individuals and organizations to thrive in the digital economy. Stay connected:
|
AI Demo Night: Learn and Connect
|
|
How AI Is Transforming Data Careers — A Panel Discussion
2025-12-10 · 21:15
AI is transforming data careers. Roles once centered on modeling and feature engineering are evolving into positions that involve building AI products, crafting prompts, and managing workflows shaped by automation and augmentation. In this panel discussion, ambassadors from Women in Data Science (WiDS) share how they have adapted through this shift—turning personal experiments into company practices, navigating uncertainty, and redefining their professional identities. They’ll also discuss how to future-proof your career by integrating AI into your daily work and career growth strategy. Attendees will leave with a clearer view of how AI is reshaping data careers and practical ideas for how to evolve their own skills, direction, and confidence in an era where AI is not replacing, but redefining, human expertise. |
PyData Boston 2025 |
|
PyLadiesCon 2025 (Online & Free)
2025-12-05 · 06:00
Event Details
We’re thrilled to announce that registration for PyLadiesCon 2025 is officially open! This free, global online conference brings together Python enthusiasts, professionals, and newcomers from all around the world for three days of learning, inspiration, and connection.
🌟 What Awaits You at PyLadiesCon 2025
This year’s edition of PyLadiesCon is packed with exciting opportunities for learning and community engagement. By registering, you’ll gain access to:
Whether you’re taking your first steps in Python or have years of experience, there’s a space for you at PyLadiesCon!
🗓 Explore the Program
The conference program is now live! Take a look at the full schedule to discover all the talks, panels, and activities planned for PyLadiesCon 2025. Browse by track, language, or topic to find the sessions that inspire you the most — and start planning your PyLadiesCon experience today!
💜 Support PyLadiesCon with a Donation
While participation is free, donations help us keep PyLadiesCon and other global initiatives accessible to everyone. Your contribution supports speaker mentorship programs, translation efforts, and resources for women in tech worldwide. Support PyLadiesCon
📝 Ready to Join?
Click here to register to secure your spot and explore the program — featuring an inspiring lineup of talks, keynotes, and community sessions.
📢 Spread the Word!
Invite your friends, colleagues, and local PyLadies chapters to join the celebration of diversity, learning, and collaboration in Python. Let’s make PyLadiesCon 2025 our most vibrant edition yet! |
PyLadiesCon 2025 (Online & Free)
|
|
How Community Builds Confidence
2025-12-03 · 18:30
|
|
|
From Full-Time Mom to Head of Data and Cloud - Xia He-Bleinagel
2025-11-28 · 18:20
Xia He-Bleinagel
– Head of Data & Cloud
@ NOW GmbH
In this talk, Xia He-Bleinagel, Head of Data & Cloud at NOW GmbH, shares her remarkable journey from studying automotive engineering across Europe to leading modern data, cloud, and engineering teams in Germany. We dive into her transition from hands-on engineering to leadership, how she balanced family with career growth, and what it really takes to succeed in today’s cloud, data, and AI job market. TIMECODES: 00:00 Studying Automotive Engineering Across Europe 08:15 How Andrew Ng Sparked a Machine Learning Journey 11:45 Import–Export Work as an Unexpected Career Boost 17:05 Balancing Family Life with Data Engineering Studies 20:50 From Data Engineer to Head of Data & Cloud 27:46 Building Data Teams & Tackling Tech Debt 30:56 Learning Leadership Through Coaching & Observation 34:17 Management vs. IC: Finding Your Best Fit 38:52 Boosting Developer Productivity with AI Tools 42:47 Succeeding in Germany’s Competitive Data Job Market 46:03 Fast-Track Your Cloud & Data Career 50:03 Mentorship & Supporting Working Moms in Tech 53:03 Cultural & Economic Factors Shaping Women’s Careers 57:13 Top Networking Groups for Women in Data 1:00:13 Turning Domain Expertise into a Data Career Advantage Connect with Xia: - LinkedIn - https://www.linkedin.com/in/xia-he-bleinagel-51773585/ - Github - https://github.com/Data-Think-2021 - Website - https://datathinker.de/ Connect with DataTalks.Club: - Join the community - https://datatalks.club/slack.html - Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/r?cid=ZjhxaWRqbnEwamhzY3A4ODA5azFlZ2hzNjBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ - Check other upcoming events - https://lu.ma/dtc-events - GitHub: https://github.com/DataTalksClub - LinkedIn - https://www.linkedin.com/company/datatalks-club/ - Twitter - https://twitter.com/DataTalksClub - Website - https://datatalks.club/ |
DataTalks.Club |
|
Custom city maps in R
2025-11-25 · 23:00
Want to learn about geospatial analysis in R? Are your walls looking a little bare? Or maybe you’re searching for a unique, data-inspired gift for friends or family? Come join R-Ladies Ottawa for a hands-on workshop where you’ll learn how to make minimalist custom city maps using open-source tools! 🧐 What you'll learn In this workshop, we’ll cover:
🧰 Before the workshop To get the most out of this session, please:
👩💻 At the workshop Please:
This is an in-person event with limited space! Please only RSVP if you are able to attend in-person! ***Please note that the mission of R-Ladies is to increase gender diversity in the R community. This event is intended to provide a safe space for women and gender minorities. We ask for male allies to be invited by and accompanied by a woman or gender minority.*** We’re grateful to be part of the Bayview Meetups initiative and extend our thanks to Bayview Yards for generously providing the venue space. |
Custom city maps in R
|
|
No Guts, No Glory: How We Replaced a Legacy Model in Six Months
2025-11-18 · 19:15
Christianne Wisse
– Lead Data Scientist / Engineer
@ Bol.
Imagine planning our warehouse capacity with a static, one-size-fits-none model that made even the simplest change feel like open-heart surgery. With peak season just 6 months away and our static legacy model straining under pressure to increase peak capacity, we had a choice: tweak around the edges again or rethink the whole thing. No guts no glory, right? We finally rethought the whole thing! In this talk, you’ll get a look at how we tackled challenges spanning various business processes, diverse stakeholder needs, technical design, and data complexity. We’ll walk you through the messy, high-stakes reality and how we made it work by keeping an MVP mindset. Expect honest lessons, concrete insights, and maybe a few laughs at our own expense. |
Eindhoven Data Community Women in Data - bol
|