talk-data.com
Activities & events
Nov 13 - Women in AI
2025-11-13 · 17:00 (9 AM Pacific) · Online

Hear talks from experts on the latest topics in AI, ML, and computer vision on November 13. Register for the Zoom!

Copy, Paste, Customize! The Template Approach to AI Engineering
Most AI implementations fail because teams treat prompt engineering as ad-hoc experimentation rather than systematic software engineering, producing unreliable systems that don't scale beyond proofs of concept. This talk demonstrates engineering practices that enable reliable AI deployment: standardized prompt templates, systematic validation frameworks, and production observability. Drawing on experience developing fillable prompt templates currently being validated in production environments that process thousands of submissions, the speaker will share how Infrastructure as Code principles apply to LLM workflows, why evaluation metrics such as BLEU are critical for production reliability, and how systematic failure analysis prevents costly deployment issues. Attendees will walk away with an understanding of practical frameworks for improving AI system reliability and specific strategies for building more consistent, scalable AI implementations.

About the speaker: Jeanne McClure is a postdoctoral scholar at NC State's Data Science and AI Academy with expertise in systematic AI implementation and validation. Her research transforms experimental AI tools into reliable production systems through standardized prompt templates, rigorous testing frameworks, and systematic failure analysis. She holds a PhD in Learning, Design and Technology with additional graduate work in data science.

Multimodality with Biases: Understand and Evaluate VLMs for Autonomous Driving with FiftyOne
Do your VLMs really see danger? With FiftyOne, I'll show you how to understand and evaluate vision-language models for autonomous driving, making risk and bias visible in seconds. We'll compare models on the same scenes, reveal failures and edge cases, and walk through a simple dashboard for deciding which data to curate and what to adjust. You'll leave with a clear, practical, and replicable method for raising the bar on safety.

About the speaker: Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technology field. She has been developing novel integrated engineering technologies, mainly in computer vision, robotics, and machine learning applied to agriculture, since the early 2000s in Colombia.

The Heart of Innovation: Women, AI, and the Future of Healthcare
This session explores how artificial intelligence is transforming healthcare by enhancing diagnosis, treatment, and patient outcomes. It highlights the importance of diverse and female perspectives in shaping AI solutions that are ethical, empathetic, and human-centered. We will discuss key applications, current challenges, and the future potential of AI in medicine. It's a forward-looking conversation about how innovation can build a healthier world.

About the speaker: Karen Sanchez is a Postdoctoral Researcher at the Center of Excellence for Generative AI at King Abdullah University of Science and Technology (KAUST), Saudi Arabia. Her research focuses on AI for Science, spanning computer vision, video understanding, and privacy-preserving machine learning. She is also an active advocate for diversity and outreach in AI, contributing to global initiatives that connect researchers and amplify underrepresented voices in technology.

Language Diffusion Models
Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). This talk challenges that notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. LLaDA models distributions through a forward data-masking process and a reverse process, parameterized by a vanilla Transformer that predicts masked tokens. Optimizing a likelihood bound provides a principled generative approach to probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs such as LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue.

About the speaker: Jayita Bhattacharyya is an AI/ML nerd with a blend of technical speaking and hackathon wizardry, applying tech to solve real-world problems. Her current focus is generative AI, helping software teams incorporate AI to transform software engineering.
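The forward-masking and reverse (predict-then-remask) processes the abstract describes can be sketched in miniature. This is a toy illustration only, not LLaDA's actual architecture: the bigram lookup in `toy_predictor` stands in for the Transformer mask predictor, and all names and the mask schedule here are invented for illustration.

```python
import random

MASK = "<M>"


def forward_mask(tokens, t, rng):
    """Forward process: mask each token independently with probability t."""
    return [MASK if rng.random() < t else tok for tok in tokens]


def toy_predictor(masked, bigrams):
    """Stand-in for the Transformer mask predictor: fill each masked
    position from a bigram table keyed on the previous visible token."""
    out = list(masked)
    for i, tok in enumerate(out):
        if tok == MASK:
            prev = out[i - 1] if i > 0 and out[i - 1] != MASK else None
            out[i] = bigrams.get(prev, "the")
    return out


def reverse_process(length, steps, rng, bigrams):
    """Reverse process: start fully masked, then repeatedly predict all
    masked tokens and re-mask a shrinking fraction until none remain."""
    seq = [MASK] * length
    for s in range(steps, 0, -1):
        pred = toy_predictor(seq, bigrams)
        ratio = (s - 1) / steps  # fraction of positions to re-mask this round
        seq = [MASK if rng.random() < ratio else tok for tok in pred]
    return seq
```

At the final step the re-mask ratio reaches zero, so the sequence comes out fully unmasked; a real model would rank which predictions to keep by confidence rather than re-masking at random.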
Nov 13 - Women in AI
|
|
Nov 13 - Women in AI
2025-11-13 · 17:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on November 13. Date and Location Nov 13, 2025 9 AM Pacific Online. Register for the Zoom! Copy, Paste, Customize! The Template Approach to AI Engineering Most AI implementations fail because teams treat prompt engineering as ad-hoc experimentation rather than systematic software engineering, leading to unreliable systems that don't scale beyond proof-of-concepts. This talk demonstrates engineering practices that enable reliable AI deployment through standardized prompt templates, systematic validation frameworks, and production observability. Drawing from experience developing fillable prompt templates currently being validated in production environments processing thousands of submissions, I'll share how Infrastructure as Code principles apply to LLM workflows, why evaluation metrics like BLEU scores are critical for production reliability, and how systematic failure analysis prevents costly deployment issues. Attendees will walk away with understanding of practical frameworks for improving AI system reliability and specific strategies for building more consistent, scalable AI implementations. About the Speaker Jeanne McClure is a postdoctoral scholar at NC State's Data Science and AI Academy with expertise in systematic AI implementation and validation. Her research transforms experimental AI tools into reliable production systems through standardized prompt templates, rigorous testing frameworks, and systematic failure analysis. She holds a PhD in Learning, Design and Technology with additional graduate work in data science. Multimodality with Biases: Understand and Evaluate VLMs for Autonomous Driving with FiftyOne Do your VLMs really see danger? With FiftyOne, I’ll show you how to understand and evaluate vision-language models for autonomous driving — making risk and bias visible in seconds. 
We’ll compare models on the same scenes, reveal failures and edge cases, and you’ll see a simple dashboard to decide which data to curate and what to adjust. You’ll leave with a clear, practical, and replicable method to raise the bar for safety. About the Speaker Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia. The Heart of Innovation: Women, AI, and the Future of Healthcare This session explores how Artificial Intelligence is transforming healthcare by enhancing diagnosis, treatment, and patient outcomes. It highlights the importance of diverse and female perspectives in shaping AI solutions that are ethical, empathetic, and human-centered. We will discuss key applications, current challenges, and the future potential of AI in medicine. It’s a forward-looking conversation about how innovation can build a healthier world. About the Speaker Karen Sanchez is a Postdoctoral Researcher at the Center of Excellence for Generative AI at King Abdullah University of Science and Technology (KAUST), Saudi Arabia. Her research focuses on AI for Science, spanning computer vision, video understanding, and privacy-preserving machine learning. She is also an active advocate for diversity and outreach in AI, contributing to global initiatives that connect researchers and amplify underrepresented voices in technology. Language Diffusion Models Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). Challenge this notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. 
LLaDA models distributions through a forward data masking process and a reverse process, parameterized by a vanilla Transformer to predict masked tokens. Optimizing a likelihood bound provides a principled generative approach for probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue. About the Speaker Jayita Bhattacharyya is an AI/ML Nerd with a blend of technical speaking & hackathon wizardry! Applying tech to solve real-world problems. The work focus these days is on generative AI. Helping software teams incorporate AI into transforming software engineering. |
Nov 13 - Women in AI
|
|
Nov 13 - Women in AI
2025-11-13 · 17:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on November 13. Date and Location Nov 13, 2025 9 AM Pacific Online. Register for the Zoom! Copy, Paste, Customize! The Template Approach to AI Engineering Most AI implementations fail because teams treat prompt engineering as ad-hoc experimentation rather than systematic software engineering, leading to unreliable systems that don't scale beyond proof-of-concepts. This talk demonstrates engineering practices that enable reliable AI deployment through standardized prompt templates, systematic validation frameworks, and production observability. Drawing from experience developing fillable prompt templates currently being validated in production environments processing thousands of submissions, I'll share how Infrastructure as Code principles apply to LLM workflows, why evaluation metrics like BLEU scores are critical for production reliability, and how systematic failure analysis prevents costly deployment issues. Attendees will walk away with understanding of practical frameworks for improving AI system reliability and specific strategies for building more consistent, scalable AI implementations. About the Speaker Jeanne McClure is a postdoctoral scholar at NC State's Data Science and AI Academy with expertise in systematic AI implementation and validation. Her research transforms experimental AI tools into reliable production systems through standardized prompt templates, rigorous testing frameworks, and systematic failure analysis. She holds a PhD in Learning, Design and Technology with additional graduate work in data science. Multimodality with Biases: Understand and Evaluate VLMs for Autonomous Driving with FiftyOne Do your VLMs really see danger? With FiftyOne, I’ll show you how to understand and evaluate vision-language models for autonomous driving — making risk and bias visible in seconds. 
We’ll compare models on the same scenes, reveal failures and edge cases, and you’ll see a simple dashboard to decide which data to curate and what to adjust. You’ll leave with a clear, practical, and replicable method to raise the bar for safety. About the Speaker Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia. The Heart of Innovation: Women, AI, and the Future of Healthcare This session explores how Artificial Intelligence is transforming healthcare by enhancing diagnosis, treatment, and patient outcomes. It highlights the importance of diverse and female perspectives in shaping AI solutions that are ethical, empathetic, and human-centered. We will discuss key applications, current challenges, and the future potential of AI in medicine. It’s a forward-looking conversation about how innovation can build a healthier world. About the Speaker Karen Sanchez is a Postdoctoral Researcher at the Center of Excellence for Generative AI at King Abdullah University of Science and Technology (KAUST), Saudi Arabia. Her research focuses on AI for Science, spanning computer vision, video understanding, and privacy-preserving machine learning. She is also an active advocate for diversity and outreach in AI, contributing to global initiatives that connect researchers and amplify underrepresented voices in technology. Language Diffusion Models Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). Challenge this notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. 
LLaDA models distributions through a forward data masking process and a reverse process, parameterized by a vanilla Transformer to predict masked tokens. Optimizing a likelihood bound provides a principled generative approach for probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue. About the Speaker Jayita Bhattacharyya is an AI/ML Nerd with a blend of technical speaking & hackathon wizardry! Applying tech to solve real-world problems. The work focus these days is on generative AI. Helping software teams incorporate AI into transforming software engineering. |
Nov 13 - Women in AI
|
|
Nov 13 - Women in AI
2025-11-13 · 17:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on November 13. Date and Location Nov 13, 2025 9 AM Pacific Online. Register for the Zoom! Copy, Paste, Customize! The Template Approach to AI Engineering Most AI implementations fail because teams treat prompt engineering as ad-hoc experimentation rather than systematic software engineering, leading to unreliable systems that don't scale beyond proof-of-concepts. This talk demonstrates engineering practices that enable reliable AI deployment through standardized prompt templates, systematic validation frameworks, and production observability. Drawing from experience developing fillable prompt templates currently being validated in production environments processing thousands of submissions, I'll share how Infrastructure as Code principles apply to LLM workflows, why evaluation metrics like BLEU scores are critical for production reliability, and how systematic failure analysis prevents costly deployment issues. Attendees will walk away with understanding of practical frameworks for improving AI system reliability and specific strategies for building more consistent, scalable AI implementations. About the Speaker Jeanne McClure is a postdoctoral scholar at NC State's Data Science and AI Academy with expertise in systematic AI implementation and validation. Her research transforms experimental AI tools into reliable production systems through standardized prompt templates, rigorous testing frameworks, and systematic failure analysis. She holds a PhD in Learning, Design and Technology with additional graduate work in data science. Multimodality with Biases: Understand and Evaluate VLMs for Autonomous Driving with FiftyOne Do your VLMs really see danger? With FiftyOne, I’ll show you how to understand and evaluate vision-language models for autonomous driving — making risk and bias visible in seconds. 
We’ll compare models on the same scenes, reveal failures and edge cases, and you’ll see a simple dashboard to decide which data to curate and what to adjust. You’ll leave with a clear, practical, and replicable method to raise the bar for safety. About the Speaker Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia. The Heart of Innovation: Women, AI, and the Future of Healthcare This session explores how Artificial Intelligence is transforming healthcare by enhancing diagnosis, treatment, and patient outcomes. It highlights the importance of diverse and female perspectives in shaping AI solutions that are ethical, empathetic, and human-centered. We will discuss key applications, current challenges, and the future potential of AI in medicine. It’s a forward-looking conversation about how innovation can build a healthier world. About the Speaker Karen Sanchez is a Postdoctoral Researcher at the Center of Excellence for Generative AI at King Abdullah University of Science and Technology (KAUST), Saudi Arabia. Her research focuses on AI for Science, spanning computer vision, video understanding, and privacy-preserving machine learning. She is also an active advocate for diversity and outreach in AI, contributing to global initiatives that connect researchers and amplify underrepresented voices in technology. Language Diffusion Models Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). Challenge this notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. 
LLaDA models distributions through a forward data masking process and a reverse process, parameterized by a vanilla Transformer to predict masked tokens. Optimizing a likelihood bound provides a principled generative approach for probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue. About the Speaker Jayita Bhattacharyya is an AI/ML Nerd with a blend of technical speaking & hackathon wizardry! Applying tech to solve real-world problems. The work focus these days is on generative AI. Helping software teams incorporate AI into transforming software engineering. |
Nov 13 - Women in AI
|
|
Nov 13 - Women in AI
2025-11-13 · 17:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on November 13. Date and Location Nov 13, 2025 9 AM Pacific Online. Register for the Zoom! Copy, Paste, Customize! The Template Approach to AI Engineering Most AI implementations fail because teams treat prompt engineering as ad-hoc experimentation rather than systematic software engineering, leading to unreliable systems that don't scale beyond proof-of-concepts. This talk demonstrates engineering practices that enable reliable AI deployment through standardized prompt templates, systematic validation frameworks, and production observability. Drawing from experience developing fillable prompt templates currently being validated in production environments processing thousands of submissions, I'll share how Infrastructure as Code principles apply to LLM workflows, why evaluation metrics like BLEU scores are critical for production reliability, and how systematic failure analysis prevents costly deployment issues. Attendees will walk away with understanding of practical frameworks for improving AI system reliability and specific strategies for building more consistent, scalable AI implementations. About the Speaker Jeanne McClure is a postdoctoral scholar at NC State's Data Science and AI Academy with expertise in systematic AI implementation and validation. Her research transforms experimental AI tools into reliable production systems through standardized prompt templates, rigorous testing frameworks, and systematic failure analysis. She holds a PhD in Learning, Design and Technology with additional graduate work in data science. Multimodality with Biases: Understand and Evaluate VLMs for Autonomous Driving with FiftyOne Do your VLMs really see danger? With FiftyOne, I’ll show you how to understand and evaluate vision-language models for autonomous driving — making risk and bias visible in seconds. 
We’ll compare models on the same scenes, reveal failures and edge cases, and you’ll see a simple dashboard to decide which data to curate and what to adjust. You’ll leave with a clear, practical, and replicable method to raise the bar for safety. About the Speaker Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia. The Heart of Innovation: Women, AI, and the Future of Healthcare This session explores how Artificial Intelligence is transforming healthcare by enhancing diagnosis, treatment, and patient outcomes. It highlights the importance of diverse and female perspectives in shaping AI solutions that are ethical, empathetic, and human-centered. We will discuss key applications, current challenges, and the future potential of AI in medicine. It’s a forward-looking conversation about how innovation can build a healthier world. About the Speaker Karen Sanchez is a Postdoctoral Researcher at the Center of Excellence for Generative AI at King Abdullah University of Science and Technology (KAUST), Saudi Arabia. Her research focuses on AI for Science, spanning computer vision, video understanding, and privacy-preserving machine learning. She is also an active advocate for diversity and outreach in AI, contributing to global initiatives that connect researchers and amplify underrepresented voices in technology. Language Diffusion Models Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). Challenge this notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. 
LLaDA models distributions through a forward data masking process and a reverse process, parameterized by a vanilla Transformer to predict masked tokens. Optimizing a likelihood bound provides a principled generative approach for probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue. About the Speaker Jayita Bhattacharyya is an AI/ML Nerd with a blend of technical speaking & hackathon wizardry! Applying tech to solve real-world problems. The work focus these days is on generative AI. Helping software teams incorporate AI into transforming software engineering. |
Nov 13 - Women in AI
|
|
Nov 13 - Women in AI
2025-11-13 · 17:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on November 13. Date and Location Nov 13, 2025 9 AM Pacific Online. Register for the Zoom! Copy, Paste, Customize! The Template Approach to AI Engineering Most AI implementations fail because teams treat prompt engineering as ad-hoc experimentation rather than systematic software engineering, leading to unreliable systems that don't scale beyond proof-of-concepts. This talk demonstrates engineering practices that enable reliable AI deployment through standardized prompt templates, systematic validation frameworks, and production observability. Drawing from experience developing fillable prompt templates currently being validated in production environments processing thousands of submissions, I'll share how Infrastructure as Code principles apply to LLM workflows, why evaluation metrics like BLEU scores are critical for production reliability, and how systematic failure analysis prevents costly deployment issues. Attendees will walk away with understanding of practical frameworks for improving AI system reliability and specific strategies for building more consistent, scalable AI implementations. About the Speaker Jeanne McClure is a postdoctoral scholar at NC State's Data Science and AI Academy with expertise in systematic AI implementation and validation. Her research transforms experimental AI tools into reliable production systems through standardized prompt templates, rigorous testing frameworks, and systematic failure analysis. She holds a PhD in Learning, Design and Technology with additional graduate work in data science. Multimodality with Biases: Understand and Evaluate VLMs for Autonomous Driving with FiftyOne Do your VLMs really see danger? With FiftyOne, I’ll show you how to understand and evaluate vision-language models for autonomous driving — making risk and bias visible in seconds. 
We’ll compare models on the same scenes, reveal failures and edge cases, and you’ll see a simple dashboard to decide which data to curate and what to adjust. You’ll leave with a clear, practical, and replicable method to raise the bar for safety. About the Speaker Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia. The Heart of Innovation: Women, AI, and the Future of Healthcare This session explores how Artificial Intelligence is transforming healthcare by enhancing diagnosis, treatment, and patient outcomes. It highlights the importance of diverse and female perspectives in shaping AI solutions that are ethical, empathetic, and human-centered. We will discuss key applications, current challenges, and the future potential of AI in medicine. It’s a forward-looking conversation about how innovation can build a healthier world. About the Speaker Karen Sanchez is a Postdoctoral Researcher at the Center of Excellence for Generative AI at King Abdullah University of Science and Technology (KAUST), Saudi Arabia. Her research focuses on AI for Science, spanning computer vision, video understanding, and privacy-preserving machine learning. She is also an active advocate for diversity and outreach in AI, contributing to global initiatives that connect researchers and amplify underrepresented voices in technology. Language Diffusion Models Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). Challenge this notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. 
LLaDA models distributions through a forward data masking process and a reverse process, parameterized by a vanilla Transformer to predict masked tokens. Optimizing a likelihood bound provides a principled generative approach for probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue. About the Speaker Jayita Bhattacharyya is an AI/ML Nerd with a blend of technical speaking & hackathon wizardry! Applying tech to solve real-world problems. The work focus these days is on generative AI. Helping software teams incorporate AI into transforming software engineering. |
Nov 13 - Women in AI
|
|
Nov 13 - Women in AI
2025-11-13 · 17:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on November 13. Date and Location Nov 13, 2025 9 AM Pacific Online. Register for the Zoom! Copy, Paste, Customize! The Template Approach to AI Engineering Most AI implementations fail because teams treat prompt engineering as ad-hoc experimentation rather than systematic software engineering, leading to unreliable systems that don't scale beyond proof-of-concepts. This talk demonstrates engineering practices that enable reliable AI deployment through standardized prompt templates, systematic validation frameworks, and production observability. Drawing from experience developing fillable prompt templates currently being validated in production environments processing thousands of submissions, I'll share how Infrastructure as Code principles apply to LLM workflows, why evaluation metrics like BLEU scores are critical for production reliability, and how systematic failure analysis prevents costly deployment issues. Attendees will walk away with understanding of practical frameworks for improving AI system reliability and specific strategies for building more consistent, scalable AI implementations. About the Speaker Jeanne McClure is a postdoctoral scholar at NC State's Data Science and AI Academy with expertise in systematic AI implementation and validation. Her research transforms experimental AI tools into reliable production systems through standardized prompt templates, rigorous testing frameworks, and systematic failure analysis. She holds a PhD in Learning, Design and Technology with additional graduate work in data science. Multimodality with Biases: Understand and Evaluate VLMs for Autonomous Driving with FiftyOne Do your VLMs really see danger? With FiftyOne, I’ll show you how to understand and evaluate vision-language models for autonomous driving — making risk and bias visible in seconds. 
We’ll compare models on the same scenes, reveal failures and edge cases, and you’ll see a simple dashboard to decide which data to curate and what to adjust. You’ll leave with a clear, practical, and replicable method to raise the bar for safety. About the Speaker Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in Computer Vision, robotics, and Machine Learning applied to agriculture, since the early 2000s in Colombia. The Heart of Innovation: Women, AI, and the Future of Healthcare This session explores how Artificial Intelligence is transforming healthcare by enhancing diagnosis, treatment, and patient outcomes. It highlights the importance of diverse and female perspectives in shaping AI solutions that are ethical, empathetic, and human-centered. We will discuss key applications, current challenges, and the future potential of AI in medicine. It’s a forward-looking conversation about how innovation can build a healthier world. About the Speaker Karen Sanchez is a Postdoctoral Researcher at the Center of Excellence for Generative AI at King Abdullah University of Science and Technology (KAUST), Saudi Arabia. Her research focuses on AI for Science, spanning computer vision, video understanding, and privacy-preserving machine learning. She is also an active advocate for diversity and outreach in AI, contributing to global initiatives that connect researchers and amplify underrepresented voices in technology. Language Diffusion Models Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). Challenge this notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. 
LLaDA models distributions through a forward data-masking process and a reverse process, parameterized by a vanilla Transformer that predicts masked tokens. Optimizing a likelihood bound provides a principled generative approach for probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue. About the Speaker: Jayita Bhattacharyya is an AI/ML enthusiast with a blend of technical speaking and hackathon wizardry, applying tech to solve real-world problems. Her current focus is generative AI, helping software teams incorporate AI to transform software engineering. |
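The forward masking process the LLaDA abstract describes can be illustrated in a few lines. This is a minimal sketch, not the LLaDA implementation: the token IDs and the mask-token ID below are arbitrary stand-ins, and the real model works over tensors, not Python lists.

```python
import random

MASK_ID = 0  # hypothetical mask-token id


def forward_mask(tokens, t, rng=random):
    # Each token is independently replaced by the mask token with
    # probability t, the diffusion "time" drawn from [0, 1]. At t=0
    # the sequence is untouched; at t=1 everything is masked.
    return [MASK_ID if rng.random() < t else tok for tok in tokens]


tokens = [11, 42, 7, 99, 23]
random.seed(0)
masked = forward_mask(tokens, t=0.5)
# The reverse process (a vanilla Transformer in LLaDA) is trained to
# predict the original tokens at the masked positions, and a bound on
# the likelihood is optimized over random masking ratios t.
```

Training then amounts to sampling `t`, masking, and scoring the Transformer's predictions only at masked positions.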
Nov 13 - Women in AI
|
|
November Meetup at Tesco
2025-11-11 · 18:30
We're hosted this time by Tesco. The event will start at 6.30pm and finish by 9pm, with food and drinks available. This will be an in-person event (but we will try to record the talks and share the videos later). Our first talk will come from our hosts Tesco, by Max Grogan (Data Scientist) and George Sykes (Senior Data Science Engineer): Judgement Day: Rethinking Search Evaluation with Language Models. "Relevance labels are the backbone of evaluating search performance, yet human labelling is slow, costly, and hard to scale. This talk explores how LLMs can act as automated judges, producing scalable, human-like labels for ecommerce search. We'll examine our learnings from trying to build such a system, including building reliable evaluation datasets as well as the trade-offs of adopting different model and system architectures." The second talk will come from Bharav Patel (Specialist Solution Architect, OpenSearch, AWS): Query Understanding with LLMs: Approaches and Optimization in OpenSearch. "Ever wondered how to make your OpenSearch smarter with LLMs? This session dives into how Large Language Models are revolutionizing search and making it way more intuitive for users. We'll walk through the AI-powered search lifecycle and show you how LLMs can supercharge different parts of your search system:
Learn practical tips and real optimization strategies for implementing these LLM features in OpenSearch, whether you're building a new search experience from scratch or improving an existing one to make it smarter and more user-friendly." We'll also make time for Q&A on all things search and AI and some general networking. Our Search Meetup is organised by The Search Juggler, OpenSearch and Eliatra. Please provide your full name and email address when registering for the event as we will need a list of attendees for security. |
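The LLM-as-judge idea from the first talk boils down to a prompt-building step and a strict parsing step. The sketch below is a hedged illustration only: `call_llm`, the three-point label scale, and the prompt wording are assumptions for the example, not the speakers' system.

```python
def build_judge_prompt(query, product_title):
    # Ask the model to grade relevance on a fixed ordinal scale,
    # mirroring the graded labels human assessors would produce.
    return (
        "You are a search-relevance judge for an ecommerce site.\n"
        f"Query: {query}\n"
        f"Product: {product_title}\n"
        "Answer with exactly one label: Exact, Partial, or Irrelevant."
    )


def parse_label(response, allowed=("Exact", "Partial", "Irrelevant")):
    # Be strict: anything outside the scale is treated as a judging
    # failure so it can be routed to a human rather than silently kept.
    label = response.strip()
    return label if label in allowed else None


def call_llm(prompt):
    # Stub standing in for a real model call.
    return "Partial"


label = parse_label(call_llm(build_judge_prompt("running shoes", "Trail socks")))
```

In practice the interesting work is in validating such judges against a human-labelled evaluation set before trusting them at scale, which is exactly the trade-off the talk examines.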
November Meetup at Tesco
|
|
Build a Large Language Model (From Scratch)
2025-07-29 · 19:00
We are going through Build a Large Language Model (from Scratch) by Sebastian Raschka. The emphasis during the meetups will be on discussing key aspects of the chapter being covered. Code-focused discussions should take place over Discord. Raschka provides a step-by-step guide to coding your own foundation LLM from the ground up, spanning the initial design and creation stages, pretraining on a general corpus, and fine-tuning for specific tasks. Pages being discussed: Please see the latest message (also pinned) in the #current-reading channel in our Discord chat space to see which pages we'll be reviewing in this session. Please note that the session is not recorded and participants are responsible for obtaining their own copy of the text. Discord joining instructions: Buy the book (affiliate links): Book overview: In Build a Large Language Model (from Scratch), bestselling author Sebastian Raschka guides you step by step through creating your own LLM. Each stage is explained with clear text, diagrams, and examples. You'll go from the initial design and creation, to pretraining on a general corpus, and on to fine-tuning for specific tasks. Build a Large Language Model (from Scratch) teaches you how to:
|
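As a taste of the from-scratch coding these sessions cover, here is a minimal scaled dot-product self-attention over a toy embedding matrix. This is a sketch in plain Python for illustration, not the book's actual listing, and it omits the learned query/key/value projection matrices a real Transformer layer would use.

```python
import math


def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


def self_attention(X):
    # Simplified: queries, keys, and values are the raw embeddings
    # themselves (no learned W_q, W_k, W_v projections).
    d = len(X[0])
    out = []
    for q in X:
        # Scaled dot-product scores of this token against every token.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in X]
        w = softmax(scores)
        # Context vector: attention-weighted average of the values.
        out.append([sum(wj * X[j][t] for j, wj in enumerate(w)) for t in range(d)])
    return out


X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # three toy token embeddings
ctx = self_attention(X)  # one context vector per input token
```

Each output row is a convex combination of the input rows, which is why attention is often described as a differentiable lookup over the sequence.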
Build a Large Language Model (From Scratch)
|
|
Build a Large Language Model (From Scratch)
2025-07-22 · 19:00
We are going through Build a Large Language Model (from Scratch) by Sebastian Raschka. The emphasis during the meetups will be on discussing key aspects of the chapter being covered. Code-focused discussions should take place over Discord. Raschka provides a step-by-step guide to coding your own foundation LLM from the ground up, spanning the initial design and creation stages, pretraining on a general corpus, and fine-tuning for specific tasks. Pages being discussed: Please see the latest message (also pinned) in the #current-reading channel in our Discord chat space to see which pages we'll be reviewing in this session. Please note that the session is not recorded and participants are responsible for obtaining their own copy of the text. Discord joining instructions: Buy the book (affiliate links): Book overview: In Build a Large Language Model (from Scratch), bestselling author Sebastian Raschka guides you step by step through creating your own LLM. Each stage is explained with clear text, diagrams, and examples. You'll go from the initial design and creation, to pretraining on a general corpus, and on to fine-tuning for specific tasks. Build a Large Language Model (from Scratch) teaches you how to:
|
Build a Large Language Model (From Scratch)
|
|
Build a Large Language Model (From Scratch)
2025-07-15 · 19:00
We are going through Build a Large Language Model (from Scratch) by Sebastian Raschka. The emphasis during the meetups will be on discussing key aspects of the chapter being covered. Code-focused discussions should take place over Discord. Raschka provides a step-by-step guide to coding your own foundation LLM from the ground up, spanning the initial design and creation stages, pretraining on a general corpus, and fine-tuning for specific tasks. Pages being discussed: Please see the latest message (also pinned) in the #current-reading channel in our Discord chat space to see which pages we'll be reviewing in this session. Please note that the session is not recorded and participants are responsible for obtaining their own copy of the text. Discord joining instructions: Buy the book (affiliate links): Book overview: In Build a Large Language Model (from Scratch), bestselling author Sebastian Raschka guides you step by step through creating your own LLM. Each stage is explained with clear text, diagrams, and examples. You'll go from the initial design and creation, to pretraining on a general corpus, and on to fine-tuning for specific tasks. Build a Large Language Model (from Scratch) teaches you how to:
|
Build a Large Language Model (From Scratch)
|
|
Build a Large Language Model (From Scratch)
2025-07-08 · 19:00
We are going through Build a Large Language Model (from Scratch) by Sebastian Raschka. The emphasis during the meetups will be on discussing key aspects of the chapter being covered. Code-focused discussions should take place over Discord. Raschka provides a step-by-step guide to coding your own foundation LLM from the ground up, spanning the initial design and creation stages, pretraining on a general corpus, and fine-tuning for specific tasks. Pages being discussed: Please see the latest message (also pinned) in the #current-reading channel in our Discord chat space to see which pages we'll be reviewing in this session. Please note that the session is not recorded and participants are responsible for obtaining their own copy of the text. Discord joining instructions: Buy the book (affiliate links): Book overview: In Build a Large Language Model (from Scratch), bestselling author Sebastian Raschka guides you step by step through creating your own LLM. Each stage is explained with clear text, diagrams, and examples. You'll go from the initial design and creation, to pretraining on a general corpus, and on to fine-tuning for specific tasks. Build a Large Language Model (from Scratch) teaches you how to:
|
Build a Large Language Model (From Scratch)
|
|
Build a Large Language Model (From Scratch)
2025-07-01 · 19:00
We are going through Build a Large Language Model (from Scratch) by Sebastian Raschka. The emphasis during the meetups will be on discussing key aspects of the chapter being covered. Code-focused discussions should take place over Discord. Raschka provides a step-by-step guide to coding your own foundation LLM from the ground up, spanning the initial design and creation stages, pretraining on a general corpus, and fine-tuning for specific tasks. Pages being discussed: Please see the latest message (also pinned) in the #current-reading channel in our Discord chat space to see which pages we'll be reviewing in this session. Please note that the session is not recorded and participants are responsible for obtaining their own copy of the text. Discord joining instructions: Buy the book (affiliate links): Book overview: In Build a Large Language Model (from Scratch), bestselling author Sebastian Raschka guides you step by step through creating your own LLM. Each stage is explained with clear text, diagrams, and examples. You'll go from the initial design and creation, to pretraining on a general corpus, and on to fine-tuning for specific tasks. Build a Large Language Model (from Scratch) teaches you how to:
|
Build a Large Language Model (From Scratch)
|
|
Build a Large Language Model (From Scratch)
2025-06-03 · 19:00
We are going through Build a Large Language Model (from Scratch) by Sebastian Raschka. The emphasis during the meetups will be on discussing key aspects of the chapter being covered. Code-focused discussions should take place over Discord. Raschka provides a step-by-step guide to coding your own foundation LLM from the ground up, spanning the initial design and creation stages, pretraining on a general corpus, and fine-tuning for specific tasks. Pages being discussed: Please see the latest message (also pinned) in the #current-reading channel in our Discord chat space to see which pages we'll be reviewing in this session. Please note that the session is not recorded and participants are responsible for obtaining their own copy of the text. Discord joining instructions: Buy the book (affiliate links): Book overview: In Build a Large Language Model (from Scratch), bestselling author Sebastian Raschka guides you step by step through creating your own LLM. Each stage is explained with clear text, diagrams, and examples. You'll go from the initial design and creation, to pretraining on a general corpus, and on to fine-tuning for specific tasks. Build a Large Language Model (from Scratch) teaches you how to:
|
Build a Large Language Model (From Scratch)
|
|
Build a Large Language Model (From Scratch)
2025-05-27 · 19:00
We are going through Build a Large Language Model (from Scratch) by Sebastian Raschka. The emphasis during the meetups will be on discussing key aspects of the chapter being covered. Code-focused discussions should take place over Discord. Raschka provides a step-by-step guide to coding your own foundation LLM from the ground up, spanning the initial design and creation stages, pretraining on a general corpus, and fine-tuning for specific tasks. Pages being discussed: Please see the latest message (also pinned) in the #current-reading channel in our Discord chat space to see which pages we'll be reviewing in this session. Please note that the session is not recorded and participants are responsible for obtaining their own copy of the text. Discord joining instructions: Buy the book (affiliate links): Book overview: In Build a Large Language Model (from Scratch), bestselling author Sebastian Raschka guides you step by step through creating your own LLM. Each stage is explained with clear text, diagrams, and examples. You'll go from the initial design and creation, to pretraining on a general corpus, and on to fine-tuning for specific tasks. Build a Large Language Model (from Scratch) teaches you how to:
|
Build a Large Language Model (From Scratch)
|
|
Build a Large Language Model (From Scratch)
2025-05-20 · 19:00
We are going through Build a Large Language Model (from Scratch) by Sebastian Raschka. The emphasis during the meetups will be on discussing key aspects of the chapter being covered. Code-focused discussions should take place over Discord. Raschka provides a step-by-step guide to coding your own foundation LLM from the ground up, spanning the initial design and creation stages, pretraining on a general corpus, and fine-tuning for specific tasks. Pages being discussed: Please see the latest message (also pinned) in the #current-reading channel in our Discord chat space to see which pages we'll be reviewing in this session. Please note that the session is not recorded and participants are responsible for obtaining their own copy of the text. Discord joining instructions: Buy the book (affiliate links): Book overview: In Build a Large Language Model (from Scratch), bestselling author Sebastian Raschka guides you step by step through creating your own LLM. Each stage is explained with clear text, diagrams, and examples. You'll go from the initial design and creation, to pretraining on a general corpus, and on to fine-tuning for specific tasks. Build a Large Language Model (from Scratch) teaches you how to:
|
Build a Large Language Model (From Scratch)
|
|
Build a Large Language Model (From Scratch)
2025-05-13 · 19:00
We are going through Build a Large Language Model (from Scratch) by Sebastian Raschka. The emphasis during the meetups will be on discussing key aspects of the chapter being covered. Code-focused discussions should take place over Discord. Raschka provides a step-by-step guide to coding your own foundation LLM from the ground up, spanning the initial design and creation stages, pretraining on a general corpus, and fine-tuning for specific tasks. Pages being discussed: Please see the latest message (also pinned) in the #current-reading channel in our Discord chat space to see which pages we'll be reviewing in this session. Please note that the session is not recorded and participants are responsible for obtaining their own copy of the text. Discord joining instructions: Buy the book (affiliate links): Book overview: In Build a Large Language Model (from Scratch), bestselling author Sebastian Raschka guides you step by step through creating your own LLM. Each stage is explained with clear text, diagrams, and examples. You'll go from the initial design and creation, to pretraining on a general corpus, and on to fine-tuning for specific tasks. Build a Large Language Model (from Scratch) teaches you how to:
|
Build a Large Language Model (From Scratch)
|
|
Build a Large Language Model (From Scratch)
2025-05-06 · 19:00
We are going through Build a Large Language Model (from Scratch) by Sebastian Raschka. The emphasis during the meetups will be on discussing key aspects of the chapter being covered. Code-focused discussions should take place over Discord. Raschka provides a step-by-step guide to coding your own foundation LLM from the ground up, spanning the initial design and creation stages, pretraining on a general corpus, and fine-tuning for specific tasks. Pages being discussed: Please see the latest message (also pinned) in the #current-reading channel in our Discord chat space to see which pages we'll be reviewing in this session. Please note that the session is not recorded and participants are responsible for obtaining their own copy of the text. Discord joining instructions: Buy the book (affiliate links): Book overview: In Build a Large Language Model (from Scratch), bestselling author Sebastian Raschka guides you step by step through creating your own LLM. Each stage is explained with clear text, diagrams, and examples. You'll go from the initial design and creation, to pretraining on a general corpus, and on to fine-tuning for specific tasks. Build a Large Language Model (from Scratch) teaches you how to:
|
Build a Large Language Model (From Scratch)
|
|
NL dbt meetup: 11th Edition
2025-04-10 · 15:30
Our friends at Floryn are hosting the upcoming event at their office in Den Bosch (5 min walk from the train station). 17:30 – 🍕 Welcome 18:00 – 🎤 Lights, dbt, Action: Making Analytics Engineering Visible (and Fast) – Annebelle Olminkhof (Data Analyst) & Tijs Bronnenberg (Business Analyst) @ Floryn 18:30 – 🎤 Data Analyst AI Agent powered by dbt Metadata – Daniel Herrera (Analytics Engineer & Developer Advocate) @ Teradata 19:00 – 🥤 Drinks & Snacks --- About the talks 🎤 Lights, dbt, Action: Making Analytics Engineering Visible (and Fast) In 2022, our team of three data analysts at Floryn implemented dbt to build a more scalable and structured analytics workflow. At the time, most of our business logic was embedded in LookML within Looker, and dbt was more of a “nice to have” than a core component of our workflow. That changed last year when we migrated to a new BI tool, forcing us to extract all our LookML-based transformations into dbt. This transition made us realize how much of our logic had been siloed within Looker, and it became the catalyst for fully centralizing our data models in dbt. By making dbt the foundation of our data analytics products, we standardized data transformation, improved data quality, and created a more scalable approach to managing our data. Beyond improving our data models, dbt has enabled us to develop entirely new analytics products that wouldn’t have been possible before. With dbt as our single source of truth, our analytics engineers can now build cleaner, more reliable models while ensuring consistency across all reporting and analysis. We’ve leveraged dbt to develop metric trees that provide deeper insights into business performance, as well as a data-driven warning system. By making dbt central to our analytics strategy, we’ve enhanced trust in our data and unlocked new opportunities for delivering meaningful insights.
In this talk, we'll share our journey from LookML-dependent modeling to a fully dbt-driven analytics framework, the challenges we faced, and the lessons we learned along the way. Whether you're considering dbt for your organization or looking to scale your analytics capabilities, our story highlights the power of a well-structured, centralized data strategy. 🎤 Data Analyst AI Agent powered by dbt Metadata Generative-AI-adjacent terms like "Agentic AI" or "vibe coding" are frequently used, or misused, as marketing hooks rather than as practical frameworks for understanding the technical reality of generative AI. In this talk, we aim to cut through the noise by building a data analyst AI agent completely from scratch. We will not rely on any libraries or frameworks. Instead, we will focus on what it actually takes to create an agent that can generate insights from data. One of the biggest challenges when working with large language models for SQL query generation is providing the right context. Helping the model understand the structure and meaning of your data, including databases, tables, and columns and what they contain, is often the hardest part. However, if you are using dbt, you already have access to rich metadata that can be passed to the model. This gives it the necessary understanding of the available data structures to generate accurate queries. In this session, we will walk through how to use that metadata effectively and how to connect an LLM to your database. By the end, you'll have a clear understanding of what it takes to build a functioning data analyst agent from the ground up. --- Join the dbt Slack community: https://community.getdbt.com/ Join the conversation in the #local-netherlands channel in dbt Slack to connect with other data practitioners locally. To attend, please read the Required Participation Language for In-Person Events with dbt Labs: https://www.getdbt.com/legal/health-and-safety-policy |
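The core move in the second talk, feeding dbt's model metadata to an LLM as schema context, can be sketched roughly as follows. This is a hedged illustration of a typical setup, not the speaker's code: the field names used here follow dbt's manifest artifact, but the tiny inline manifest stands in for a real `target/manifest.json`.

```python
def schema_context_from_manifest(manifest):
    # Collect model and column descriptions from dbt's manifest
    # artifact so the LLM knows which tables and columns exist
    # and what they mean.
    lines = []
    for node in manifest.get("nodes", {}).values():
        if node.get("resource_type") != "model":
            continue
        lines.append(f"Table {node['name']}: {node.get('description', '')}")
        for col in node.get("columns", {}).values():
            lines.append(f"  - {col['name']}: {col.get('description', '')}")
    return "\n".join(lines)


# A tiny inline manifest stands in for target/manifest.json.
manifest = {
    "nodes": {
        "model.shop.orders": {
            "resource_type": "model",
            "name": "orders",
            "description": "One row per order",
            "columns": {
                "order_id": {"name": "order_id", "description": "Primary key"},
            },
        }
    }
}

context = schema_context_from_manifest(manifest)
prompt = context + "\n\nWrite a SQL query answering: how many orders are there?"
```

The resulting prompt gives the model exactly the "structure and meaning" context the talk identifies as the hardest part; the actual LLM call and database connection are left out here.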
NL dbt meetup: 11th Edition
|
|
Pre-training and fine-tuning LLMs from scratch
2025-03-09 · 15:00
Dive into the fascinating world of language models by joining our event series! Based on the book "Build a Large Language Model from Scratch" by Sebastian Raschka (https://www.manning.com/books/build-a-large-language-model-from-scratch), this series focuses on two fundamentals: 1. Building a complete language model from scratch 2. Fine-tuning techniques for a pre-trained model Session format: - Each meeting focuses on a specific chapter of the book - Sessions alternate between theory and hands-on practice - Dedicated time for questions and discussion lets us dig deeper into the harder concepts To get the most out of the experience, we encourage you to: - Read the corresponding chapter before each session - Prepare your questions and observations - Share your thoughts during the group discussions Technical prerequisites: - Proficiency in object-oriented programming in Python - Curiosity and a desire to explore the inner workings of language models This final session is special: we have the honor of welcoming the book's author, Sebastian Raschka, who will share advice and recommendations for going further in Machine Learning, particularly in Natural Language Processing and LLMs. We will also have the chance to ask him our questions. Program for the final session of the series, this Sunday 09/03/2025: 1. Presentation of Chapter 7, "Finetuning to follow instructions": how to tune a pre-trained LLM so it can follow instructions, 16:00 to 17:00 2. Q&A on Chapter 7, 17:00 to 17:10 3. Advice and recommendations from Sebastian Raschka for going further, 17:10 to 17:30 4.
Q&A with Sebastian Raschka, 17:30 to 18:00. Join us for this exciting adventure at the heart of the technologies shaping the future of natural language processing! |
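Instruction fine-tuning of the kind Chapter 7 covers starts by formatting each example into a prompt-response pair the model is trained to complete. The template below is a common Alpaca-style convention used for illustration, not necessarily the book's exact format:

```python
def format_instruction_example(instruction, inp, output):
    # Alpaca-style template: the model is trained so that, given the
    # prompt up to "### Response:", it continues with the desired output.
    prompt = (
        "Below is an instruction that describes a task.\n\n"
        f"### Instruction:\n{instruction}\n"
    )
    if inp:  # the optional input section is omitted when empty
        prompt += f"\n### Input:\n{inp}\n"
    prompt += "\n### Response:\n"
    return prompt, prompt + output


prompt, full_text = format_instruction_example(
    "Translate to French.", "Good morning", "Bonjour"
)
```

During training, the loss is typically computed only (or mostly) on the response tokens, so the model learns to answer rather than to reproduce the instruction.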
Pre-training and fine-tuning LLMs from scratch
|