talk-data.com
Activities & events
ClickHouse Gurgaon/Delhi Meetup
2026-01-10 · 05:00
Start 2026 with the ClickHouse India community in Gurgaon! Connect with fellow data practitioners and hear from industry experts through engaging talks focused on lessons learned, best practices, and modern data challenges.

👉🏼 RSVP to secure your spot! Interested in speaking at this meetup or future ClickHouse events? 🎤 Shoot an email to [email protected] and she'll be in touch.

🎤 Session Details: Inside ClickStack: Engineering Observability for Scale
Dive deep into ClickStack, ClickHouse's fresh approach to observability built for engineers who care about speed, scale, and simplicity. We'll unpack the technical architecture behind how ClickStack handles metrics, logs, and traces using ClickHouse as the backbone for real-time, high-cardinality analytics. Expect a hands-on look at ingestion pipelines, schema design patterns, query optimization, and the integrations that make ClickStack tick.
Speaker: Rakesh Puttaswamy, Lead Solutions Architect @ ClickHouse

🎤 Session Details: Supercharging Personalised Notifications at Jobhai with ClickHouse
Calculating personalized alerts for 2 million users is a data-heavy challenge that requires more than just standard indexing. This talk explores how Jobhai uses ClickHouse to power its morning notification pipeline, focusing on the architectural shifts and query optimizations that made its massive scale manageable and fast.
Speakers: Sumit Kumar and Arvind Saini, Tech Leads @ Info Edge
Sumit is a seasoned software engineer with deep expertise in databases, backend systems, and machine learning. For over six years, he has led the Jobhai engineering team, driving continuous improvements across their database infrastructure and user-facing systems while streamlining workflows through ongoing innovation. Connect with Sumit Kumar on LinkedIn.
Arvind is a Tech Lead at Info Edge India Ltd with experience building and scaling backend systems for large consumer and enterprise platforms. Over the years, they have worked across system design, backend optimization, and data-driven services, contributing to initiatives such as notification platforms, workflow automation, and product revamps. Their work focuses on improving the reliability, performance, and scalability of distributed systems, and they enjoy solving complex engineering problems while mentoring teams and driving technical excellence.

🎤 Session Details: Simplifying CDC: Migrating from Debezium to ClickPipes
In this talk, Abhash shares his engineering team's journey migrating their core MySQL and MongoDB CDC flows to ClickPipes. He contrasts the previous architecture, where every schema change required manual intervention or complex Debezium configurations, with the new reality of ClickPipes' automated schema evolution, which seamlessly handles upstream schema changes and ingests flexible data without breaking pipelines.
Speaker: Abhash Solanki, DevOps Engineer @ Spyne AI
Abhash serves as a DevOps Engineer at Spyne, orchestrating the AWS infrastructure behind the company's data warehouse and CDC pipelines. Having managed complex self-hosted Debezium and Kafka clusters, he understands the operational overhead of running stateful data stacks in the cloud. He recently led the architectural shift to ClickHouse Cloud, focusing on eliminating engineering toil and automating schema evolution handling.

🎤 Session Details: Solving Analytics at Scale: From CDC to Actionable Insights
As SAMARTH's data volumes grew rapidly, its analytics systems faced challenges with frequent data changes and near real-time reporting. These challenges were compounded by the platform's inherently high cardinality in multidimensional data models, spanning institutions, programmes, states, categories, workflow stages, and time, resulting in highly complex and dynamic query patterns. This talk describes how the team evolved from basic CDC pipelines to a fast, reliable, and scalable near real-time analytics platform using ClickHouse, and shares key design and operational learnings that enabled it to process continuous high-volume transactional data and deliver low-latency analytics for operational monitoring and policy-level decision-making.
Speaker: Kunal Sharma, Software Developer @ Samarth eGov
Kunal Sharma is a data-focused professional with experience in building scalable data pipelines. His work includes designing and implementing robust ETL/ELT workflows, data-driven decision engines, and large-scale analytics platforms. At SAMARTH, he has contributed to building near real-time analytics systems, including the implementation of ClickHouse for large-scale, low-latency analytics.
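Several of these sessions revolve around ClickHouse schema design and query patterns for per-user, high-cardinality analytics. As a rough, hedged sketch of that pattern (not code from any of the talks; the host, table, and column names are made up), using the clickhouse-connect Python client:

```python
# Hypothetical table and query illustrating ORDER BY-driven pruning for
# per-user reads, the kind of access pattern a notification pipeline needs.
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost")  # assumes a local server

client.command("""
    CREATE TABLE IF NOT EXISTS job_events (
        user_id UInt64,
        job_id  UInt64,
        action  LowCardinality(String),
        ts      DateTime
    )
    ENGINE = MergeTree
    ORDER BY (user_id, ts)
""")

# Recent actions for a handful of users; the (user_id, ts) sort key lets
# ClickHouse skip most granules instead of scanning the whole table.
rows = client.query(
    "SELECT user_id, job_id, action FROM job_events "
    "WHERE user_id IN (101, 102, 103) AND ts >= now() - INTERVAL 1 DAY "
    "ORDER BY ts DESC LIMIT 100"
).result_rows
print(rows[:5])
```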
Cloud Native + AI Winter edition!
2025-12-15 · 16:30
Introduction: Welcome to the last meetup of the year! This time we again have a great international name, all the way from Denmark: Kasper is a veteran of the Cloud Native community and an inspiration to many.

Agenda:
🤝 17:30-18:10 Walk-in
👀 18:10-18:15 Welcome from Dash0
🎤 18:15-18:45 1st talk: "Breaking Free with Open Standards: OpenTelemetry and Perses for Observability" by Kasper Borg Nissen, Principal Developer Advocate at Dash0
🍕 18:45-18:55 Break & food
🎤 18:55-19:40 2nd talk: "Lessons learned creating Walrus (a high-performance Kafka alternative written in Rust)" by Daksh R
🍻 20:30-21:30 Networking & drinks (open for an impromptu lightning talk)
📌 21:30 End

1st talk: "Breaking Free with Open Standards: OpenTelemetry and Perses for Observability"
Description: Observability is the backbone of modern cloud-native applications, but many organizations find themselves locked into proprietary tools with rising costs, rigid ecosystems, and limited flexibility. In this talk, we'll explore how open observability standards like OpenTelemetry for instrumentation and Perses for monitoring-as-code are transforming the landscape by enabling vendor-neutral, scalable, and future-proof observability stacks. We'll start with an introduction to OpenTelemetry, covering how to get started, instrument applications, and provide developer-friendly abstractions for seamless auto-instrumentation. From there, we'll dive into Perses, a CNCF Sandbox project that brings open, declarative, and portable dashboards to observability. By building on these open standards, organizations gain the freedom to mix and match storage, visualization, and analytics tools without being tied to a single vendor. Join this session to learn how OpenTelemetry and Perses can help you scale observability, stay in control of your data, and ensure developers get the insights they need exactly when they need them.
Bio: Kasper is a CNCF Ambassador, former KubeCon+CloudNativeCon Co-Chair, Golden Kubestronaut, KCD Organizer, and CNCG Group Organizer. He co-founded Cloud Native Nordics to unite meetups across the region. As a Principal Developer Advocate at Dash0, he helps make observability easy for developers by advocating for better tooling, best practices, and seamless integrations.

2nd talk: "Lessons learned creating Walrus (a high-performance Kafka alternative written in Rust)" by Daksh R

Photography/video consent: We will be taking photos and videos during the event and will use these photos and videos for social media and promotional materials. By coming to the meetup, you give us your consent to take photos and videos of you.
Code of Conduct: All members are required to agree with the Berlin Code of Conduct.
Directions: The meetup will take place at the Mindstone Amsterdam offices in Singel. Parking is limited, but the venue is easily accessible by public transport or by bike.
Important notes:
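The first talk covers instrumenting applications with OpenTelemetry. As a small, hedged illustration of manual instrumentation with the OpenTelemetry Python SDK (the service and span names are invented, not from the talk):

```python
# Minimal manual tracing with the OpenTelemetry Python SDK; exports spans to
# the console so the example is self-contained. Names are illustrative.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

def place_order(order_id: str) -> None:
    # Each unit of work becomes a span; attributes carry queryable context.
    with tracer.start_as_current_span("place_order") as span:
        span.set_attribute("order.id", order_id)
        # ... business logic ...

place_order("o-123")
```

In practice you would swap the console exporter for an OTLP exporter pointed at your backend, which is exactly the vendor-neutral swap the talk argues for.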
Deep Agents with LangGraph: From Planning to Persistent Reasoning
2025-12-10 · 20:00
Understanding Deep Agents: The Future of AI Autonomy

The next evolution of AI agents is here. Deep Agents move beyond simple tool-calling LLMs into powerful, stateful systems that can reason over time, collaborate across sub-tasks, and deliver reliable results in real applications. This session will break down what Deep Agents are, why they matter, and how LangGraph makes them practical to build today. We'll explore the limitations of traditional agents that lose context, fail on long-running tasks, or collapse without human intervention. Then we'll introduce the four pillars of Deep Agents: planning, sub-agents, memory and state, and a virtual file system that enables durable workflows. A live walkthrough will show how LangGraph helps developers orchestrate scalable, production-ready Deep Agents with human oversight, observability, and debugging built in. You'll learn how to structure persistent reasoning, delegate tasks effectively, and maintain state across complex workflows.

What We Will Cover:
• Why traditional agents fall short on multi-step, long-running tasks
• The architecture of Deep Agents and how each pillar supports persistent reasoning
• How LangGraph enables stateful agents with feedback loops, scaling, and error recovery
• Building a Deep Agent step by step: planning, delegation, and memory management
• Real-world use cases from research automation to decision support systems
• Key considerations and safety mechanisms when deploying Deep Agents in production

Hands-On Insights: Through examples and Q&A, participants will learn how to start building Deep Agents in their own environments using LangGraph. You'll leave with the mental model, tools, and practical patterns to evolve your agent systems from simple demos into durable, intelligent applications.
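The session is built around LangGraph's stateful graphs. As a hedged, minimal sketch of the planning-then-execution shape it describes, assuming LangGraph's StateGraph API (the node names, state fields, and fake plan are illustrative, not the session's code):

```python
# Tiny stateful agent graph: a plan node followed by an execute node, with
# shared state carried between them. Real nodes would call an LLM and tools.
from typing import List, TypedDict
from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    task: str
    plan: List[str]
    results: List[str]

def plan_node(state: AgentState) -> dict:
    # Placeholder planner: a real agent would ask an LLM to decompose the task.
    return {"plan": [f"research: {state['task']}", f"summarize: {state['task']}"]}

def execute_node(state: AgentState) -> dict:
    # Work through the plan; persisted state is what makes the agent "deep".
    return {"results": [f"done -> {step}" for step in state["plan"]]}

graph = StateGraph(AgentState)
graph.add_node("plan", plan_node)
graph.add_node("execute", execute_node)
graph.set_entry_point("plan")
graph.add_edge("plan", "execute")
graph.add_edge("execute", END)

app = graph.compile()
print(app.invoke({"task": "compare vector databases", "plan": [], "results": []}))
```

Compiling with a checkpointer (for example, a persistent saver) is what turns this sketch into the resumable, long-running reasoning the session focuses on.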
🎄Christmas Special🎄 Week 5: Small Extensions & New Tools for Your Agent
2025-12-06 · 09:00
🎄 Build & Learn: Christmas Special, "Agents & the Future of Work". Join us for Week 5 of the Agentic Flow 7-Week Cycle — Small Extensions & New Tools for Your Agent. In Week 5, we keep things simple and supportive. Many people are still shaping their proof-of-concepts, so this session focuses on strengthening what you already built and adding small enhancements that make your workflow more capable. 💡 You can join anytime! Just pick one of the frameworks below and we'll connect you with others using the same stack so you can learn and build together:
Each week we'll introduce a short lecture or demo on one practical sub-topic (e.g. observability, memory, cost tracking, evaluations), then spend the rest of the session working in small groups to experiment and share progress. Weekly flow:
Hands-On Build Time: choose ONE improvement path (a small sketch of path 2 follows this listing):
1️⃣ Add one new tool: API call · file reader · search tool · classifier
2️⃣ Improve stability: guardrails · retries · output validation
3️⃣ Optimize context: shorten instructions · summarize memory · reduce token cost
4️⃣ Reduce latency/cost: caching · fewer model calls · fallback models
✨ Who's Hosting? I'm Lindsey, a senior data scientist working on AI, causal inference, and data products. I've built models for fraud detection, uplift modeling, and LLM applications.
📅 When? Saturday, Dec 13, 10:00 AM - 1 PM
📍 Where? Joachimsthaler Str. 43, 10623 Berlin (ComebuyTEA)
💻 Bring: Your laptop, an idea, or just curiosity!
👩💻 No experience needed—just curiosity. Grab a coffee, meet cool people, and work on something fun.
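As a hedged example of improvement path 2, guardrails and retries around a model call (call_model is a placeholder for whichever framework you picked, not a real API):

```python
# Wrap a model call with retries, backoff, and simple output validation.
import json
import time

def call_model(prompt: str) -> str:
    raise NotImplementedError  # plug in your framework's completion call here

def call_with_guardrails(prompt: str, retries: int = 3) -> dict:
    last_err = None
    for attempt in range(retries):
        try:
            raw = call_model(prompt)
            parsed = json.loads(raw)           # validation: output must be JSON
            if "answer" not in parsed:         # guardrail: required field present
                raise ValueError("missing 'answer' field")
            return parsed
        except Exception as err:               # retry on any failure
            last_err = err
            time.sleep(2 ** attempt)           # exponential backoff
    raise RuntimeError(f"model call failed after {retries} attempts: {last_err}")
```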
🎄Christmas Special🎄 Week 4: Optimization, Debugging & Measuring Improvement
2025-11-30 · 09:00
🎄 Build & Learn: Christmas Special, "Agents & the Future of Work". Join us for Week 4 of the Agentic Flow 7-Week Cycle. In Week 4, we move from getting things working → making them work better. You now have a dataset, a baseline metric, and a working API connection, so this week we'll focus on improving a real part of your workflow. 💡 You can join anytime! Just pick one of the frameworks below and we'll connect you with others using the same stack so you can learn and build together:
Each week we'll introduce a short lecture or demo on one practical sub-topic (e.g. observability, memory, cost tracking, evaluations), then spend the rest of the session working in small groups to experiment and share progress. Weekly flow:
Hands-On Build Time (a small measurement sketch follows this listing):
1️⃣ Debug your workflow
2️⃣ Improve one chosen metric (accuracy, speed, cost, consistency)
✨ Who's Hosting? I'm Lindsey, a senior data scientist working on AI, causal inference, and data products. I've built models for fraud detection, uplift modeling, and LLM applications.
📅 When? Sunday, Nov 30, 10:00 AM - 11:30 AM
📍 Where? Washingtonpl. 3, 10557 Berlin, Germany
💻 Bring: Your laptop, an idea, or just curiosity!
👩💻 No experience needed—just curiosity. Grab a coffee, meet cool people, and work on something fun.
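As a hedged sketch of the measurement step (run_agent is a placeholder for the workflow you built in earlier weeks):

```python
# Measure a baseline: accuracy against a tiny labelled set plus average latency.
import time

def run_agent(question: str) -> str:
    raise NotImplementedError  # plug in your own workflow here

def evaluate(dataset: list[tuple[str, str]]) -> dict:
    correct, latencies = 0, []
    for question, expected in dataset:
        start = time.perf_counter()
        answer = run_agent(question)
        latencies.append(time.perf_counter() - start)
        correct += int(expected.lower() in answer.lower())  # crude containment match
    return {
        "accuracy": correct / len(dataset),
        "avg_latency_s": sum(latencies) / len(latencies),
    }

# Example: evaluate([("What is the capital of France?", "Paris"), ...])
```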
AI Ops with Databricks. Accelerating AI value in production. Part 1
2025-11-26 · 18:00
Stop Experimenting. Start Delivering. Go from Proof-of-Concept to Production. Join us for Part 1 of our AI Ops with Databricks series, where we explore how to move from AI experimentation to real, production-scale value. In this session, Wynand Jordaan and John Beddow will unpack the practical steps, architecture patterns, and operational best practices for delivering AI at scale using Databricks. We’ll discuss:
Whether you’re a data engineer, ML practitioner, or tech leader looking to scale AI impact, this session will give you actionable insights and a roadmap to production success. Speakers:
Join us to learn, share ideas, and connect with others shaping the future of AI operations.
🎄Christmas Special🎄 Week 3: Work Through a Real Dataset
2025-11-22 · 09:00
🎄 Build & Learn: Christmas Special, "Agents & the Future of Work". Join us for Week 3 of the Agentic Flow 7-Week Cycle — this week we'll solidify your project foundations by choosing the right dataset, defining a baseline metric, and getting your authentication + API calls fully working. 💡 You can join anytime! Just pick one of the frameworks below and we'll connect you with others using the same stack so you can learn and build together:
Each week we'll introduce a short lecture or demo on one practical sub-topic (e.g. observability, memory, cost tracking, evaluations), then spend the rest of the session working in small groups to experiment and share progress. Weekly flow:
Week 3 Hands-On Building Time (a minimal auth sketch follows this listing):
1️⃣ Fix authentication + API access issues
2️⃣ Pick the right dataset or workflow source
3️⃣ Define a baseline + evaluation metric
✨ Who's Hosting? I'm Lindsey, a senior data scientist working on AI, causal inference, and data products. I've built models for fraud detection, uplift modeling, and LLM applications.
📅 When? Saturday, Nov 22, 10:00 AM - 12:00 PM
📍 Where? WeWork, Kemperplatz 1 Mitte D, 10785 Berlin
💻 Bring: Your laptop, an idea, or just curiosity!
👩💻 No experience needed—just curiosity. Grab a coffee, meet cool people, and work on something fun.
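As a hedged sketch of step 1, the environment variable name, endpoint, and auth header below are placeholders for whichever provider you use:

```python
# Read an API key from the environment and make one authenticated call;
# never hard-code keys in notebooks you share.
import os
import requests

API_KEY = os.environ["MY_PROVIDER_API_KEY"]   # hypothetical variable name

resp = requests.get(
    "https://api.example.com/v1/models",       # placeholder endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```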
Embrace the future of software engineering to drive innovation
2025-11-19 · 19:30
Mark Tomlinson @ FreedomPay, Alois Reitbauer @ Dynatrace
Are you leveraging agentic AI on Microsoft Azure but struggling with visibility and performance? Join us for a 20-minute session where we'll explore how Dynatrace provides impactful observability for your AI workloads. Learn how FreedomPay, a leader in FinTech, uses Dynatrace to gain a holistic view of transactions, reduce resolution times by 80%, scale across AI workloads, and optimize performance to drive innovation.
Event: Microsoft Ignite 2025
State, Scale, and Signals: Rethinking Orchestration with Durable Execution
2025-11-16 · 23:19
Preeti Somal, EVP of Engineering @ Temporal, with host Tobias Macey
Summary: In this episode, Preeti Somal, EVP of Engineering at Temporal, talks about the durable execution model and how it reshapes the way teams build reliable, stateful systems for data and AI. She explores Temporal's code-first programming model (workflows, activities, task queues, and replay) and how it eliminates hand-rolled retry, checkpoint, and error-handling scaffolding while letting data remain where it lives. Preeti shares real-world patterns for replacing DAG-first orchestration, integrating application and data teams through signals and Nexus for cross-boundary calls, and using Temporal to coordinate long-running, human-in-the-loop, and agentic AI workflows with full observability and auditability. She also discusses heuristics for choosing Temporal alongside (or instead of) traditional orchestrators, managing scale without moving large datasets, and lessons from running durable execution as a cloud service.

Announcements:
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Data teams everywhere face the same problem: they're forcing ML models, streaming data, and real-time processing through orchestration tools built for simple ETL. The result? Inflexible infrastructure that can't adapt to different workloads. That's why Cash App and Cisco rely on Prefect. Cash App's fraud detection team got what they needed: flexible compute options, isolated environments for custom packages, and seamless data exchange between workflows. Each model runs on the right infrastructure, whether that's high-memory machines or distributed compute. Orchestration is the foundation that determines whether your data team ships or struggles. ETL, ML model training, AI engineering, streaming: Prefect runs it all from ingestion to activation in one platform. Whoop and 1Password also trust Prefect for their data operations. If these industry leaders use Prefect for critical workflows, see what it can do for you at dataengineeringpodcast.com/prefect.
Data migrations are brutal. They drag on for months, sometimes years, burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
Composable data infrastructure is great, until you spend all of your time gluing it together. Bruin is an open source framework, driven from the command line, that makes integration a breeze. Write Python and SQL to handle the business logic, and let Bruin handle the heavy lifting of data movement, lineage tracking, data quality monitoring, and governance enforcement. Bruin allows you to build end-to-end data workflows using AI, has connectors for hundreds of platforms, and helps data teams deliver faster. Teams that use Bruin need less engineering effort to process data and benefit from a fully integrated data platform. Go to dataengineeringpodcast.com/bruin today to get started. And for dbt Cloud customers, they'll give you $1,000 credit to migrate to Bruin Cloud.

Your host is Tobias Macey, and today I'm interviewing Preeti Somal about how to incorporate durable execution and state management into AI application architectures.

Interview:
Introduction
How did you get involved in the area of data management?
Can you describe what durable execution is and how it impacts system architecture?
With the strong focus on state maintenance and high reliability, what are some of the most impactful ways that data teams are incorporating tools like Temporal into their work?
One of the core primitives in Temporal is a "workflow". How does that compare to similar primitives in common data orchestration systems such as Airflow, Dagster, Prefect, etc.?
What are the heuristics that you recommend when deciding which tool to use for a given task, particularly in data/pipeline oriented projects?
Even if a team is using a more data-focused orchestration engine, what are some of the ways that Temporal can be applied to handle the processing logic of the actual data?
AI applications are also very dependent on reliable data to be effective in production contexts. What are some of the design patterns where durable execution can be integrated into RAG/agent applications?
What are some of the conceptual hurdles that teams experience when they are starting to adopt Temporal or other durable execution frameworks?
What are the most interesting, innovative, or unexpected ways that you have seen Temporal/durable execution used for data/AI services?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Temporal?
When is Temporal/durable execution the wrong choice?
What do you have planned for the future of Temporal for data and AI systems?

Contact Info: LinkedIn

Parting Question: From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements: Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links: Temporal, Durable Execution, Flink, Machine Learning Epoch, Spark Streaming, Airflow, Directed Acyclic Graph (DAG), Temporal Nexus, TensorZero, AI Engineering Podcast Episode

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
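The episode centers on Temporal's workflow and activity primitives with automatic retry and replay. As a hedged, minimal sketch of what that looks like with the Temporal Python SDK (the ingest workflow, activity, and names are illustrative, not code from the episode):

```python
# Minimal durable-execution sketch with the Temporal Python SDK. The ingest
# workflow and activity names are hypothetical.
from datetime import timedelta
from temporalio import activity, workflow

@activity.defn
async def load_partition(partition: str) -> int:
    # An ingest step that may fail transiently. Temporal retries the activity
    # and replays the workflow from durable history instead of restarting it.
    return 42  # e.g. number of rows loaded

@workflow.defn
class IngestWorkflow:
    @workflow.run
    async def run(self, partition: str) -> int:
        # No hand-rolled retry/checkpoint code: timeouts and retry policy are
        # declared on the call, and progress survives worker crashes.
        return await workflow.execute_activity(
            load_partition,
            partition,
            start_to_close_timeout=timedelta(minutes=5),
        )
```

A worker registered on a task queue would host this workflow and activity; retries, timers, and intermediate state live in the Temporal service rather than in bespoke scaffolding, which is the "durable execution" point the episode makes.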
Event: Data Engineering Podcast
Nov 13 - Women in AI
2025-11-13 · 17:00
Hear talks from experts on the latest topics in AI, ML, and computer vision on November 13.

Date and Location: Nov 13, 2025, 9 AM Pacific, online. Register for the Zoom!

Copy, Paste, Customize! The Template Approach to AI Engineering
Most AI implementations fail because teams treat prompt engineering as ad-hoc experimentation rather than systematic software engineering, leading to unreliable systems that don't scale beyond proof-of-concepts. This talk demonstrates engineering practices that enable reliable AI deployment through standardized prompt templates, systematic validation frameworks, and production observability. Drawing from experience developing fillable prompt templates currently being validated in production environments processing thousands of submissions, I'll share how Infrastructure as Code principles apply to LLM workflows, why evaluation metrics like BLEU scores are critical for production reliability, and how systematic failure analysis prevents costly deployment issues. Attendees will walk away with an understanding of practical frameworks for improving AI system reliability and specific strategies for building more consistent, scalable AI implementations.
About the Speaker: Jeanne McClure is a postdoctoral scholar at NC State's Data Science and AI Academy with expertise in systematic AI implementation and validation. Her research transforms experimental AI tools into reliable production systems through standardized prompt templates, rigorous testing frameworks, and systematic failure analysis. She holds a PhD in Learning, Design and Technology with additional graduate work in data science.

Multimodality with Biases: Understand and Evaluate VLMs for Autonomous Driving with FiftyOne
Do your VLMs really see danger? With FiftyOne, I'll show you how to understand and evaluate vision-language models for autonomous driving, making risk and bias visible in seconds. We'll compare models on the same scenes, reveal failures and edge cases, and you'll see a simple dashboard to decide which data to curate and what to adjust. You'll leave with a clear, practical, and replicable method to raise the bar for safety.
About the Speaker: Paula Ramos has a PhD in Computer Vision and Machine Learning, with more than 20 years of experience in the technological field. She has been developing novel integrated engineering technologies, mainly in computer vision, robotics, and machine learning applied to agriculture, since the early 2000s in Colombia.

The Heart of Innovation: Women, AI, and the Future of Healthcare
This session explores how artificial intelligence is transforming healthcare by enhancing diagnosis, treatment, and patient outcomes. It highlights the importance of diverse and female perspectives in shaping AI solutions that are ethical, empathetic, and human-centered. We will discuss key applications, current challenges, and the future potential of AI in medicine. It's a forward-looking conversation about how innovation can build a healthier world.
About the Speaker: Karen Sanchez is a Postdoctoral Researcher at the Center of Excellence for Generative AI at King Abdullah University of Science and Technology (KAUST), Saudi Arabia. Her research focuses on AI for Science, spanning computer vision, video understanding, and privacy-preserving machine learning. She is also an active advocate for diversity and outreach in AI, contributing to global initiatives that connect researchers and amplify underrepresented voices in technology.

Language Diffusion Models
Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). This talk challenges that notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. LLaDA models distributions through a forward data-masking process and a reverse process, parameterized by a vanilla Transformer to predict masked tokens. Optimizing a likelihood bound provides a principled generative approach for probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue.
About the Speaker: Jayita Bhattacharyya is an AI/ML nerd with a blend of technical speaking and hackathon wizardry, applying tech to solve real-world problems. Their work these days focuses on generative AI, helping software teams incorporate AI into transforming software engineering.
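As a hedged illustration of the "fillable prompt template" idea from the first talk (the template text, fields, and workflow are invented, not the speaker's):

```python
# A fillable prompt template treated like a versioned code artifact: the fields
# are explicit, so the same template can be tested against a fixed eval set.
from string import Template

SUMMARIZE_TEMPLATE = Template(
    "You are a $role.\n"
    "Summarize the following $doc_type in at most $max_words words.\n"
    "Return JSON with keys: summary, confidence.\n\n"
    "$document"
)

prompt = SUMMARIZE_TEMPLATE.substitute(
    role="compliance analyst",
    doc_type="incident report",
    max_words=120,
    document="...",
)
print(prompt)
```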
🎄Christmas Special🎄 Week 2: Build Your First Agentic Workflow
2025-11-08 · 09:00
🎄 Build & Learn: Christmas Special, "Agents & the Future of Work". Join us for Week 2 of the Agentic Flow 7-Week Cycle — we'll each build a simple working agentic workflow using a framework of our choice and real data. 💡 You can join anytime! Just pick one of the frameworks below and we'll connect you with others using the same stack so you can learn and build together:
Each week we'll introduce a short lecture or demo on one practical sub-topic (e.g. observability, memory, cost tracking, evaluations), then spend the rest of the session working in small groups to experiment and share progress. Weekly flow:
Week 2 — Build your first end-to-end working pipeline. We'll:
1️⃣ Revisit key ideas behind agentic flow (quick recap & Q&A)
2️⃣ Choose or define your dataset/workflow
3️⃣ Build your first end-to-end working pipeline, something small but functional
4️⃣ Share early results and blockers
✨ Who's Hosting? I'm Lindsey, a senior data scientist working on AI, causal inference, and data products. I've built models for fraud detection, uplift modeling, and LLM applications.
📅 When? Saturday, Nov 8, 10:00 AM - 12:00 PM
📍 Where? WeWork, Kemperplatz 1 Mitte D, 10785 Berlin
💻 Bring: Your laptop, an idea, or just curiosity!
👩💻 No experience needed—just curiosity. Grab a coffee, meet cool people, and work on something fun.
IMPACT 2025 Virtual Summit for Data and AI Observability
2025-11-06 · 17:30
Important: Register on the event website to receive the joining link (RSVPing on Meetup will NOT get you the joining link). If you can't make it to the live session, register anyway to receive the recordings.

Description: Join Monte Carlo for IMPACT 2025, our flagship virtual summit on data and AI observability. Hear from the most forward-thinking leaders as they share how to build resilient, trustworthy systems across the modern data and AI estate. Whether you're leading enterprise AI initiatives, managing large-scale data platforms, or tackling governance and compliance, IMPACT Virtual is your front-row seat to the next era of trusted data and AI. Join the half-day event to hear:

Register on the event website to receive the joining link (RSVPing on Meetup will NOT get you the joining link).