Stream #35: Production-grade AI agents with SurrealDB & Pydantic AI

Explore the practical challenges of taking Graph-RAG agents from experiment to production with SurrealDB and Pydantic AI. Unlike simple RAG setups, these agents need durable, queryable memory, checked rules for agent requests and responses, and observability that surfaces retrieval and validation drift. In this session, SurrealDB and Pydantic AI show a compact, reproducible pattern: build a KG, run a typed pai-memory agent via the Gateway, and use Logfire/OTel to trace and diagnose failures.

About SurrealDB Stream SurrealDB Stream is our ongoing developer-focused livestream series on building AI-ready applications with SurrealDB, featuring hands-on demos, code examples and fireside discussions. Explore our past episodes here: https://www.youtube.com/playlist?list=PLvuQflRR4UzblK6jpMoFE_MuJ8JZ2e14_

About the speakers

Samuel Colvin - Pydantic, CEO & Founder Samuel Colvin is a Python and Rust developer and the creator of Pydantic. The Pydantic library is downloaded over 300M times per month and is a dependency of many GenAI Python libraries, including the OpenAI SDK, the Anthropic SDK, LangChain, AutoGPT, instructor and LlamaIndex.

Tobie Morgan Hitchcock - SurrealDB, CEO & co-founder Tobie Morgan Hitchcock is CEO & Co-Founder of SurrealDB. He is an experienced tech entrepreneur, developer, and software engineer, with 17 years’ experience in the software and cloud-computing industries. In 2021 he founded SurrealDB with the aim of building the ultimate cloud database for tomorrow's applications. He has experience in a wide range of software stacks and development languages, with a focus on distributed databases and highly available architectures.

Martin Schaer - SurrealDB, Solutions Engineer Martin is a computer science engineer working at SurrealDB and on his own GenAI startup. He recently worked in lab automation, where he designed and developed a declarative framework for instrument drivers and a 3D visualiser for testing robotic transport solutions. His background also includes founding a successful advertising agency in Costa Rica and extensive work in web development, UX, branding, and digital marketing.

Stream #35: Production-grade AI agents with SurrealDB & Pydantic AI

These are the notes of the previous "How to Build a Portfolio That Reflects Your Real Skills" event:

Properties of an ideal portfolio repository:

  • Built to prove employable skills and readiness for real work
  • Fewer projects, carefully chosen to match job requirements
  • Clean, readable, refactored code that follows best practices
  • Detailed READMEs (setup, features, tech stack, decisions, how to deploy, testing strategy, etc.)
  • Logical, meaningful commits that show development process <- you can follow the git history for important commits/features
  • Clear architecture (layers, packages, separation of concerns) <- use best practices
  • Unit and integration tests included and explained <-- also talk about them in the README
  • Proper validation, exceptions, and edge case handling
  • Polished, complete, production-like projects only
  • “Can this person work on our codebase?” <-- reviewers will ask this
  • Written for recruiters, hiring managers, and senior engineers
  • Uses industry-relevant and job-listed technologies <- tech stack should match the CV
  • Well-scoped, realistic features similar to real products
  • Consistent style, structure, and conventions across projects
  • Environment variables, clear setup steps, sample configs
  • Minimal, justified dependencies with clear versioning
  • Proper logging and meaningful log messages
  • No secrets committed, basic security best practices applied
  • Shows awareness of scaling, performance, and future growth <- at least have a "possible improvements" section in the README
  • A list of ADRs (architecture decision records) explains design choices and trade-offs <- should be a part of the documentation

📌 Backend & Frontend Portfolio Project Ideas

These projects are intentionally reusable across tech stacks. Following tutorials and reusing patterns is expected — what matters is:

  • understanding the architecture
  • explaining trade-offs
  • documenting decisions clearly

☕ Junior Java Backend Developer (Spring Boot)

1. Shop Manager Application

A monolithic Spring Boot app designed with microservice-style boundaries. Features

  • Secure user registration & login
  • Role-based access control using JWT
  • REST APIs for:
      • Users
      • Products
      • Inventory
      • Orders
  • Automatic inventory updates when orders are placed
  • CSV upload for bulk product & inventory import
  • Clear service boundaries (UserService, OrderService, InventoryService, etc.)

Engineering Focus

  • Clean architecture (controllers, services, repositories)
  • Global exception handling
  • Database migrations (Flyway/Liquibase)
  • Unit & integration testing
  • Clear README explaining architecture decisions

2. Parallel Data Processing Engine

Backend service for processing large datasets efficiently (see the sketch after the lists below). Features

  • Upload large CSV/log files
  • Split data into chunks
  • Process chunks in parallel using:
      • ExecutorService
      • CompletableFuture
  • Aggregate and return results

Demonstrates

  • Java concurrency
  • Thread pools & async execution
  • Performance optimization
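
A minimal sketch of the chunk-and-aggregate pattern described above, shown in Python for brevity (a Java version would use ExecutorService or CompletableFuture in the same shape). The file path, chunk size, and per-chunk word count are illustrative assumptions rather than part of the original idea.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def chunks(lines, size):
    """Split a list of lines into fixed-size chunks."""
    for i in range(0, len(lines), size):
        yield lines[i:i + size]

def process_chunk(chunk):
    """Per-chunk work: here a word count; a real project would parse/validate/transform."""
    counts = Counter()
    for line in chunk:
        counts.update(line.split())
    return counts

def process_file(path, chunk_size=10_000, workers=4):
    with open(path) as f:
        lines = f.readlines()
    total = Counter()
    # Fan chunks out to a worker pool, then aggregate the partial results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(process_chunk, chunks(lines, chunk_size)):
            total.update(partial)
    return total

if __name__ == "__main__":
    # "access.log" is a placeholder path for whatever file the service ingests.
    print(process_file("access.log").most_common(10))
```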

3. Distributed Task Queue System

Simple async job processing system. Features

  • One service submits tasks
  • Another service processes them asynchronously
  • Uses Kafka or RabbitMQ
  • Tasks: report generation, data transformation

Demonstrates

  • Message-driven architecture
  • Async workflows
  • Eventual consistency

4. Rate Limiting & Load Control Service

Standalone service that protects APIs from abuse (see the sketch after the lists below). Features

  • Token bucket or sliding window algorithms
  • Redis-backed counters
  • Per-user or per-IP limits

Demonstrates

  • Algorithmic thinking
  • Distributed state
  • API protection patterns
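
A minimal in-process token bucket sketch in Python, purely to illustrate the algorithm; a production version would keep the same per-key state in Redis so the limit holds across instances. Capacity and refill rate are placeholder values.

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilled at `rate` tokens per second."""
    def __init__(self, capacity=10, rate=5.0):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to the time elapsed since the last request.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per user or IP; a Redis hash would replace this dict in a distributed setup.
buckets = {}

def is_allowed(client_id: str) -> bool:
    return buckets.setdefault(client_id, TokenBucket()).allow()

if __name__ == "__main__":
    print([is_allowed("user-1") for _ in range(12)])  # roughly: the first 10 pass, the rest are throttled
```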

5. Search & Indexing Backend

Document or record search service (see the sketch after the lists below). Features

  • In-memory inverted index
  • Text search, filters, ranking
  • Optional Elasticsearch integration

Demonstrates

  • Data structures
  • Read-optimized design
  • Trade-offs between custom vs external tools
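
An illustrative in-memory inverted index (shown in Python; the data structure is the same in any language). Tokenization and scoring are deliberately naive; the README would be the place to discuss the trade-offs against Elasticsearch.

```python
from collections import defaultdict

class InvertedIndex:
    """Map each token to the set of document ids that contain it."""
    def __init__(self):
        self.postings = defaultdict(set)  # token -> {doc_id, ...}
        self.docs = {}                    # doc_id -> original text

    def add(self, doc_id, text):
        self.docs[doc_id] = text
        for token in text.lower().split():
            self.postings[token].add(doc_id)

    def search(self, query):
        # AND semantics: intersect posting lists, then rank by naive term frequency.
        tokens = query.lower().split()
        if not tokens:
            return []
        matches = set.intersection(*(self.postings.get(t, set()) for t in tokens))
        return sorted(matches, key=lambda d: -sum(self.docs[d].lower().split().count(t) for t in tokens))

index = InvertedIndex()
index.add(1, "token bucket rate limiting")
index.add(2, "rate limiting and sliding window limiting")
print(index.search("rate limiting"))  # -> [2, 1]: doc 2 mentions the query terms more often
```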

6. Distributed Configuration & Feature Flag Service

Centralized config service for other apps. Features

  • Key-value configuration store
  • Feature flags
  • Caching & refresh mechanisms

Demonstrates

  • Caching strategies
  • Consistency vs availability trade-offs
  • System design for shared services

🐹 Mid-Level Go Backend Developer (Non-Kubernetes)

1. High-Throughput Event Processing Pipeline

Multi-stage concurrent pipeline (see the sketch after this list). Features

  • HTTP/gRPC ingestion
  • Validation & transformation stages
  • Goroutines & channels
  • Worker pools, batching, backpressure
  • Graceful shutdown
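
A rough sketch of the staged pipeline with bounded queues for backpressure. It is written with Python threads for brevity; the Go version would use goroutines and channels, but the shape (ingest, validate, transform, plus a shutdown sentinel for graceful termination) is the same. The validation rule and transformation are placeholders.

```python
import queue
import threading

# Bounded queues give backpressure: a fast producer blocks when the next stage lags.
raw = queue.Queue(maxsize=100)
validated = queue.Queue(maxsize=100)
STOP = object()  # sentinel used for graceful shutdown

def ingest(events):
    for event in events:
        raw.put(event)            # blocks if the validation stage is behind
    raw.put(STOP)

def validate():
    while True:
        event = raw.get()
        if event is STOP:
            validated.put(STOP)
            break
        if isinstance(event, dict) and "user" in event:   # placeholder validation rule
            validated.put(event)

def transform_and_store(results):
    while True:
        event = validated.get()
        if event is STOP:
            break
        results.append({**event, "processed": True})       # placeholder transformation

results = []
stages = [
    threading.Thread(target=ingest, args=([{"user": i} for i in range(1000)],)),
    threading.Thread(target=validate),
    threading.Thread(target=transform_and_store, args=(results,)),
]
for stage in stages:
    stage.start()
for stage in stages:
    stage.join()
print(len(results))  # 1000
```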

2. Distributed Job Scheduler & Worker System

Async job execution platform. Features

  • Job scheduling & delayed execution
  • Retries & idempotency
  • Job states (pending, running, failed, completed)
  • Message queue or gRPC-based workers

3. In-Memory Caching Service

Redis-like cache written from scratch (see the sketch after this list). Features

  • TTL support
  • Eviction strategies (LRU/LFU)
  • Concurrent-safe access
  • Optional disk persistence
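
A compact sketch of the core data structure: LRU ordering plus per-entry TTL, with a lock for concurrent-safe access. It is shown with Python's OrderedDict for brevity; a Go implementation would typically pair a map with a doubly linked list and a sync.Mutex. Capacity and TTL are placeholder values.

```python
import threading
import time
from collections import OrderedDict

class LRUCache:
    """Tiny in-memory cache: least-recently-used eviction plus per-entry TTL."""
    def __init__(self, capacity=1024, ttl=60.0):
        self.capacity = capacity
        self.ttl = ttl
        self.data = OrderedDict()      # key -> (value, expires_at)
        self.lock = threading.Lock()

    def get(self, key):
        with self.lock:
            item = self.data.get(key)
            if item is None:
                return None
            value, expires_at = item
            if time.monotonic() > expires_at:   # lazy expiry on read
                del self.data[key]
                return None
            self.data.move_to_end(key)          # mark as most recently used
            return value

    def set(self, key, value):
        with self.lock:
            self.data[key] = (value, time.monotonic() + self.ttl)
            self.data.move_to_end(key)
            if len(self.data) > self.capacity:
                self.data.popitem(last=False)   # evict the least recently used entry

cache = LRUCache(capacity=2, ttl=30.0)
cache.set("a", 1)
cache.set("b", 2)
cache.set("c", 3)                       # evicts "a", the least recently used key
print(cache.get("a"), cache.get("c"))   # None 3
```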

4. Rate Limiting & Traffic Shaping Gateway

Reverse-proxy-style rate limiter. Features

  • Token bucket / leaky bucket
  • Circuit breakers
  • Redis-backed distributed limits

5. Log Aggregation & Query Engine

Incrementally built system: Step-by-step

  1. REST API + Postgres (store logs, query logs)
  2. Optimize for massive concurrency
  3. Replace DB with in-memory data structures
  4. Add streaming endpoints using channels & batching

🐍 Mid-Level Python Backend Developer

1. Asynchronous Task Processing System

Async job execution platform (see the sketch after this list). Features

  • Async API submission
  • Worker pool (asyncio or Celery-like)
  • Retries & failure handling
  • Job status tracking
  • Idempotency
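
A minimal asyncio sketch of the worker-pool-with-retries idea. Job statuses live in a plain dict purely for illustration (a real service would persist them in Redis or Postgres), and the randomly failing task is a stand-in for actual work.

```python
import asyncio
import random

jobs = {}  # job_id -> status; a real service would persist this

async def worker(q):
    while True:
        job_id, attempt = await q.get()
        jobs[job_id] = "running"
        try:
            await asyncio.sleep(0.01)          # placeholder for real work
            if random.random() < 0.3:          # simulate a transient failure
                raise RuntimeError("transient failure")
            jobs[job_id] = "completed"
        except RuntimeError:
            if attempt < 3:
                jobs[job_id] = "pending"
                await q.put((job_id, attempt + 1))  # retry with a bounded attempt count
            else:
                jobs[job_id] = "failed"
        finally:
            q.task_done()

async def main():
    q = asyncio.Queue()
    workers = [asyncio.create_task(worker(q)) for _ in range(4)]
    for job_id in range(20):
        jobs[job_id] = "pending"
        await q.put((job_id, 1))
    await q.join()            # wait until every job (including retries) reaches a terminal state
    for w in workers:
        w.cancel()
    print(jobs)

asyncio.run(main())
```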

2. Event-Driven Data Pipeline

Streaming data processing service. Features

  • Event ingestion
  • Validation & transformation
  • Batching & backpressure handling
  • Output to storage or downstream services

3. Distributed Rate Limiting Service

API protection service (see the sketch after this list). Steps

  • Step 1: Use an existing rate-limiting library
  • Step 2: Implement token bucket / sliding window yourself
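
For Step 2, a minimal sliding-window-log limiter in Python (the token bucket variant looks much the same; see the sketch under the Java rate-limiting project above). The window length and limit are placeholder values, and state is kept in-process rather than in Redis.

```python
import time
from collections import defaultdict, deque

WINDOW = 60.0   # seconds
LIMIT = 100     # max requests per key per window

requests = defaultdict(deque)  # key -> timestamps of recent requests

def allow(key: str) -> bool:
    """Sliding-window log: keep only timestamps inside the window, then count them."""
    now = time.monotonic()
    window = requests[key]
    while window and now - window[0] > WINDOW:   # drop timestamps that fell out of the window
        window.popleft()
    if len(window) < LIMIT:
        window.append(now)
        return True
    return False
```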

4. Search & Indexing Backend

Search system for logs or documents. Features

  • Custom indexing or Elasticsearch
  • Filtering & time-based queries
  • Read-heavy optimization

5. Configuration & Feature Flag Service

Shared configuration backend. Steps

  • Step 1: Use a caching library
  • Step 2: Implement your own cache (explain in README)

🟦 Mid-Level TypeScript Backend Developer

1. Asynchronous Job Processing System

Queue-based task execution. Features

  • BullMQ / RabbitMQ / Redis
  • Retries & scheduling
  • Status tracking

2. Real-Time Chat / Notification Service

WebSocket-based system. Features

  • Presence tracking
  • Message persistence
  • Real-time updates

3. Rate Limiting & API Gateway

API gateway with protections. Features

  • Token bucket / sliding window
  • Response caching
  • Request logging

4. Search & Filtering Engine

Search backend for products, logs, or articles. Features

  • In-memory index or Elasticsearch
  • Pagination & sorting

5. Feature Flag & Configuration Service

Centralized config management. Features

  • Versioning
  • Rollout strategies
  • Caching

🟨 Mid-Level Node.js Backend Developer

1. Async Task Queue System

Background job processor. Features

  • Bull / Redis / RabbitMQ
  • Retries & scheduling
  • Status APIs

2. Real-Time Chat / Notification Service

Socket-based system. Features

  • Rooms
  • Presence tracking
  • Message persistence

3. Rate Limiting & API Gateway

Traffic control service. Features

  • Per-user/API-key limits
  • Logging
  • Optional caching

4. Search & Indexing Backend

Indexing & querying service.


5. Feature Flag / Configuration Service

Shared backend for app configs.


⚛️ Mid-Level Frontend Developer (React / Next.js)

1. Dynamic Analytics Dashboard

Interactive data visualization app. Features

  • Charts & tables
  • Filters & live updates
  • React Query / Redux / Zustand
  • Responsive layouts

2. E-Commerce Store

Full shopping experience. Features

  • Product listings
  • Search, filters, sorting
  • Cart & checkout
  • SSR/SSG with Next.js

3. Real-Time Chat / Collaboration App

Live multi-user UI. Features

  • WebSockets or Firebase
  • Presence indicators
  • Real-time updates

4. CMS / Blogging Platform

SEO-focused content app. Features

  • SSR for SEO
  • Markdown or API-based content
  • Admin editing interface

5. Personalized Analytics / Recommendation UI

Data-heavy frontend. Features

  • Filtering & lazy loading
  • Large dataset handling
  • User-specific insights

6. AI Chatbot App — “My House Plant Advisor”

LLM-powered assistant with production-quality UX. Core Features

  • Chat interface with real-time updates
  • Input normalization & validation
  • Offensive content filtering
  • Unsupported query detection
  • Rate limiting (per user)
  • Caching recent queries
  • Conversation history per session
  • Graceful fallbacks & error handling

Advanced Features

  • Prompt tuning (beginner vs expert users)
  • Structured advice formatting (cards, bullets)
  • Local LLM support
  • Analytics dashboard (popular questions)
  • Voice input/output (speech-to-text, TTS)

✅ Final Advice

You do NOT need to build everything. Instead, pick 1–2 strong projects per role and focus on depth:

  • Explain the architecture clearly
  • Document trade-offs (why you chose X over Y)
  • Show incremental improvements
  • Prove you understand why, not just how

📌 Portfolio Quality Signals (Very Important)

  • Have a large, organic commit history → A single commit (or just a handful) is a strong indicator of copy-paste work.
  • Prefer 3–5 complex projects over 20 simple ones → Many tiny projects often signal shallow understanding.

🎯 Why This Helps in Interviews

Working on serious projects gives you:

  • Real hands-on practice
  • Concrete anecdotes (stories you can tell in interviews)
  • A safe way to learn technologies you don’t fully know yet
  • Better focus and long-term learning discipline
  • A portfolio that can be ported to another tech stack later (Java → Go, Node → Python, etc.)

🎥 Demo & Documentation Best Practices

  • Create a 2–3 minute demo / walkthrough video
      • Show the app running
      • Explain what problem it solves
      • Highlight one or two technical decisions
  • At the top of every README:
      • Add a plain-English paragraph explaining what the project does
      • Assume the reader is a complete beginner

🤝 Open Source & Personal Projects (Interview Signal)

Always mention that you have contributed to Open Source or built personal projects.

  • Shows team spirit
  • Shows you can read, understand, and navigate an existing codebase
  • Signals that you can onboard into a real-world repository
  • Makes you sound like an engineer, not just a tutorial follower

[Notes] How to Build a Portfolio That Reflects Your Real Skills

Google SRE NYC proudly announces our last Google SRE NYC Tech Talk for 2025.

This event is co-sponsored by sentry.io. Thank you Sentry for your partnership!

Let's farewell 2025 with three amazing interactive short talks on Site Reliability and DevOps topics! As always the event will include an opportunity to mingle with the speakers and attendees over some light snacks and beverages after the talks.

The Meetup will take place on Tuesday, 16th of December 2025 at 6:00 PM at our Chelsea Markets office in NYC. The doors will open at 5:30 pm. Please RSVP only if you're able to attend in person; there will be no live streaming.

When RSVP'ing to this event, please enter your full name exactly as it appears on your government issued ID. You will be required to present your ID at check in.

Agenda:

Paul Jaffre - Senior Developer Experience Engineer, sentry.io
One Trace to Rule Them All: Unifying Sentry Errors with OpenTelemetry tracing

SREs face the challenge of operating reliable observability infrastructure while avoiding vendor lock-in from proprietary APM (Application Performance Monitoring) solutions. OpenTelemetry has become the standard for instrumenting applications, allowing teams to collect traces, metrics, and logs. But raw telemetry data isn't enough. SREs need tools to visualize, debug, and respond to production incidents quickly. Sentry now supports OTLP, enabling teams to send OpenTelemetry data directly to Sentry for analysis. This talk covers how Sentry's OTLP support works in practice: connecting frontend and backend traces across services, correlating logs with distributed traces, and using tools to identify slow queries and performance bottlenecks. We'll discuss the practical benefits for SREs, like faster incident resolution, better cross-team debugging, and the flexibility to change observability backends without re-instrumenting code.

Paul’s background spans engineering, product management, UX design, and open source. He has a soft spot for dev tools and loses sleep over making things easy to understand and use. Paul has a dynamic professional background, from strategy to stability. His time at Krossover Intelligence established a strong foundation by blending Product Management with hands-on development, and he later focused on core reliability at MakerBot, where he implemented automated end-to-end testing and drove performance improvements. He then extended this expertise in stability and scale at Cypress.io, where he served as a Developer Experience Engineer, focusing on improving workflow, contribution, and usability for their widely adopted open-source community.

Thiara Ortiz - Cloud Gaming SRE Manager, Netflix
Managing Black Box Systems

SREs often face ambiguity when managing black box systems (LLMs, Games, Poorly Understood Dependencies). We will discuss how Netflix monitors service health as black boxes using multiple measurement techniques to understand system behavior, aligning with the need for robust observability tools. These strategies are crucial for system reliability and user experience. By proactively identifying and resolving issues, we ensure a smoother playback experience and maintain user trust, even as the platform continues to evolve and gain maturity. The principles shared within this talk can be expanded to other applications such as AI reliability in data quality and model deployments.

Thiara has worked at some of the largest internet companies in the world, Meta and Netflix. During her time at Meta, Thiara found a passion for distributed systems and bringing new hardware into production. Always curious to explore new solutions to complex problems, Thiara developed Fleet Scanner, internally known as Lemonaid, to perform memory, compute, and storage benchmarks on each Meta server in production. This service runs on over 5 million servers and continues to be utilized at Meta. Since Meta, Thiara has been working at Netflix as a Senior CDN Reliability engineer, and now, Cloud Gaming SRE Manager. When incidents occur and Netflix's systems do not behave as expected, Thiara can be found working and engaging the necessary teams to remediate these issues.

Andrew Espira - Platform and Site Reliability Engineer, Founding Engineer, Kustode
ML-Powered Predictive SRE: Using Behavioral Signals to Prevent Cluster Inefficiencies Before They Impact Production

SREs managing ML clusters often discover resource inefficiencies and queue bottlenecks only after they've impacted production services. This talk presents a machine learning approach to predict these issues before they occur, transforming SRE from reactive firefighting to proactive system optimization. We demonstrate how to build predictive models using production cluster traces that identify two critical failure modes: (1) GPU under-utilization relative to requested resources, and (2) abnormal queue wait times that indicate impending service degradation. SRE practitioners will learn how to extract early warning indicators from standard cluster logs, build ML models that provide actionable confidence scores for operational decisions, and take practical steps to integrate predictive analytics into existing SRE toolchains to achieve a 50%+ reduction in resource waste and queue-related incidents. This talk bridges the gap between traditional SRE observability and modern predictive analytics, showing how teams can evolve from reactive monitoring to intelligent, forward-looking reliability engineering.

Andrew has over 8 years of experience architecting and maintaining large-scale distributed systems. He is the Founding Engineer of Kustode (kustode.com), where he develops cutting-edge reliability and observability solutions for modern infrastructure in the insurance and health care solutions space. Currently pursuing graduate studies in Data Science at Saint Peter's University, he specializes in the intersection of reliability engineering and artificial intelligence. His research focuses on applying machine learning to operational challenges, with publications in peer-reviewed venues including ScienceDirect. He's passionate about making complex systems more predictable and maintainable through data-driven approaches. When not optimizing cluster performance or building the next generation of observability tools, Andrew enjoys contributing to open-source projects and mentoring early-career engineers in the SRE community.

Our Tech Talks series is for professional development and networking: no recruiters, sales, or press, please! Google is committed to providing a harassment-free and inclusive conference experience for everyone, and all participants must follow our Event Community Guidelines. The event will be photographed and video recorded.

Event space is limited! A reservation is required to attend. Reserve your spot today and share the event details with your SRE/DevOps friends 🙂

Google NY Site Reliability Engineering (SRE) Tech Talks, 16 Dec 2025

Come get the ⚡️AI Spark⚡️ with NYC Women in Machine Learning and Data Science! We are wrapping up the year with an evening of inspiration, demos, learning and connection.

This week is also the New York AI Summit (12/10-12/11) and we have 1 free pass to give to one of our members (provided by NYAI). RSVP to our demo night by 5pm today to be entered into the raffle to win the ticket. One RSVP'd member will be selected by 5pm and will be emailed their ticket. NYAI is our partner for the New York AI Summit.

What to Expect:

  • 6:00pm - 6:30pm: Warm-up & Introduction (30 min)
      • Welcome remarks and overview of the evening
      • Brief introductions and housekeeping
  • 6:30pm - 6:45pm: Demo 1 (15 min: 10 min demo + 5 min Q&A)
      • Presenter: Gaëlle Agahozo (in-person)
      • Demo project: Using AI to sell branded clothes from Rwanda
  • 6:45pm - 7:00pm: Demo 2 (15 min: 10 min demo + 5 min Q&A)
      • Presenter: Amita Shukla (in-person)
      • Demo project: To be added soon.
  • 7:00pm - 7:15pm: Demo 3 (15 min: 10 min demo + 5 min Q&A)
      • Presenter: Neelanjana Dutta (remote)
      • Demo project: Juni - AI-powered parenting app
  • 7:15pm - 7:30pm: Demo 4 (15 min: 10 min demo + 5 min Q&A)
      • Presenter: Trupeer.ai (remote) (tentative)
      • Demo project: AI video platform for your software lifecycle
  • 7:30pm - 8:00pm: Open Networking & Conversations (30 min)
      • Continue discussions with presenters and attendees
      • Exchange contacts and ideas

We'll do a couple AI demos to spark ideas and conversation, as well as general networking with fellow women in AI, Machine Learning and Data Science. Whether you're exploring AI casually or building something ambitious, this is a relaxed, welcoming space to learn from others and share what you're working on.

Bring a demo if you have one! This could be a side project, a startup, or an AI/ML project at your company -- demos won't be recorded. It doesn't have to be polished! We're demoing for support and community.

We will have time for ad hoc demos, but if you want to save a dedicated spot early on, let us know what you are demoing here! This event is hosted by our wonderful partners at BrainStation.

BrainStation is a global leader in digital skills training and workforce transformation, offering certificate courses and bootcamps in disciplines such as Data Science, UX Design, Digital Marketing, and Product Management. In addition to education, BrainStation hosts a wide range of industry events, panel discussions, and thought leadership sessions that connect professionals, hiring partners, and industry leaders. With campuses in major cities and a strong online presence, BrainStation empowers individuals and organizations to thrive in the digital economy.

Stay connected:

AI Demo Night: Learn and Connect

From Tuesday, December 9 (11:30 - 18:00) to Wednesday, December 10, 2025 (08:45 – 18:00), Paris becomes the epicenter of context-integrated GraphRAG. Join top builders, researchers, and engineers to prototype the next generation of Model Context Protocol integrations for graphRAG databases — and explore how structured memory can unlock reasoning at scale.

Space is limited and we’re screening for hands-on talent. If you work with LLMs, RAG, or graph-based reasoning, secure your spot early — once full, the waitlist opens.

The Hackathon is part of Generation AI Paris 2025 — a 3-day explosion of innovation with 8,000+ attendees, 300+ sessions, and 100+ speakers. You’re getting the full package: keynotes, track sessions, workshops… and all the networking breakfasts and lunches. Golden Pass Value: Your acceptance grants you a free pass to GenerationAI, apidays, AND GreenIO Paris—that’s 4 conferences normally valued at 999€, for the price of zero: To access the apidays conference component, use this promo link for a Regular Ticket.

Model Context Protocol Integrations for GraphRAG

We’re bridging the gap between language models and structured data systems. Expect a full day of experimentation around:

  • Context persistence across graph nodes
  • Dynamic retrieval and reasoning from structured sources
  • Hybrid memory architectures (Graph + RAG)
  • MCP interfaces for multi-agent collaboration and explainability
  • Visual query builders and interpretable context trails

Whether you’re a backend engineer, data scientist, or AI tinkerer, bring your boldest ideas — we’ll connect you with mentors and teammates to make them real.

We’ll reward projects that are working, elegant, and context-smart. Extra points for:

  • Seamless MCP ↔ Graph integration
  • Context-aware reasoning or dynamic retrieval
  • Real-world usefulness and open-source potential
  • Clear UX for developers and data teams

🏆 What You Win: We recognize excellence in GraphRAG and MCP integration!

  1. Scalingo Credits: The overall winning team and the winners of the Scalingo track each receive a €270 coupon code for the PaaS.
  2. The Ultimate Pass: All attendees gain a free “Golden Pass” granting access to GenerationAI, apidays, and GreenIO Paris (a €999 value).

Come build smart context and walk away with credits and conference access! 🧠💳

Sponsors

Neo4j - Neo4j is a leading graph database platform that empowers AI developers to harness the power of connected data. Their technology enables efficient storage, analysis, and visualization of complex data relationships, integrating seamlessly with popular AI and machine learning tools. Neo4j’s features, including native graph storage and advanced algorithms, make it ideal for powering AI applications like knowledge graphs and recommendation systems, helping developers extract deeper insights from their data.

Scalingo is a European Platform as a Service (PaaS) designed to simplify cloud hosting and database management for developers. It enables rapid deployment, management, and scaling of web applications, supporting over 50 runtimes and integrating seamlessly with major databases like PostgreSQL, MySQL, and MongoDB. With features like auto-scaling, continuous deployment, real-time monitoring, and robust security, Scalingo empowers tech teams to focus on coding rather than infrastructure. New users can enjoy a 30-day free trial to experience its developer-friendly environment and top-notch support.

We review applications daily — keep an eye on your inbox (and spam folder). If your plans change, please release your spot so someone else can join.


Hit Register above. Share your GitHub / LinkedIn / X handle and tell us what you’ll build. Let’s connect LLMs and Graphs — and prototype the next leap in contextual intelligence, powered by Scalingo and Neo4j.

Turn Your Knowledge into an LLM‑Ready API - Hackathon @ Generation AI

In this second part of my three-part series (catch Part I via episode 182), I dig deeper into the key idea that sales in commercial data products can be accelerated by designing for actual user workflows—vs. going wide with a “many-purpose” AI and analytics solution that “does more,” but is misaligned with how users’ most important work actually gets done.

To explain this, I will explain the concept of user experience (UX) outcomes, and how building your solution to enable these outcomes may be a dependency for you to get sales traction, and for your customer to see the value of your solution. I also share practical steps to improve UX outcomes in commercial data products, from establishing a baseline definition of UX quality to mapping out users’ current workflows (and future ones, when agentic AI changes their job). Finally, I talk about how approaching product development as small “bets” helps you build small, and learn fast so you can accelerate value creation. 

Highlights/ Skip to:

  • Continuing the journey: designing for users, workflows, and tasks (00:32)
  • How UX impacts sales—not just usage and adoption (02:16)
  • Understanding how you can leverage users’ frustrations and perceived risks as fuel for building an indispensable data product (04:11)
  • Definition of a UX outcome (07:30)
  • Establishing a baseline definition of product (UX) quality, so you know how to observe and measure improvement (11:04)
  • Spotting friction and solving the right customer problems first (15:34)
  • Collecting actionable user feedback (20:02)
  • Moving users along the scale from frustration to satisfaction to delight (23:04)
  • Unique challenges of designing B2B AI and analytics products used for decision intelligence (25:04)

Quotes from Today’s Episode One of the hardest parts of building anything meaningful, especially in B2B or data-heavy spaces, is pausing long enough to ask what the actual ‘it’ is that we’re trying to solve.

People rush into building the fix, pitching the feature, or drafting the roadmap before they’ve taken even a moment to define what the user keeps tripping over in their day-to-day environment.

And until you slow down and articulate that shared, observable frustration, you’re basically operating on vibes and assumptions instead of behavior and reality.

What you want is not a generic problem statement but an agreed-upon description of the two or three most painful frictions that are obvious to everyone involved, frictions the user experiences visibly and repeatedly in the flow of work.

Once you have that grounding, everything else (prioritization, design decisions, sequencing, even organizational alignment) suddenly becomes much easier, because you’re no longer debating abstractions; you’re working against the same measurable anchor.

And the irony is, the faster you try to skip this step, the longer the project drags on, because every downstream conversation becomes a debate about interpretive language rather than a conversation about a shared, observable experience.

__

Want people to pay for your product? Solve an observable problem—not a vague information or data problem. What do I mean?

“When you’re trying to solve a problem for users, especially in analytical or AI-driven products, one of the biggest traps is relying on interpretive statements instead of observable ones.

Interpretive phrasing like ‘they’re overwhelmed’ or ‘they don’t trust the data’ feels descriptive, but it hides the important question of what, exactly, we can see them doing that signals the problem.

If you can’t film it happening, if you can’t watch the behavior occur in real time, then you don’t actually have a problem definition you can design around.

Observable frustration might be the user jumping between four screens, copying and pasting the same value into different systems, or re-running a query five times because something feels off even though they can’t articulate why.

Those concrete behaviors are what allow teams to converge and say, ‘Yes, that’s the thing, that is the friction we agree must change,’ and that shift from interpretation to observation becomes the foundation for better design, better decision-making, and far less wasted effort.

And once you anchor the conversation in visible behavior, you eliminate so many circular debates and give everyone, from engineering to leadership, a shared starting point that’s grounded in reality instead of theory."

__

One of the reasons that measuring the usability/utility/satisfaction of your product’s UX might seem hard is that you don’t have a baseline definition of how satisfactory (or not) the product is right now. As such, it’s very hard to tell if you’re just making product changes—or you’re making improvements that might make the product worth paying for at all, worth paying more for, or easier to buy.

"It’s surprisingly common for teams to claim they’re improving something when they’ve never taken the time to document what the current state even looks like. If you want to create a meaningful improvement, something a user actually feels, you need to understand the baseline level of friction they tolerate today, not what you imagine that friction might be.

Establishing a baseline is not glamorous work, but it’s the work that prevents you from building changes that make sense on paper but do nothing to the real flow of work. When you diagram the existing workflow, when you map the sequence of steps the user actually takes, the mismatches between your mental model and their lived experience become crystal clear, and the design direction becomes far less ambiguous.

That act of grounding yourself in the current state allows every subsequent decision, prioritizing fixes, determining scope, measuring progress, to be aligned with reality rather than assumptions.

And without that baseline, you risk designing solutions that float in conceptual space, disconnected from the very pains you claim to be addressing."

__

Prototypes are a great way to learn—if you’re actually treating them as a means to learn, and not a product you intend to deliver regardless of the feedback customers give you. 

"People often think prototyping is about validating whether their solution works, but the deeper purpose is to refine the problem itself.

Once you put even a rough prototype in front of someone and watch what they do with it, you discover the edges of the problem more accurately than any conversation or meeting can reveal.

Users will click in surprising places, ignore the part you thought mattered most, or reveal entirely different frictions just by trying to interact with the thing you placed in front of them. That process doesn’t just improve the design, it improves the team’s understanding of which parts of the problem are real and which parts were just guesses.

Prototyping becomes a kind of externalization of assumptions, forcing you to confront whether you’re solving the friction that actually holds back the flow of work or a friction you merely predicted.

And every iteration becomes less about perfecting the interface and more about sharpening the clarity of the underlying problem, which is why the teams that prototype early tend to build faster, with better alignment, and far fewer detours."

__

Most founders and data people tend to measure UX quality by “counting usage” of their solution. Tracking usage stats, analytics on sessions, etc. The problem with this is that it tells you nothing useful about whether people are satisfied (“meets spec”) or delighted (“a product they can’t live without”). These are product metrics—but they don’t reflect how people feel.

There are better measurements to use for evaluating users’ experience that go beyond “willingness to pay.” 

Payment is great, but in B2B products, buyers aren’t always users—and we’ve all bought something based on the promise of what it would do for us, but the promise fell short.

"In B2B analytics and AI products, the biggest challenge isn’t complexity, it’s ambiguity around what outcome the product is actually responsible for changing.

Teams often define success in terms of internal goals like ‘adoption,’ ‘usage,’ or ‘efficiency,’ but those metrics don’t tell you what the user’s experience is supposed to look like once the product is working well.

A product tied to vague business outcomes tends to drift because no one agrees on what the improvement should feel like in the user’s real workflow.

What you want are visible, measurable, user-centric outcomes, outcomes that describe how the user’s behavior or experience will change once the solution is in place, down to the concrete actions they’ll no longer need to take.

When you articulate outcomes at that level, it forces the entire organization to align around a shared target, reduces the scope bloat that normally plagues enterprise products, and gives you a way to evaluate whether you’re actually removing friction rather than just adding more layers of tooling.

And ironically, the clearer the user outcome is, the easier it becomes to achieve the business outcome, because the product is no longer floating in abstraction, it’s anchored in the lived reality of the people who use it."

Links

Listen to part one: Episode 182
Schedule a Design-Eyes Assessment with me and get clarity, now.

AI/ML Analytics
Experiencing Data w/ Brian T. O’Neill (AI & data product management leadership—powered by UX design)

This session explores how behavioral design can be effectively delivered through AI-powered platforms to create real-time, personalized journeys at scale.

Join our Behavioral Analyst, Jessica Nicole, as she unpacks how tools like the ecosystem.Ai Prediction Platform enable dynamic experimentation and optimization, using methods such as multi-armed bandits and timely triggers to deliver interactions tailored to individual behavior.

We’ll examine real-world case studies across industries and show how frameworks like EAST and the Make-It Toolkit make it possible to automate proven behavioral mechanisms without needing a full UX team.

Understanding Behavioral Design for AI Systems

End of Year Special: UX Design for Power BI + Building Data Foundations for GenAI!

We’re rounding off the year with a big one 🎉 Join us for a special in-person & hybrid session at Platform (Leeds Train Station) hosted by Hopton Analytics — complete with food, drinks, networking and two fantastic talks that bring together practical Power BI design, data strategy, and GenAI enablement.


Session 1:

Design That Works: How UX Principles Transform Power BI Reports
Speaker: Simon Devine, Director at Hopton Analytics

Good reporting isn’t just about visuals — it’s about communication. In this session, you’ll learn how UX thinking shapes better Power BI dashboards. We’ll explore:

  • How users actually read and interpret data
  • Practical layout & spacing techniques
  • Effective typography and colour usage
  • How to remove clutter and guide decision-making
  • A real-time report makeover to demonstrate the transformation

If you've ever thought “This report makes sense… but it doesn’t feel right” — this session is for you.


Session 2:

Building Solid Data Foundations for GenAI
Speaker: Maryleen Amaizu, Azure Data Platform Consultant

AI works best when the data behind it is strong, reliable, and well-structured. Maryleen will walk through how to architect the underlying data environment needed to support Retrieval-Augmented Generation (RAG) for business use-cases. This session will include a demo using:

  • AWS (Hosted GenAI Models)
  • Python (RAG Logic)
  • UI Layer for visualisation

Learn how to design systems that allow GenAI to actually deliver value rather than just buzzwords.


Food + Drinks + Great People | In Person & Online 🎥 Whether you love geeking out over a semantic model, pushing Power BI design to the next level, or exploring GenAI — this is a must-attend event to round off the year.

RSVP now to secure your in-person spot — space is limited!

End of Year Special: UX Design for PBI & Building Data Foundations for GenAI

Journey to Agentic Report Development

Embark on the evolution from traditional report building to autonomous development! Join Mihaly Kavasi, Group Manager Delivery Lead and Trainer at Avanade, as he guides you through the transformative landscape of AI-powered Power BI development. In this forward-looking session, you'll discover:

  • How the new PBIR format unlocks unprecedented possibilities for AI-assisted development
  • Progressive techniques from AI code completion to fully agentic development workflows
  • Live demonstrations of Claude Code's agentic capabilities in report creation
  • Real-world implementation strategies and current challenges in autonomous development

Experience live demos showcasing the spectrum of AI assistance—from intelligent code suggestions to autonomous agentic tools that can independently architect and build complex reports. As Power BI embraces code-first methodologies, position yourself at the forefront of this revolution. Learn how agentic development tools are reshaping the entire development lifecycle, dramatically reducing time-to-delivery while elevating report sophistication and maintainability. This isn't just about faster coding—it's about reimagining what's possible when AI becomes your development partner.

Mihaly discovered Power BI 8 years ago and has since become an expert on Power Platform.

His deep understanding of business processes and decision drivers makes him a valuable adviser for customers looking to derive the most value from their assets. Mihaly helps customers define optimal governance structure and implement the right mix of governed self-service BI, as well as advises them on security and performance optimization and managing large scale deployments. He nurtures the next generation of analysts with an emphasis on user needs and UX.

Certified Trainer since 2018, Fast Track Recognized Solution Architect for Power BI since 2021.

Agentic PowerBI Report Development | Mihaly Kavasi

Building B2B analytics and AI tools that people will actually pay for and use is hard. The reality is, your product won’t deliver ROI if no one’s using it. That’s why first principles thinking says you have to solve the usage problem first.

In this episode, I’ll explain why the key to user adoption is designing with the flow of work—building your solution around the natural workflows of your users to minimize the behavior changes you’re asking them to make. When users clearly see the value in your product, it becomes easier to sell and removes many product-related blockers along the way.

We’ll explore how product design impacts sales, the difference between buyers and users in enterprise contexts, and why challenging the “data/AI-first” mindset is essential. I’ll also share practical ways to align features with user needs, reduce friction, and drive long-term adoption and impact.

If you’re ready to move beyond the dashboard and start building products that truly fit the way people work, this episode is for you.

Highlights/Skip to: 

  • The core argument: why solving for user adoption first helps demonstrate ROI and facilitate sales in B2B analytics and AI products (1:34)
  • How showing the value to actual end users—not just buyers—makes it easier to sell your product (2:33)
  • Why designing for outcomes instead of outputs (dashboards, etc.) leads to better adoption and long-term product value (8:16)
  • How to “see” beyond users’ surface-level feature requests and solutions so you can solve for the actual, unspoken need—leading to an indispensable product (10:23)
  • Reframing feature requests as design-actionable problems (12:07)
  • Solving for unspoken needs vs. customer-requested features and functions (15:51)
  • Why “disruption” is the wrong approach for product development (21:19)

Quotes: 

“Customers’ tolerance for poorly designed B2B software has decreased significantly over the last decade. People now expect enterprise tools to function as smoothly and intuitively as the consumer apps they use every day. 

Clunky software that slows down workflows is no longer acceptable, regardless of the data it provides. If your product frustrates users or requires extra effort to achieve results, adoption will suffer.

Even the most powerful AI or analytics engine cannot compensate for a confusing or poorly structured interface. Enterprises now demand experiences that are seamless, efficient, and aligned with real workflows. 

This shift means that product design is no longer a secondary consideration; it is critical to commercial success.  Founders and product leaders must prioritize usability, clarity, and delight in every interaction. Software that is difficult to use increases the risk of churn, lengthens sales cycles, and diminishes perceived value. Products must anticipate user needs and deliver solutions that integrate naturally into existing workflows. 

The companies that succeed are the ones that treat user experience as a strategic differentiator. Ignoring this trend creates friction, frustration, and missed opportunities for adoption and revenue growth. Design quality is now inseparable from product value and market competitiveness.  The message is clear: if you want your product to be adopted, retain customers, and win in the market, UX must be central to your strategy.”

“No user really wants to ‘check a dashboard’ or use a feature for its own sake. Dashboards, charts, and tables are outputs, not solutions. What users care about is completing their tasks, solving their problems, and achieving meaningful results. 

Designing around workflows rather than features ensures your product is indispensable. A workflow-first approach maps your solution to the actual tasks users perform in the real world. 

When we understand the jobs users need to accomplish, we can build products that deliver real value and remove friction. Focusing solely on features or data can create bloated products that users ignore or struggle to use. 

Outputs are meaningless if they do not fit into the context of a user’s work. The key is to translate user needs into actionable workflows and design every element to support those flows. 

This approach reduces cognitive load, improves adoption, and ensures the product's ROI is realized. It also allows you to anticipate challenges and design solutions that make workflows smoother, faster, and more efficient. 

By centering design on actual tasks rather than arbitrary metrics, your product becomes a tool users can’t imagine living without. Workflow-focused design directly ties to measurable outcomes for both end users and buyers. It shifts the conversation from features to value, making adoption, satisfaction, and revenue more predictable.”

“Just because a product is built with AI or powerful data capabilities doesn’t mean anyone will adopt it. Long-term value comes from designing solutions that users cannot live without. It’s about creating experiences that take people from frustration to satisfaction to delight. 

Products must fit into users’ natural workflows and improve their performance, efficiency, and outcomes. Buyers' perceived ROI is closely tied to meaningful adoption by end users. If users struggle, churn rises, and financial impact is diminished, regardless of technical sophistication. 

Designing for delight ensures that the product becomes a positive force in the user’s daily work. It strengthens engagement, reduces friction, and builds customer loyalty. 

High-quality UX allows the product to demonstrate value automatically, without constant explanations or hand-holding. Delightful experiences encourage advocacy, referrals, and easier future sales. 

The real power of design lies in aligning technical capabilities with human behavior and workflow. 

When done correctly, this approach transforms a tool into an indispensable part of the user’s job and a demonstrable asset for the business. 

Focusing on usability, satisfaction, and delight creates long-term adoption and retention, which is the ultimate measure of product success.”

“Your product should enter the user’s work stream like a raft on a river, moving in the same direction as their workflow. Users should not have to fight the current or stop their flow to use your tool. 

Introducing friction or requiring users to change their behavior increases risk, even if the product delivers ROI. The more naturally your product aligns with existing workflows, the easier it is to adopt and the more likely it is to be retained. 

Products that feel intuitive and effortless become indispensable, reducing conversations about usability during demos. By matching the flow of work, your solution improves satisfaction, accelerates adoption, and enhances perceived value. 

Disrupting workflows without careful observation can create new problems, frustrate users, and slow down sales. The goal is to move users from frustration to satisfaction to delight, all while achieving the intended outcomes. 

Designing with the flow of work ensures that every feature, interface element, and interaction fits seamlessly into the tasks users already perform. It allows users to focus on value instead of figuring out how to use the product. 

This alignment is key to unlocking adoption, retaining customers, and building long-term loyalty. 

Products that resist the natural workflow may demonstrate ROI on paper but fail in practice due to friction and low engagement. 

Success requires designing a product that supports the user’s journey downstream without interruption or extra effort. 

When you achieve this, adoption becomes easier, sales conversations smoother, and long-term retention higher.”

AI/ML Analytics Dashboard
Brian T. O’Neill – host, Lucas Thelosen – guest @ Gravity

On today's Promoted Episode of Experiencing Data, I’m talking with Lucas Thelosen, CEO of Gravity and creator of Orion, an AI analyst transforming how data teams work. Lucas was head of professional services (PS) for Looker, and eventually became Head of Product for Google’s Data and AI Cloud prior to starting his own data product company. We dig into how his team built Orion, the challenge of keeping AI accurate and trustworthy when doing analytical work, and how they’re thinking about the balance of human control with automation when their product acts as a force multiplier for human analysts.

In addition to talking about the product, we also talk about how Gravity arrived at specific enough use cases for this technology that a market would be willing to pay for, and how they’re thinking about pricing in today’s more “outcomes-based” environment. 

Incidentally, one thing I didn’t know when I first agreed to consider having Gravity and Lucas on my show was that Lucas has been a long-time proponent of data product management and operating with a product mindset. In this episode, he shares the “ah-hah” moment where things clicked for him around building data products in this manner. Lucas shares how pivotal this moment was for him, and how it helped accelerate his career from Looker to Google and now Gravity.

If you’re leading a data team, are a forward-thinking CDO, or are interested in commercializing your own analytics/AI product, my chat with Lucas should inspire you!

Highlights / Skip to:

Lucas’s breakthrough came when he embraced a data product management mindset (02:43)
How Lucas thinks about Gravity as being the instrumentalists in an orchestra, conducted by the user (4:31)
Finding product-market fit by solving for a common analytics pain point (8:11)
Analytics product and dashboard adoption challenges: why dashboards die and thinking of analytics as changing the business gradually (22:25)
What outcome-based pricing means for AI and analytics (32:08)
The challenge of defining guardrails and ethics for AI-based analytics products [just in case somebody wants to “fudge the numbers”] (46:03)
Lucas’ closing thoughts about what AI is unlocking for analysts and how to position your career for the future (48:35)

Special Bonus for DPLC Community Members

Are you a member of the Data Product Leadership Community? After our chat, I invited Lucas to come give a talk about his journey of moving from “data” to “product” and adopting a producty mindset for analytics and AI work. He was more than happy to oblige. Watch for this in late 2025/early 2026 on our monthly webinar and group discussion calendar.

Note: today’s episode is one of my rare Promoted Episodes. Please help support the show by visiting Gravity’s links below:

Quotes from Today’s Episode

“The whole point of data and analytics is to help the business evolve. When your reports make people ask new questions, that’s a win. If the conversations today sound different than they did three months ago, it means you’ve done your job, you’ve helped move the business forward.” — Lucas

“Accuracy is everything. The moment you lose trust, the business, the use case, it's all over. Earning that trust back takes a long time, so we made accuracy our number one design pillar from day one.” — Lucas 

“Language models have changed the game in terms of scale. Suddenly, we’re facing all these new kinds of problems, not just in AI, but in the old-school software sense too. Things like privacy, scalability, and figuring out who’s responsible.” — Brian

“Most people building analytics products have never been analysts, and that’s a huge disadvantage. If data doesn’t drive action, you’ve missed the mark. That’s why so many dashboards die quickly.” — Lucas

“Re: collecting feedback so you know if your UX is good: I generally agree that qualitative feedback is the best place to start, not analytics [on your analytics!]. Especially in UX, analytics measure usage aspects of the product, not the subjective human experience. Experience is a collection of feelings and perceptions about how something went.” — Brian

Links

Gravity: https://www.bygravity.com
LinkedIn: https://www.linkedin.com/in/thelosen/
Email Lucas and team: [email protected]

AI/ML Analytics Cloud Computing Dashboard Looker
Experiencing Data w/ Brian T. O’Neill (AI & data product management leadership—powered by UX design)
AI Agent Meetup NYC 2025-09-30 · 21:30

AI Agent Meetup NYC: Deep Research for Finance, Education AI & the Need for Speed

This AI Alliance event is sponsored by SingleStore.

The NYC AI Agent Meetup is a place for AI Agent Developers, Engineers, UX, Ops, and Applied Researchers exploring and leading the evolution of AI agents. Whether you're building autonomous systems, experimenting with LLM-powered assistants, or integrating AI agents into real-world applications, this meetup is the place to discover, share insights, and collaborate with your peers. Join us for talks, demos, discussions, and networking with like-minded innovators shaping the next generation of AI. The theme is Agentic AI, including Deep Research for Finance, Education AI, and Speed.

Details and RSVP: https://luma.com/a2jxf2ck

AI Agent Meetup NYC
[AI Alliance] AI Agent Meetup Paris #2 – Open Data for Open Models and Agents

This AI Alliance event is sponsored by Ekimetrics & IBM.

The Paris AI Agent Meetup is a community for AI Agent Developers, Engineers, UX, Ops, and Applied Researchers exploring and leading the evolution of AI agents. Whether you're building autonomous systems, experimenting with LLM-powered assistants, or integrating AI agents into real-world applications, this meetup is the place to discover, share insights, and collaborate with your peers. Join us for talks, demos, discussions and networking with like-minded innovators shaping the next generation of AI.

Details and RSVP: https://luma.com/7hy1tn7j

[AI Alliance] AI Agent Meetup Paris #2 – Open Data for Open Models and Agents
Blazor Day 2025
Blazor Day 2025