Main-stage program for Small Data SF featuring 15 talks, a fireside chat, and a closing panel on data minimalism.
Sessions & talks
Showing 1–4 of 4 · Newest first
Directed Acyclic Graphs (DAGs) are the foundation of most orchestration frameworks. But what happens when you allow an LLM to act as the router? Acyclic graphs become cyclic, which means you have to design for the challenges that come with all that extra power. We'll cover the ins and outs of agentic applications and how best to use them in your work as a data practitioner or developer building today.
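The core shift can be sketched in a few lines of Python. This is a hedged illustration, not any framework's API: `fake_llm_router` is a hypothetical stub standing in for a real model call, and the iteration budget shows the kind of guard a cyclic graph needs that a static DAG does not.

```python
# Minimal sketch of an LLM-as-router loop. A static DAG would run each node
# exactly once; letting the "LLM" choose the next node re-introduces cycles,
# so we guard against infinite loops with an iteration budget.

def fake_llm_router(state: dict) -> str:
    """Stand-in for an LLM deciding the next node from the current state."""
    if state["quality"] < 0.8:
        return "revise"       # cycle back and try again
    return "done"

def revise(state: dict) -> dict:
    state["quality"] += 0.3   # pretend each pass improves the draft
    state["attempts"] += 1
    return state

def run(max_iters: int = 5) -> dict:
    state = {"quality": 0.2, "attempts": 0}
    for _ in range(max_iters):          # hard budget: cycles need a stop condition
        if fake_llm_router(state) == "done":
            break
        state = revise(state)
    return state

result = run()
print(result)  # quality climbs past 0.8 after two revision passes
```

The `max_iters` cap is one answer to the "extra power" problem: once a graph can loop, something outside the model has to decide when to stop.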
➡️ Follow Us LinkedIn: https://www.linkedin.com/company/small-data-sf/ X/Twitter : https://twitter.com/smalldatasf Website: https://www.smalldatasf.com/
Discover LangChain, the open-source framework for building powerful agentic systems. Learn how to augment LLMs with your private data, moving beyond their training cutoffs. We'll break down how LangChain uses "chains," which are essentially Directed Acyclic Graphs (DAGs) similar to data pipelines you might recognize from dbt. This structure is perfect for common patterns like Retrieval Augmented Generation (RAG), where you orchestrate steps to fetch context from a vector database and feed it to an LLM to generate an informed response, much like preparing data for analysis.
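The RAG pattern described above can be shown without any framework at all. The sketch below is a hand-rolled, dependency-free stand-in (no real LangChain calls): `fake_llm` is a hypothetical stub, and a naive word-overlap ranker stands in for a vector database, so the fixed retrieve → prompt → generate chain is visible.

```python
# Hand-rolled sketch of the RAG pattern: retrieve context, stuff it into a
# prompt, call the model. The steps run in a fixed order, like a tiny DAG /
# dbt-style pipeline.

DOCS = [
    "DuckDB is an in-process analytical database.",
    "dbt builds DAGs of SQL transformations.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank docs by naive word overlap (real systems use embeddings + a vector DB)."""
    q = set(question.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

def build_prompt(question: str, context: list[str]) -> str:
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model: echo the top retrieved context line.
    return "answer based on: " + prompt.splitlines()[1]

# Chain the steps: retrieve -> prompt -> generate.
question = "What does dbt build?"
answer = fake_llm(build_prompt(question, retrieve(question, DOCS)))
print(answer)
```

Because every step's output feeds the next step's input and nothing loops back, this is exactly the acyclic "chain" shape the talk compares to a dbt pipeline.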
Dive into the world of AI agents, where the LLM itself determines the application's control flow. Unlike a predefined DAG, this allows for dynamic, cyclic graphs where an agent can iterate and improve its response based on previous attempts. We'll explore the core challenges in building reliable agents: effective planning and reflection, managing shared memory across multiple agents in a cognitive architecture, and ensuring reliability against task ambiguity. Understand the critical trade-offs between the dependability of static chains and the flexibility of dynamic LLM agents.
Introducing LangGraph, a framework designed to solve the agent reliability problem by balancing agent control with agency. Through a live demo in LangGraph Studio, see how to build complex AI applications using a cyclic graph. We'll demonstrate how a router agent can delegate tasks, execute a research plan with multiple steps, and use cycles to iterate on a problem. You'll also see how human-in-the-loop intervention can steer the agent for improved performance, a critical feature for building robust and observable agentic systems.
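To make the router-delegates-and-loops idea concrete, here is a toy stand-in for the kind of state graph LangGraph builds. This is emphatically not the real LangGraph API, just a minimal sketch: nodes are functions over a shared state dict, edges may point backwards (making the graph cyclic), and a router node picks the next edge.

```python
# Toy cyclic state graph: each node returns (new_state, next_node_name).

class MiniGraph:
    def __init__(self):
        self.nodes = {}

    def add_node(self, name, fn):
        self.nodes[name] = fn

    def run(self, start, state, max_steps=10):
        node = start
        for _ in range(max_steps):           # cycles require a step budget
            if node == "END":
                return state
            state, node = self.nodes[node](state)
        return state

def research(state):
    state["facts"] = state.get("facts", 0) + 1   # one step of the research plan
    return state, "router"

def router(state):
    # Delegates: loop back to research until enough facts are gathered.
    return state, ("research" if state["facts"] < 3 else "END")

g = MiniGraph()
g.add_node("research", research)
g.add_node("router", router)
final = g.run("research", {})
print(final)  # {'facts': 3}
```

Human-in-the-loop intervention fits naturally into this shape: pause at the router, let a person inspect or edit `state`, then resume the loop.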
Explore some of the most exciting AI agents in production today. See how Roblox uses an AI assistant to generate virtual worlds from a prompt, how TripAdvisor’s agent acts as a personal travel concierge to create custom itineraries, and how Replit’s coding agent automates code generation and pull requests. These real-world examples showcase the practical power of moving from simple DAGs to dynamic, cyclic graphs for solving complex, agentic problems.
Build Bigger With Small AI: Running Small Models Locally
It's finally possible to bring the awesome power of Large Language Models (LLMs) to your laptop. This talk will explore how to run and leverage small, openly available LLMs to power common tasks involving data, including selecting the right models, practical use cases for running small models, and best practices for deploying small models effectively alongside databases.
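"Selecting the right model" mostly comes down to fitting weights into local memory. The sketch below is illustrative only: the model names are real Ollama-style tags, but the memory figures and headroom factor are assumptions, not authoritative requirements; check each model's card before relying on them.

```python
# Hedged sketch: pick the largest open model whose weights fit comfortably in
# available memory, leaving headroom for the OS and the KV cache.

MODELS = [  # (name, approx GB needed at 4-bit quantization -- assumed figures)
    ("llama3:70b", 40.0),
    ("llama3:8b", 5.0),
    ("gemma:2b", 1.7),
]

def pick_model(available_gb: float, headroom: float = 1.5):
    """Choose the biggest model that fits; MODELS is ordered largest-first."""
    for name, need in MODELS:
        if need * headroom <= available_gb:
            return name
    return None

print(pick_model(16.0))   # a 16 GB laptop comfortably runs an 8B-class model
print(pick_model(4.0))    # 4 GB still fits a 2B-class model
```

The same logic explains the talk's framing: small models are not consolation prizes but the tier that actually fits the hardware most practitioners already own.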
Bio: Jeffrey Morgan is the founder of Ollama, an open-source tool for getting up and running with large language models. Prior to founding Ollama, Jeffrey founded Kitematic, which was acquired by Docker and evolved into Docker Desktop. He has previously worked at companies including Docker, Twitter, and Google.
Discover how to run large language models (LLMs) locally using Ollama, the easiest way to get started with small AI models on your Mac, Windows, or Linux machine. Unlike massive cloud-based systems, small open source models are only a few gigabytes, allowing them to run incredibly fast on consumer hardware without network latency. This video explains why these local LLMs are not just scaled-down versions of larger models but powerful tools for developers, offering significant advantages in speed, data privacy, and cost-effectiveness by eliminating hidden cloud provider fees and risks.
Learn the most common use case for small models: combining them with your existing factual data to prevent hallucinations. We dive into retrieval augmented generation (RAG), a powerful technique where you augment a model's prompt with information from a local data source. See a practical demo of how to build a vector store from simple text files and connect it to a model like Gemma 2B, enabling you to query your own data using natural language for fast, accurate, and context-aware responses.
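A from-scratch version of that demo's idea fits in a few lines. This is an assumed simplification: a real setup would use a proper embedding model and a generation model like Gemma, while here a bag-of-words vector and cosine similarity stand in so the build-store-then-retrieve flow is visible.

```python
# Minimal "vector store" over local text chunks: embed each chunk, then
# retrieve the closest one to augment the model's prompt.

import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())   # toy stand-in for a real embedding

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Chunks would normally be read from text files on disk.
chunks = [
    "invoices are due within 30 days of receipt",
    "the office is closed on public holidays",
]
store = [(c, embed(c)) for c in chunks]

def retrieve(question: str) -> str:
    q = embed(question)
    return max(store, key=lambda item: cosine(q, item[1]))[0]

context = retrieve("when are invoices due")
prompt = f"Answer using this context: {context}\nQuestion: when are invoices due"
print(prompt)
```

The anti-hallucination effect comes entirely from the last step: the model is asked to answer from the retrieved factual text rather than from its training data.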
Explore the next frontier of local AI with small agents and tool calling, a new feature that empowers models to interact with external tools. This guide demonstrates how an LLM can autonomously decide to query a DuckDB database, write the correct SQL, and use the retrieved data to answer your questions. This advanced tutorial shows you how to connect small models directly to your data engineering workflows, moving beyond simple chat to create intelligent, data-driven applications.
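The tool-calling loop looks like the sketch below. Two hedges: the stdlib's `sqlite3` stands in for DuckDB so the example has no dependencies (the flow is identical), and `fake_llm` is a hypothetical stub that returns the kind of structured tool call a real small model with tool support would emit, SQL included.

```python
# Sketch of agentic tool calling against a SQL database.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 32.5)])

def run_sql(query: str) -> list[tuple]:
    """The tool the model is allowed to call."""
    return conn.execute(query).fetchall()

def fake_llm(question: str) -> dict:
    # Stand-in: a real model decides whether a tool is needed and writes the SQL.
    return {"tool": "run_sql", "args": {"query": "SELECT SUM(amount) FROM orders"}}

def answer(question: str) -> str:
    call = fake_llm(question)
    if call["tool"] == "run_sql":                 # model chose to query the database
        rows = run_sql(call["args"]["query"])
        return f"Total: {rows[0][0]}"
    return "no tool needed"

print(answer("What is the total order amount?"))  # Total: 42.5
```

The key move beyond plain RAG is in `fake_llm`: the model is no longer just consuming retrieved context, it is deciding to act and producing the query itself.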
Get started with practical applications for small models today, from building internal help desks to streamlining engineering tasks like code review. This video highlights how small and large models can work together effectively and shows that open source models are rapidly catching up to their cloud-scale counterparts. It's never been a better time for developers and data analysts to harness the power of local AI.
Where Data Science Meets Shrek: How BuzzFeed uses AI
By introducing a range of AI-enhanced products that amplify creativity and interactivity across its platforms, BuzzFeed has been able to connect with the largest global audience of young people online and cement its role as the defining digital media company of the AI era. Notably, some of BuzzFeed's most successful tools and content experiences thrive on the power of small, focused datasets. Still wondering how Shrek fits into the picture? You'll have to watch!
Video from: https://smalldatasf.com/
📓 Resources Big Data is Dead: https://motherduck.com/blog/big-data-... Small Data Manifesto: https://motherduck.com/blog/small-dat... Why Small Data?: https://benn.substack.com/p/is-excel-... Small Data SF: https://www.smalldatasf.com/
Blog: https://motherduck.com/blog/
Discover how BuzzFeed's Data team, led by Gilad Cohen, harnesses AI for creative purposes, leveraging large language models (LLMs) and generative image capabilities to enhance content creation. This video explores how machine learning teams build tools to create new interactive media experiences, focusing on augmenting creative workflows rather than replacing jobs, allowing readers to participate more deeply in the content they consume.
We dive into the core data science problem of understanding what a piece of content is about, a crucial step for improving content recommendation systems. Learn why traditional methods fall short and how the team is constantly seeking smaller, faster, and more performant models. This exploration covers the evolution from earlier architectures like DistilBERT to modern, more efficient approaches for better content representation, clustering, and user personalization.
A key technique explored is the use of text embeddings, which are dense, low-dimensional vector representations of data. This video provides an accessible explanation of embeddings as a form of compressed knowledge, showing how BuzzFeed creates a unique vector for each article. This allows for simple vector math to find semantically similar content, forming a foundational infrastructure for powerful ranking and recommender systems.
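The "simple vector math" claim is worth making concrete. The sketch below uses made-up 3-dimensional vectors (real article embeddings come from a model and have hundreds of dimensions): once vectors are normalized to unit length, finding similar content is a dot product and an argmax.

```python
# Illustrative vector math on toy article embeddings: nearest neighbor by
# cosine similarity, which for unit-length vectors is just the dot product.

import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

articles = {
    "cat quiz":      normalize([0.9, 0.1, 0.0]),
    "dog quiz":      normalize([0.8, 0.2, 0.1]),
    "tax explainer": normalize([0.0, 0.1, 0.95]),
}

def most_similar(name):
    q = articles[name]
    others = [(sum(a * b for a, b in zip(q, articles[k])), k)
              for k in articles if k != name]
    return max(others)[1]           # highest dot product = most similar

print(most_similar("cat quiz"))  # dog quiz
```

This is the foundational operation a ranking or recommender system repeats at scale: embed once, then compare cheaply.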
Explore how BuzzFeed leverages generative image capabilities to create new interactive formats. The journey began with Midjourney experiments and evolved to building custom tools by fine-tuning a Stable Diffusion XL model using LoRA (Low-Rank Adaptation). This advanced technique provides greater control over image output, enabling the rapid creation of viral AI generators that respond to trending topics and allow for massive user engagement.
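The gist of LoRA fits in a few lines of pure Python. This is a numerical illustration with made-up values, not a training recipe: instead of updating a full weight matrix W, LoRA trains two small matrices B (d × r) and A (r × d) with rank r much smaller than d, and applies W + BA at inference.

```python
# LoRA in miniature: a rank-1 update to a frozen 4x4 weight matrix.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d, r = 4, 1                                   # full dim 4, adapter rank 1
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen weights
B = [[0.5], [0.0], [0.0], [0.0]]              # d x r, trained
A = [[0.0, 1.0, 0.0, 0.0]]                    # r x d, trained

delta = matmul(B, A)                          # low-rank update, only 2*d*r params
W_adapted = [[W[i][j] + delta[i][j] for j in range(d)] for i in range(d)]

# A full fine-tune would touch d*d = 16 params; LoRA trained 2*d*r = 8 here,
# and the gap grows quadratically as d increases.
```

That parameter saving is why fine-tuning a model as large as Stable Diffusion XL becomes feasible for a media team: only the small adapter matrices are trained and shipped.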
Finally, see a practical application of machine learning for content optimization. BuzzFeed uses its vast historical dataset from Bayesian A/B testing to train a model that predicts headline performance. By generating multiple headline candidates with an LLM like Claude and running them through this predictive model, they can identify the winning headline. This showcases how to use unique, in-house data to build powerful tools that improve click-through rates and drive engagement, pointing to a significant transformation in how media is created and consumed.
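The candidates-then-score pattern reduces to generate, predict, argmax. In this hedged sketch, `fake_llm_candidates` stands in for an LLM like Claude and `predicted_ctr` is a hypothetical stub with invented heuristic features; the real system's scorer is a model trained on BuzzFeed's historical A/B-test data.

```python
# Sketch of LLM-generated headline candidates ranked by a predictive model.

def fake_llm_candidates(topic: str) -> list[str]:
    return [
        f"You Won't Believe These Facts About {topic}",
        f"17 Things Everyone Gets Wrong About {topic}",
        f"A Brief History of {topic}",
    ]

def predicted_ctr(headline: str) -> float:
    """Stand-in scorer using crude, assumed features; the real model is learned."""
    score = 0.02
    if any(ch.isdigit() for ch in headline):
        score += 0.015          # assumed: listicle numbers lift CTR
    if "You" in headline:
        score += 0.01           # assumed: direct address lifts CTR
    return score

def pick_headline(topic: str) -> str:
    candidates = fake_llm_candidates(topic)
    return max(candidates, key=predicted_ctr)

print(pick_headline("Shrek"))  # the numbered listicle wins under this stub
```

The pattern's leverage is in the scorer, not the generator: the LLM supplies cheap variety, while the in-house model, trained on data competitors do not have, does the choosing.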