This talk explores how foundation models, originally developed for unstructured data such as text and images, are now enabling in-context learning on structured relational data. We will examine how recent developments allow these models to generalize across diverse tabular prediction tasks without retraining, by leveraging schema-aware representations and attention mechanisms over multi-table structures. The session will highlight emerging research directions at the intersection of deep learning, graph-based transformer architectures, and multi-modal relational datasets. Throughout the presentation, we will see how these innovations allow an expert practitioner to reduce the time to prediction from months to seconds by introducing predictive models that operate directly on the raw database.
talk-data.com
Speaker
Vid Kocijan
Applied ML Engineer
Kumo.AI
Vid is an Applied ML Engineer on the Research team at Kumo.ai, where he develops AI-powered products. His work spans foundation models, language processing, user-centric AI systems, and Kumo's predictive querying language, which together enable novel predictive AI solutions for relational data. Prior to joining Kumo, Vid earned his PhD from the University of Oxford, where he focused on improving the pre-training of neural network models on unstructured data.
Bio from: AI-Powered Data & Search: Unlocking Intelligence Across Systems
Talks & appearances