This talk explores how foundation models, originally developed for unstructured data such as text and images, now enable in-context learning on structured relational data. We will examine how recent developments allow these models to generalize across diverse tabular prediction tasks without retraining, by leveraging schema-aware representations and attention mechanisms over multi-table structures. The session will highlight emerging research directions at the intersection of deep learning, graph-based transformer architectures, and multi-modal relational datasets. Finally, we will see how these innovations let an expert practitioner reduce the time to prediction from months to seconds by applying predictive models that operate directly on the raw database.
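The abstract stops at this level of detail, so as a purely illustrative aside, the snippet below sketches what in-context prediction on tabular data can look like in code. It uses the publicly available single-table TabPFN classifier as a stand-in; the multi-table, schema-aware setting the talk describes is not shown here, and TabPFN, the sklearn demo dataset, and all settings are assumptions for illustration rather than anything named in the abstract.

# Minimal sketch of in-context learning on tabular data (single-table case).
# Assumes the open-source TabPFN package is installed: pip install tabpfn
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = TabPFNClassifier()     # pretrained transformer; no task-specific training
clf.fit(X_train, y_train)    # "fit" only stores the labeled rows as context
preds = clf.predict(X_test)  # one forward pass conditions on that context

print(f"accuracy: {accuracy_score(y_test, preds):.3f}")

The point mirrored from the abstract is that the same pretrained model handles a new prediction task without gradient updates; the relational extension discussed in the talk would attend over rows drawn from several linked tables rather than a single feature matrix.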
Topic: in-context learning
Activity Trend: 2020-Q1 to 2026-Q1 (peak 1/qtr)
Filtering by: Matthias Fey