This talk explores how foundation models, originally developed for unstructured data such as text and images, are now enabling in-context learning on structured relational data. We will examine how recent developments allow these models to generalize across diverse tabular prediction tasks without retraining, by leveraging schema-aware representations and attention mechanisms over multi-table structures. The session will highlight emerging research directions at the intersection of deep learning, graph-based transformer architectures, and multi-modal relational datasets. Throughout the presentation, we will see how these innovations, by introducing predictive models that operate directly on the raw database, can cut an expert practitioner's time to prediction from months to seconds.
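For intuition only, and simplifying away the multi-table, schema-aware aspects described above, the sketch below shows the core in-context idea on a flat table: labeled rows are supplied as context, and a new row's prediction is an attention-weighted combination of their labels, with no retraining. The toy model and all names here are assumptions, not the architecture covered in the talk.

```python
# Illustrative sketch of in-context prediction over tabular rows: labeled
# "context" rows are attended to by unlabeled "query" rows, and the
# prediction is an attention-weighted average of the context labels.
# This is a toy stand-in for a pretrained tabular foundation model.
import numpy as np

def in_context_predict(context_X, context_y, query_X, temperature=1.0):
    """Predict labels for query_X from (context_X, context_y) without retraining."""
    # Similarity between every query row and every labeled context row.
    scores = query_X @ context_X.T / temperature        # (n_query, n_context)
    scores -= scores.max(axis=1, keepdims=True)          # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)        # softmax attention
    return weights @ context_y                           # weighted label average

# Ten labeled example rows and two new rows; no gradient updates anywhere.
rng = np.random.default_rng(0)
context_X = rng.normal(size=(10, 4))
context_y = (context_X[:, 0] > 0).astype(float)          # toy binary target
query_X = rng.normal(size=(2, 4))
print(in_context_predict(context_X, context_y, query_X)) # values in [0, 1]
```

A real system would replace the single attention step with a pretrained foundation model, but the interface is the same: context rows in, predictions out, no fine-tuning.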
Topic: in-context learning
Organizations develop feedback loops to continuously improve quality. One such loop is learning from user interactions with your data, retraining models, deploying the new models, and learning again. The learning curve for building such a loop is steep: it requires ML experience and tooling. Most teams, however, can easily provide labeled examples. In-Context Learning (ICL) is a method for adding classification examples to the input of foundation models (such as LLMs).

This talk defines an Adaptive ICL strategy using Retrieval for Examples, where the model's output is used for content retrieval, for expanding the example set for future model training, and for real-time tracking of user behaviour. Adaptive ICL is therefore an easy way for teams to get immediate results with AI while laying the foundation for more advanced ML loops in the future.
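As a rough sketch of what one Adaptive ICL step might look like (with placeholder `embed` and `llm` functions standing in for an embedding model and a foundation-model call, not any specific API): retrieve the labeled examples most similar to a new item, classify the item with those examples in the prompt, and append the result to the example set for future retrieval and model training.

```python
# One Adaptive ICL step, sketched under assumptions: retrieve the most similar
# labeled examples, classify the new item in-context, then expand the example
# set so future retrieval (and eventual model training) improves.
import math

def embed(text):
    # Toy embedding: normalized character-frequency vector. A real system
    # would call an embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

def adaptive_icl_classify(item, examples, llm, k=3):
    """Classify `item` using the k most similar labeled examples as context.

    `examples` is a mutable list of {"text", "label", "emb"} dicts;
    `llm` is any prompt -> completion callable (placeholder for an LLM).
    """
    query_emb = embed(item)
    shots = sorted(examples, key=lambda ex: cosine(query_emb, ex["emb"]),
                   reverse=True)[:k]
    prompt = "\n".join(f"Text: {ex['text']}\nLabel: {ex['label']}" for ex in shots)
    prompt += f"\nText: {item}\nLabel:"
    label = llm(prompt).strip()
    # Expand the example set: the new (item, label) pair is available for the
    # next retrieval and for training a dedicated model later.
    examples.append({"text": item, "label": label, "emb": query_emb})
    return label
```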