talk-data.com

Topic: foundation models (7 tagged)

Activity Trend: 2020-Q1 to 2026-Q1 (peaks of 1 activity per quarter)

Activities

7 activities · Newest first

This talk explores how foundation models, originally developed for unstructured data such as text and images, are now enabling in-context learning on structured relational data. We will examine how recent developments allow these models to generalize across diverse tabular prediction tasks without retraining, by leveraging schema-aware representations and attention mechanisms over multi-table structures. The session will highlight emerging research directions at the intersection of deep learning, graph-based transformer architectures, and multi-modal relational datasets. Throughout the presentation, we will see how these innovations let practitioners cut the time to prediction from months to seconds, using predictive models that operate directly on the raw database.
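As a point of reference for the in-context idea, the sketch below shows single-table tabular prediction without any task-specific training, assuming the open-source tabpfn package (an assumption; the abstract does not name the multi-table relational systems it covers, which generalize this idea to whole databases).

import boto3  # not needed here; see later examples
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier  # assumed: pip install tabpfn

# Load a small single-table classification problem.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "fit" only stores the labeled rows as in-context examples; prediction is a
# single forward pass of the pretrained transformer, with no gradient updates.
clf = TabPFNClassifier()
clf.fit(X_train, y_train)
preds = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, preds))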

Abstract: The talk introduces Any Compression via Iterative Pruning (ACIP), a novel approach designed to give users intuitive control over the compression-performance trade-off. ACIP uses a single gradient descent run of iterative pruning to establish a global parameter ranking, enabling immediate materialization of models of any target size. It demonstrates strong predictive performance on downstream tasks without costly fine-tuning and achieves state-of-the-art compression for open-weight LLMs, often complementing common quantization techniques.
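To make the "rank once globally, then materialize any size" idea concrete, here is a deliberately simplified PyTorch sketch. Plain weight magnitudes stand in for the importance scores that ACIP learns during its iterative-pruning run, so this illustrates the mechanics of score-based global pruning rather than the actual method.

import torch
import torch.nn as nn

# A small stand-in network; the same logic applies to an LLM's weight matrices.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

# One global ranking over all parameters. Magnitude scores are used here as a
# placeholder for the learned importance scores.
scores = torch.cat([p.detach().abs().flatten() for p in model.parameters()])

def materialize(target_density: float) -> None:
    # Keep roughly `target_density` of all weights: everything below the
    # global score threshold is zeroed out, everything above is kept.
    n_keep = max(1, int(target_density * scores.numel()))
    threshold = torch.topk(scores, n_keep, largest=True).values.min()
    with torch.no_grad():
        for p in model.parameters():
            p.mul_((p.detach().abs() >= threshold).to(p.dtype))

materialize(0.5)  # materialize a model with about 50% of its weights remaining
remaining = sum(int((p != 0).sum()) for p in model.parameters())
print("non-zero parameters:", remaining)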

First, we explore the core concepts of foundation models, such as pretraining, transfer learning and fine-tuning. Second, we look at the advantages and disadvantages of foundation models in time series forecasting: while they can speed up modeling and inference, they are not always the best solution for a given project, so practitioners still need the expertise to apply them correctly and compare them against other methods. Then, we explore some of the major contributions to the field, including TimeGPT, Chronos, Moirai and TimesFM, briefly covering their architectures, capabilities and limitations. Finally, we see TimeGPT in action to demonstrate how a foundation model can be used and how it compares to traditional methods.
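For a feel of what the TimeGPT demonstration involves, a zero-shot forecast might look roughly like the sketch below, assuming the nixtla Python client and a valid API key (the exact setup used in the talk may differ).

import pandas as pd
from nixtla import NixtlaClient  # assumed: pip install nixtla

client = NixtlaClient(api_key="YOUR_API_KEY")  # placeholder credential

# A small univariate series: one timestamp column and one target column.
df = pd.DataFrame({
    "ds": pd.date_range("2023-01-01", periods=36, freq="MS"),
    "y": range(36),
})

# Zero-shot forecast of the next 12 months: no model is trained on this series.
forecast = client.forecast(df=df, h=12, time_col="ds", target_col="y")
print(forecast.head())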

In this session you will discover how to create and fine-tune prompts for a diverse range of AI models on AWS, leveraging fundamental principles of prompt engineering. Whether you're a seasoned AI enthusiast or just stepping into the realm of artificial intelligence, this session will equip you with fundamental concepts and practical techniques to enhance your prompt engineering skills. What you will learn: foundation models and large language models; key concepts of prompt engineering; basic prompt techniques; zero-shot prompting; few-shot prompting; chain-of-thought prompting.
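As an illustration of the zero-shot and few-shot techniques listed above, the sketch below sends prompts to a Bedrock-hosted model through the Converse API with boto3. The model ID is only an assumed example, and AWS credentials are presumed to be configured.

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # assumed example model ID

def ask(prompt: str) -> str:
    # Send a single-turn prompt through the Bedrock Converse API.
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

# Zero-shot: the task is stated directly, with no examples.
print(ask("Classify the sentiment of this review as positive or negative: "
          "'The latency improvements are great.'"))

# Few-shot: a handful of labeled examples precede the new input.
few_shot_prompt = (
    "Classify the sentiment of each review.\n"
    "Review: 'Setup took forever.' -> negative\n"
    "Review: 'The docs were clear and helpful.' -> positive\n"
    "Review: 'The latency improvements are great.' ->"
)
print(ask(few_shot_prompt))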

In this session, we will focus on fine-tuning, continuous pretraining, and retrieval-augmented generation (RAG) to customize foundation models using Amazon Bedrock. Attendees will explore and compare strategies such as prompt engineering, which reformulates tasks into natural language prompts, and fine-tuning, which involves updating the model's parameters based on new tasks and use cases. The session will also highlight the trade-offs between usability and resource requirements for each approach. Participants will gain insights into leveraging the full potential of large models and learn about future advancements aimed at enhancing their adaptability.
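To show where RAG sits relative to prompt engineering and fine-tuning, here is a deliberately simplified sketch: a toy keyword retriever supplies grounding context to a Bedrock-hosted model via the Converse API, leaving the model's weights untouched. In practice, Amazon Bedrock Knowledge Bases and a vector store would replace the toy retriever; the model ID is only an assumed example.

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # assumed example model ID

# A toy "document store"; a real deployment would use embeddings and a vector index.
documents = [
    "Fine-tuning updates a model's weights on labeled, task-specific examples.",
    "Continued pretraining adapts a model to a new domain corpus.",
    "RAG injects retrieved documents into the prompt at inference time, so the base model stays unchanged.",
]

def retrieve(question: str) -> str:
    # Toy retriever: pick the document sharing the most words with the question.
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def answer(question: str) -> str:
    # Ground the model's response in the retrieved context.
    context = retrieve(question)
    prompt = ("Answer the question using only the context below.\n"
              f"Context: {context}\n\nQuestion: {question}")
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

print(answer("How does RAG customize a foundation model without retraining?"))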