Hands-on exercise to build, fine-tune, and apply the Lag-Llama transformer to time-series data.
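To make the exercise concrete, here is a minimal zero-shot forecasting sketch. The LagLlamaEstimator import path and constructor arguments are assumptions based on the open-source lag-llama repository and may differ by version; the GluonTS calls are standard.

```python
from gluonts.dataset.repository.datasets import get_dataset
from gluonts.evaluation import make_evaluation_predictions

# Assumed import path from the open-source lag-llama repository;
# constructor arguments vary across versions.
from lag_llama.gluon.estimator import LagLlamaEstimator

dataset = get_dataset("m4_hourly")  # any GluonTS-format dataset works

estimator = LagLlamaEstimator(
    ckpt_path="lag-llama.ckpt",  # pretrained checkpoint, downloaded separately
    prediction_length=24,        # forecast horizon
    context_length=32 * 24,      # history the model conditions on
)
predictor = estimator.create_predictor(
    estimator.create_transformation(),
    estimator.create_lightning_module(),
)

forecast_it, ts_it = make_evaluation_predictions(
    dataset=dataset.test, predictor=predictor, num_samples=100
)
forecasts = list(forecast_it)  # probabilistic sample paths per series
```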
Topic: fine-tuning (13 tagged talks)
Finetuning and Inference
Finetuning and inference workflows for geospatial foundation models using TerraTorch.
Overview of Small Language Models (SLMs) for solving business problems and improving device capabilities. Compares SLMs with Large Language Models (LLMs) and explains why SLMs may be better suited to an organization's needs, with emphasis on customization via fine-tuning. Covers deploying models on edge devices for real-time processing and decision-making, plus a roadmap from cloud development to cross-platform deployment, using Azure for seamless integration and IP ownership. Concludes with how Microsoft's Student Ambassador Program can empower students to bridge academia and industry and foster innovation.
This project aims to leverage retrieval-augmented generation (RAG) and fine-tuning of LLMs to create an AI-based mental health assistant that could support a psychotherapist.
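As a rough illustration of the retrieval half of such an assistant, a minimal RAG sketch might look like the following. The corpus snippets are placeholders, and generate() is a hypothetical call to the project's fine-tuned LLM.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Placeholder corpus of vetted psychoeducation snippets.
corpus = [
    "Grounding techniques can help manage acute anxiety.",
    "Sleep hygiene strongly affects mood regulation.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(corpus, normalize_embeddings=True)

def retrieve(query, k=2):
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity (embeddings are unit-norm)
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

def answer(query):
    context = "\n".join(retrieve(query))
    prompt = (
        "Answer using only the context below; defer to a clinician when unsure.\n"
        f"Context:\n{context}\nQuestion: {query}"
    )
    return generate(prompt)  # hypothetical call to the project's fine-tuned LLM
```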
In this session, we will focus on fine-tuning, continuous pretraining, and retrieval-augmented generation (RAG) to customize foundation models using Amazon Bedrock. Attendees will explore and compare strategies such as prompt engineering, which reformulates tasks into natural language prompts, and fine-tuning, which involves updating the model's parameters based on new tasks and use cases. The session will also highlight the trade-offs between usability and resource requirements for each approach. Participants will gain insights into leveraging the full potential of large models and learn about future advancements aimed at enhancing their adaptability.
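For attendees who want a head start, a hedged sketch of launching a Bedrock fine-tuning job through boto3 is below. All bucket paths, the role ARN, and the base-model identifier are placeholders, and the exact fields should be checked against the Bedrock documentation for your region.

```python
import boto3

bedrock = boto3.client("bedrock")

bedrock.create_model_customization_job(
    jobName="demo-finetune",
    customModelName="my-custom-model",
    roleArn="arn:aws:iam::123456789012:role/BedrockFineTuneRole",  # placeholder
    baseModelIdentifier="amazon.titan-text-express-v1",            # example base model
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},    # prompt/completion JSONL
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
    hyperParameters={"epochCount": "2"},  # values are strings in this API
)
```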
Julian and David will cover the hackathon project they worked on that won at the New York Stock Exchange: fine-tuning an LLM to generate summaries for Airflow task failures.
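A sketch of the general pattern (not their exact implementation): an Airflow on_failure_callback that hands the failure context to a fine-tuned model, where summarize_with_llm is a hypothetical wrapper around that model.

```python
def summarize_failure(context):
    """Airflow on_failure_callback: summarize a task failure for on-call."""
    ti = context["task_instance"]
    prompt = (
        "Summarize this Airflow task failure for an on-call engineer.\n"
        f"DAG: {ti.dag_id}, task: {ti.task_id}, try: {ti.try_number}\n"
        f"Exception: {context.get('exception')}"
    )
    summary = summarize_with_llm(prompt)  # hypothetical fine-tuned model call
    print(summary)  # in practice: post to Slack, PagerDuty, etc.

# Attach it so every failing task in the DAG gets summarized:
# with DAG(..., default_args={"on_failure_callback": summarize_failure}):
```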
ChatGPT is awesome, but developing with its API comes at a cost. Fortunately, there are alternatives: the Google Gemini API, together with open-source tools like Streamlit and Python, can fetch prompt results using just an API key. In this presentation, I'll explore how to create a lightweight, self-service, end-to-end LLM application using prompt engineering and fine-tuning based on user requests. Additionally, I'll demonstrate how to build a food suggestion application based on ingredients or food names.
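A minimal sketch of such a food suggestion app, assuming the streamlit and google-generativeai packages; the model name and prompt are illustrative.

```python
import os
import streamlit as st
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])  # free-tier API key
model = genai.GenerativeModel("gemini-1.5-flash")

st.title("Food suggester")
ingredients = st.text_input("Ingredients you have (comma-separated)")

if st.button("Suggest dishes") and ingredients:
    prompt = (
        "Suggest three dishes I can cook with only these ingredients, "
        f"with one-line instructions each: {ingredients}"
    )
    st.write(model.generate_content(prompt).text)
```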
In this episode, we are partnering with the AI Advisory team in Microsoft for Startups to explore a unique use case from a leading startup in the program: OneAI. OneAI's approach to enterprise AI is to curate and fine-tune the world's top AI capabilities and package them as APIs, empowering businesses to deploy tailored AI solutions in days. During this episode, we will explore how their AI solutions are designed to ensure consistent, predictable output and alignment with the source documents, bolstering trust and enhancing business outcomes. We will also share product demos of building their AI Agent, optimizing both the tuning process and long-term performance in terms of cost, speed, and carbon footprint, all while emphasizing transparency and explainability.
Learn how to tune models on your own treasure trove of data.
Learn about fine-tuning, prompt tuning, guardrails, and middleware to make LLMs more consistent and reliable.
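As one example of guardrail-style middleware, the sketch below validates model output against an expected JSON shape and retries with corrective feedback; call_llm is a hypothetical stand-in for any chat-completion client.

```python
import json

def guarded_call(prompt, max_retries=2):
    """Retry the model until its output passes a simple JSON guardrail."""
    feedback = ""
    for _ in range(max_retries + 1):
        raw = call_llm(prompt + feedback)  # hypothetical LLM call
        try:
            data = json.loads(raw)
            if isinstance(data, dict) and "answer" in data:
                return data  # passed the guardrail
        except json.JSONDecodeError:
            pass
        feedback = '\nReturn ONLY valid JSON like {"answer": "..."}.'
    raise ValueError("Model never produced valid JSON")
```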
Foundations of LLMs and Python Basics; Understanding Natural Language Processing; Transformers and Attention; LLM Development: Fine-tuning and Prompt Engineering; Retrieval-Augmented Generation (RAG); Introduction to LLM Agents; Advanced Topics for Production LLM Application
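To ground the "Transformers and Attention" module in the outline above, here is scaled dot-product attention, the core operation of transformer layers, in plain NumPy for illustration.

```python
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])              # similarity, scaled
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)            # softmax over keys
    return weights @ V                                   # weighted mix of values

Q = K = V = np.random.randn(4, 8)  # 4 tokens, dim 8 (self-attention)
print(attention(Q, K, V).shape)    # (4, 8)
```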