In this session, we will focus on fine-tuning, continuous pretraining, and retrieval-augmented generation (RAG) as approaches to customizing foundation models with Amazon Bedrock. Attendees will explore and compare strategies such as prompt engineering, which reformulates a task as a natural-language prompt, and fine-tuning, which updates the model's parameters on data from new tasks and use cases. The session will also highlight the trade-offs each approach makes between ease of use and resource requirements. Participants will gain insight into getting the most out of large models and learn about upcoming advancements aimed at improving their adaptability.
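As a rough illustration of the fine-tuning path discussed above, the sketch below submits a model customization job through the boto3 Bedrock control-plane API. This is a minimal example, not material from the session: the job and model names, IAM role ARN, S3 paths, base model identifier, and hyperparameter values are all placeholders, and the hyperparameter keys that Bedrock accepts vary by base model.

```python
import boto3

# Control-plane client for Amazon Bedrock; model customization jobs are
# managed here, not through the "bedrock-runtime" client used for inference.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Submit a fine-tuning job. Using customizationType="CONTINUED_PRE_TRAINING"
# instead would target the continued/continuous pretraining workflow.
response = bedrock.create_model_customization_job(
    jobName="ft-demo-job",                                        # placeholder
    customModelName="ft-demo-model",                              # placeholder
    roleArn="arn:aws:iam::111122223333:role/BedrockFtRole",       # placeholder IAM role
    baseModelIdentifier="amazon.titan-text-express-v1",           # example base model
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},   # placeholder S3 path
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},         # placeholder S3 path
    hyperParameters={                                             # keys depend on the base model
        "epochCount": "2",
        "batchSize": "1",
        "learningRate": "0.00001",
    },
)
print(response["jobArn"])  # track the job ARN to poll status later
```

By contrast, prompt engineering and RAG require no training job at all; they shape model behavior at inference time, which is the usability-versus-resource trade-off the session compares.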
Topic: fine-tuning (1 tagged item)
Activity Trend: peak of 2 per quarter, 2020-Q1 to 2026-Q1
Top Events
Virtual Summit: LLMs and the Generative AI Revolution (2)
A Weekend with Ernie Chan in London: Trading with GenAI (1)
[AI Alliance] Hyper Parameter Optimization for Computer Vision using TerraTorch (1)
NYC Airflow Rooftop Happy Hour ft. PMC Member Jarek Potiuk! (1)
Data Science Retreat Demo Day #39 (1)
Feb 20 - Virtual AI, ML and Computer Vision Meetup (1)
AI Meetup (September): AI, GenAI and LLMs (1)
Tiny But Mighty: Unleashing the Power of Small Language Models (1)
AI Meetup (April): Generative AI, LLMs and ML (1)
Understanding and Practicing with LLMs (1)
Turn your content into AI Agent with OneAI (1)