talk-data.com

Speaker

Shelbee Eigenbrode

2 talks

Gen AI/ML Specialist Solutions Architects & Data Scientists Leader, Amazon Web Services

Talks & appearances

2 activities

AWS re:Invent 2024 - Accelerate production for gen AI using Amazon SageMaker MLOps & FMOps (AIM354)

Amazon SageMaker provides purpose-built tools to create a reliable path to production for both machine learning and generative AI workflows. SageMaker MLOps helps you automate and standardize processes across generative AI and ML lifecycles. Using SageMaker, you can train, test, troubleshoot, deploy, and govern models at scale to boost your productivity while maintaining model performance in production. Explore the latest capabilities such as SageMaker Experiments with MLflow, SageMaker Pipelines, and SageMaker Model Registry, which support efficiencies in your ML workflows (MLOps) and generative AI workflows (FMOps). Learn how to bring generative AI concepts to production quickly and securely.
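As a rough illustration of the workflow the session describes, here is a minimal sketch (not taken from the talk) of a SageMaker Pipeline that runs one training step and registers the resulting model in the SageMaker Model Registry. The role ARN, container image URI, S3 paths, and model package group name are placeholders you would replace with your own values.

```python
# Minimal SageMaker Pipelines sketch: one training step plus model registration.
# All ARNs, image URIs, S3 paths, and names below are placeholders.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep
from sagemaker.workflow.step_collections import RegisterModel

session = sagemaker.session.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

estimator = Estimator(
    image_uri="<training-image-uri>",                # placeholder training container
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/model-artifacts/",   # placeholder output location
    sagemaker_session=session,
)

train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": "s3://my-bucket/train/"},       # placeholder training data
)

register_step = RegisterModel(
    name="RegisterModel",
    estimator=estimator,
    model_data=train_step.properties.ModelArtifacts.S3ModelArtifacts,
    content_types=["text/csv"],
    response_types=["text/csv"],
    inference_instances=["ml.m5.xlarge"],
    transform_instances=["ml.m5.xlarge"],
    model_package_group_name="my-model-group",       # placeholder registry group
)

pipeline = Pipeline(
    name="GenAIMLOpsPipeline",
    steps=[train_step, register_step],
    sagemaker_session=session,
)

# pipeline.upsert(role_arn=role)  # create or update the pipeline definition
# pipeline.start()                # launch an execution
```

Registering the trained model into a model package group is what lets downstream deployment and governance steps pick it up, which is the "path to production" the abstract refers to.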

Learn more: AWS re:Invent: https://go.aws/reinvent. More AWS events: https://go.aws/3kss9CP

About AWS: Amazon Web Services (AWS) hosts events, both online and in-person, bringing the cloud computing community together to connect, collaborate, and learn from AWS experts. AWS is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.

#AWSreInvent #AWSreInvent2024

Generative AI on AWS

Companies today are moving rapidly to integrate generative AI into their products and services. But there's a great deal of hype (and misunderstanding) about the impact and promise of this technology. With this book, Chris Fregly, Antje Barth, and Shelbee Eigenbrode from AWS help CTOs, ML practitioners, application developers, business analysts, data engineers, and data scientists find practical ways to use this exciting new technology.

You'll learn the generative AI project life cycle, including use case definition, model selection, model fine-tuning, retrieval-augmented generation, reinforcement learning from human feedback, and model quantization, optimization, and deployment. And you'll explore different types of models, including large language models (LLMs) and multimodal models such as Stable Diffusion for generating images and Flamingo/IDEFICS for answering questions about images.

- Apply generative AI to your business use cases
- Determine which generative AI models are best suited to your task
- Perform prompt engineering and in-context learning
- Fine-tune generative AI models on your datasets with low-rank adaptation (LoRA)
- Align generative AI models to human values with reinforcement learning from human feedback (RLHF)
- Augment your model with retrieval-augmented generation (RAG)
- Explore libraries such as LangChain and ReAct to develop agents and actions
- Build generative AI applications with Amazon Bedrock (see the sketch below)
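To make the last bullet concrete, here is a minimal sketch (not taken from the book) of calling a text model through the Amazon Bedrock runtime with boto3. The region, model ID, and prompt are placeholder choices, and the request and response body shapes differ by model provider; this example assumes an Anthropic Claude 3 messages-style body.

```python
# Minimal Amazon Bedrock invocation sketch using boto3.
# Region, model ID, and prompt are placeholders; the body format is model-specific.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # placeholder region

# Claude 3 "messages" request body (other providers expect different fields).
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user",
         "content": "Summarize retrieval-augmented generation in two sentences."},
    ],
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    body=body,
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```

Swapping the model ID and adjusting the request body adapts the same call to other Bedrock-hosted models.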