talk-data.com

Topic: amazon bedrock (18 tagged)

Activity Trend: 1 peak/qtr, 2020-Q1 to 2026-Q1

Activities: 18 activities · Newest first

The insurance industry holds billions of historical documents, with hundreds of thousands more generated every day. These documents, in varying formats, are used both internally and with other insurers to agree terms, assess risk, and create accurate quotes. Historically, each document could take hours or even days to process manually before being loaded into each company’s systems. AI is helping several companies reduce this processing time to minutes by automating the work with Intelligent Document Processing (IDP), saving time, increasing accuracy, and readying the data for further analysis, giving valuable insights back to the business. IDP uses the latest AI services: Amazon Bedrock, Amazon Textract to extract text, Amazon Comprehend to classify documents and detect entities within them, and custom models trained on labelled ground truth in SageMaker. This session will look at these services alongside the pre-processing, processing, and post-processing challenges and showcase how to jointly leverage them for the best results. Using IDP, one customer achieved an accuracy of over 90% and a more than 500x reduction in processing time across over £500 million worth of business.
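The extract-and-classify steps described above can be sketched roughly as below, assuming the documents live in S3. The bucket/key handling and the 0.8 confidence threshold are illustrative assumptions, not details from the talk, and boto3 is imported lazily inside the AWS-calling functions so the pure filtering helper works without AWS access:

```python
def extract_text(bucket: str, key: str) -> str:
    """Pull raw text lines from a scanned document with Amazon Textract."""
    import boto3  # lazy import: only needed when actually calling AWS
    textract = boto3.client("textract")
    resp = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    # Textract returns PAGE/LINE/WORD blocks; join the LINE blocks.
    return "\n".join(
        b["Text"] for b in resp["Blocks"] if b["BlockType"] == "LINE"
    )


def detect_entities(text: str) -> list[dict]:
    """Detect entities (parties, dates, amounts) with Amazon Comprehend."""
    import boto3
    comprehend = boto3.client("comprehend")
    resp = comprehend.detect_entities(Text=text, LanguageCode="en")
    return resp["Entities"]


def confident_entities(entities: list[dict], threshold: float = 0.8) -> list[dict]:
    """Keep only entity hits above a confidence threshold before they
    feed downstream quoting and risk systems."""
    return [e for e in entities if e["Score"] > threshold]
```

In a full pipeline these steps would sit behind a queue, with custom SageMaker models handling the document types the general-purpose services miss.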

This demonstration showcases how conversational AI can create tangible objects through an innovative AWS-powered workflow. Using Amazon Bedrock, Nova Pro, and SageMaker, the system transforms user conversations into personalized 3D-printed keychains. The live demonstration illustrates how businesses can combine generative AI and manufacturing to create unique, personalized customer experiences.

AI agents are a new class of software applications that use AI models to reason, plan, act, learn, and adapt in pursuit of user-defined goals with limited human oversight. Building AI agents that can reliably perform complex tasks has become increasingly accessible thanks to open source frameworks like Strands Agents. However, moving from a promising proof-of-concept to a production-ready agent that can scale to thousands of users presents significant challenges.

Through hands-on demos, we'll build a system from scratch and progressively deploy it to production using the comprehensive enterprise-grade services provided by AgentCore. You'll learn to implement key production capabilities, including secure session isolation, persistent memory, identity management, and real-time observability. The learnings can be applied to any framework and model, hosted on Amazon Bedrock or elsewhere.
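A minimal sketch of the conversational core such an agent builds on, using the Bedrock Converse API. The model ID is a placeholder, and the in-process message list stands in for the persistent, session-isolated memory a production service like AgentCore would provide; boto3 is imported lazily so the message-shaping helper runs offline:

```python
def add_turn(history: list[dict], role: str, text: str) -> list[dict]:
    """Return a new history with one turn appended, in the Converse
    API message shape (does not mutate the input list)."""
    return history + [{"role": role, "content": [{"text": text}]}]


def chat(
    history: list[dict],
    user_text: str,
    model_id: str = "anthropic.claude-3-5-sonnet-20240620-v1:0",
) -> tuple[str, list[dict]]:
    """Send the conversation to a Bedrock model and return (reply, new history)."""
    import boto3  # lazy import: only needed when actually calling AWS
    client = boto3.client("bedrock-runtime")
    history = add_turn(history, "user", user_text)
    resp = client.converse(modelId=model_id, messages=history)
    reply = resp["output"]["message"]["content"][0]["text"]
    return reply, add_turn(history, "assistant", reply)
```

Swapping the model ID is all that is needed to target a different Bedrock-hosted model, which is what makes the framework-and-model-agnostic claim above plausible.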

Demonstration of building an agentic AI application to support financial analysts with a conversational AI assistant, including architectural components (Anthropic Claude 3.5 Sonnet, Amazon Bedrock, Elasticsearch Vector Database, Elasticsearch MCP Server) and capabilities such as pattern identification, linking news sentiment to portfolio performance, and real-time natural language data engagement.

Scaling Agentic AI with Claude, MCP, and Vectors. We'll focus on a financial services agentic AI case study that empowers analysts with a conversational AI assistant built using Anthropic Claude 3.5 Sonnet on Amazon Bedrock, an Elasticsearch vector database, and an Elasticsearch MCP (Model Context Protocol) server. This assistant transforms complex workflows, like assessing the impact of market news on thousands of customer portfolios, into an intuitive, natural language dialogue. We'll demonstrate how to build and deploy AI agents that help you rapidly identify patterns in complex financial data; build meaningful correlations, such as linking news sentiment to portfolio performance; and engage with your data in real time using natural language. We'll also highlight how MCP servers can integrate additional services, such as weather data and email notifications, demonstrating the combined power of search and generative AI.
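The retrieval half of such an assistant can be sketched as a kNN query against an Elasticsearch vector index, whose hits are then handed to Claude on Bedrock as context. The index name (`market-news`), field names, and `k` below are illustrative assumptions:

```python
def build_knn_query(embedding: list[float], k: int = 5) -> dict:
    """Shape a kNN search body for an index with a `news_vector`
    dense-vector field (field names are assumed, not from the talk)."""
    return {
        "knn": {
            "field": "news_vector",
            "query_vector": embedding,
            "k": k,
            "num_candidates": 10 * k,  # wider candidate pool for recall
        },
        "_source": ["headline", "sentiment", "tickers"],
    }


def retrieve(es, embedding: list[float]) -> list[dict]:
    """Run the query; `es` is an elasticsearch.Elasticsearch client."""
    resp = es.search(index="market-news", body=build_knn_query(embedding))
    return [hit["_source"] for hit in resp["hits"]["hits"]]
```

An MCP server wraps this same retrieval as a tool the agent can call on demand, alongside tools for the weather and email services mentioned above.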

In this hands-on workshop, you will build an enterprise AI assistant using Amazon Bedrock and Informatica's no-code/low-code AI Agent Framework. You will also learn how to use pre-built jumpstart recipes to speed up AI solution development. Additionally, you will explore Informatica's Generative AI blueprint, enabling businesses to bring semantic intelligence from Informatica Intelligent Data Management Cloud (IDMC) and integrate trusted, high-quality data from different sources (including Informatica's Business 360 applications) to create business-aware enterprise solutions.

In this hands-on workshop, you will build an enterprise AI assistant using Amazon Bedrock and Informatica’s no-code/low-code AI Agent Framework. You will also learn how to use pre-built jumpstart recipes to speed up AI solution development. Additionally, you will explore Informatica’s overall Generative AI blueprint, which enables businesses to bring semantic intelligence from Informatica Intelligent Data Management Cloud (IDMC) and integrate trusted, high-quality data from different sources (including Informatica’s Business 360 applications) to create business-aware enterprise solutions.

Training an AI-powered Slackbot sounds straightforward - until your model starts ignoring half of the data you feed it. At AWS User Group Vienna, we built OTTO, a Slack-integrated AI assistant, fine-tuned using the open source tool InstructLab and deployed on Amazon Bedrock. But as we scaled up, we ran into real-world bottlenecks: training on MacBooks was slow, retrieval was inconsistent, and debugging was far harder than expected. This talk goes beyond the ‘perfect AI stack’ and into the messy reality of model tuning, infrastructure choices, and the unexpected lessons we learned. If you’re working on AI-powered assistants (or just curious how fast things can go sideways), this talk will provide practical insights into our approach, with a focus on cost efficiency, and what we learned along the way.

In this session, you will explore how to build an AI Assistant leveraging Knowledge Bases with Amazon Bedrock. We will also showcase a live demo of an 'Insurance Policy AI Assistant,' providing insights into its real-world applications. Additionally, we will guide you through the architecture of the demo, offering a comprehensive understanding of the underlying technology and its potential.
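A Knowledge Bases assistant like the one demoed above can be driven by a single `RetrieveAndGenerate` call, which handles retrieval and answer synthesis together. The knowledge base ID and model ARN below are placeholders you would replace with your own, and boto3 is imported lazily so the request builder runs offline:

```python
KB_ID = "YOUR_KB_ID"  # placeholder: your Bedrock Knowledge Base ID
MODEL_ARN = (
    "arn:aws:bedrock:us-east-1::foundation-model/"
    "anthropic.claude-3-5-sonnet-20240620-v1:0"
)


def build_rag_request(question: str) -> dict:
    """Kwargs for the bedrock-agent-runtime RetrieveAndGenerate call."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KB_ID,
                "modelArn": MODEL_ARN,
            },
        },
    }


def ask(question: str) -> str:
    """Query the knowledge base and return the generated answer."""
    import boto3  # lazy import: only needed when actually calling AWS
    client = boto3.client("bedrock-agent-runtime")
    resp = client.retrieve_and_generate(**build_rag_request(question))
    return resp["output"]["text"]
```

For an insurance-policy assistant, the knowledge base would be synced from a document store of policy wordings, so answers stay grounded in the insurer's own terms.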

In this session, we will focus on fine-tuning, continuous pretraining, and retrieval-augmented generation (RAG) to customize foundation models using Amazon Bedrock. Attendees will explore and compare strategies such as prompt engineering, which reformulates tasks into natural language prompts, and fine-tuning, which involves updating the model's parameters based on new tasks and use cases. The session will also highlight the trade-offs between usability and resource requirements for each approach. Participants will gain insights into leveraging the full potential of large models and learn about future advancements aimed at enhancing their adaptability.
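The fine-tuning path discussed above maps to a Bedrock model customization job. As a rough sketch, the job name, role ARN, base model, S3 URIs, and hyperparameters below are all placeholder assumptions; boto3 is imported lazily so the config builder runs offline:

```python
def build_customization_job(job_name: str, train_s3: str, out_s3: str) -> dict:
    """Assemble the kwargs for a Bedrock fine-tuning job; every ARN,
    model ID, and hyperparameter here is an illustrative placeholder."""
    return {
        "jobName": job_name,
        "customModelName": f"{job_name}-model",
        "roleArn": "arn:aws:iam::123456789012:role/BedrockFineTuneRole",
        "baseModelIdentifier": "amazon.titan-text-lite-v1",
        "customizationType": "FINE_TUNING",
        "trainingDataConfig": {"s3Uri": train_s3},
        "outputDataConfig": {"s3Uri": out_s3},
        # Hyperparameter values are passed as strings.
        "hyperParameters": {"epochCount": "2", "learningRate": "0.00001"},
    }


def start_job(cfg: dict) -> str:
    """Submit the customization job and return its ARN."""
    import boto3  # lazy import: only needed when actually calling AWS
    bedrock = boto3.client("bedrock")
    return bedrock.create_model_customization_job(**cfg)["jobArn"]
```

This illustrates the trade-off the session highlights: prompt engineering needs no job at all, while fine-tuning requires curated training data, an IAM role, and compute time before the customized model can be invoked.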