talk-data.com

Event

Big Data LDN 2024

2024-09-18 – 2024-09-19 · Big Data LDN, London

Activities tracked

16

Filtering by: LLM

Sessions & talks

Showing 1–16 of 16 · Newest first


Building With Gemini on Google Cloud – an Overview of Architecture, Capabilities and Usage

2024-09-19
Face To Face

This session explores Gemini's capabilities, architecture, and performance benchmarks. We'll delve into the significance of its extensive context window and address the critical aspects of safety, security, and responsible AI use. Hallucination, a common concern in LLM applications, remains a focal point of ongoing development. This talk will highlight recent advancements aimed at mitigating the risk of hallucination to enhance LLM utility across various applications.

Fine-Tuning with IBM InstructLab and the Future of Enterprise LLMs

2024-09-19
Face To Face

Learn about IBM InstructLab, which streamlines the fine-tuning of AI models through knowledge distillation. Discover how this cutting-edge technology can transform your AI projects and make them more efficient and effective.

In addition, we’ll delve into the latest trends in Large Language Models (LLMs), highlighting the benefits of enterprise-ready models such as IBM Granite. We’ll discuss key considerations such as model size, purpose, and the debate between open-sourced and closed models.

How To Build Enterprise AI Applications With Multi-Agent RAG Systems (MARS)

2024-09-19
Face To Face

In the rapidly evolving world of enterprise AI, traditional monolithic approaches are giving way to more agile and efficient architectures. This session will delve into how Multi-Agent Retrieval-Augmented Generation Systems (MARS) are transforming enterprise software development for AI applications. Learn about the core components of AI agents, the challenges of integrating LLMs with enterprise data, and how to build scalable, accurate, and high-performing AI applications.
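The multi-agent pattern the abstract describes can be sketched in a few lines: a router picks a specialised retrieval agent, and the retrieved context would feed generation. All agent names, documents, and routing rules below are illustrative stand-ins, not the speakers' implementation; a real system would use an LLM for both routing and answering.

```python
# Toy multi-agent RAG sketch: a router dispatches a question to a
# specialised retrieval agent; a real MARS system would then have an
# LLM generate from the retrieved context.
# All domains, documents, and routing rules here are illustrative.

DOCS = {
    "hr": {"leave policy": "Employees accrue 25 days of annual leave."},
    "finance": {"expense policy": "Expenses over $500 need approval."},
}

def retrieval_agent(domain: str, question: str) -> list[str]:
    """Return documents from one domain whose key overlaps the question."""
    return [text for key, text in DOCS[domain].items()
            if any(word in question.lower() for word in key.split())]

def router(question: str) -> str:
    """Pick an agent; a real system might use an LLM classifier here."""
    return "finance" if "expense" in question.lower() else "hr"

def answer(question: str) -> str:
    domain = router(question)
    context = retrieval_agent(domain, question)
    # In a real MARS system an LLM would generate from this context.
    return f"[{domain}] " + " ".join(context)

print(answer("What is the expense approval limit?"))
# -> [finance] Expenses over $500 need approval.
```

The point of the decomposition is that each agent can be developed and evaluated against its own data source before being composed into the larger system.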

Improving Large Language Models: How to Make Sure Yours is Production-Ready with Full Data Governance

2024-09-19
Face To Face

AI is changing our work and personal lives, offering unprecedented opportunities in almost every arena. However, many organizations risk undermining their AI-driven projects by neglecting the need to unify, protect, and improve their data from the outset. Join this session to see first-hand examples of how feeding different data sets into a custom Large Language Model (LLM) can impact outcomes and learn how to build your foundation of high-quality, fully governed data today.

Harnessing LLMs and Speech Recognition for Personalised Recommendation Engines

2024-09-19
Face To Face

This session will explore how Large Language Models (LLMs) and speech recognition technology can be combined to create highly personalised and efficient recommendation engines. Attendees will gain practical insights, experience a live demonstration, and see examples of how these technologies can enhance customer experience and operational efficiency.

Attendees will learn:

• The fundamentals of Large Language Models (LLMs) and their broad applications.

• How to build personalised recommendation engines using LLMs.

• Strategies for integrating LLMs and speech recognition.

• Insights into the benefits and challenges of using LLMs and speech capabilities for personalised recommendations.

• A live demonstration of creating a personalised recommendation engine, including interactive speech features.
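The speech-to-recommendation flow outlined above can be sketched end to end with toy stand-ins: the `transcribe` function stands in for a real ASR model, `extract_preferences` for an LLM mining preferences from free speech, and the catalogue and keywords are invented for illustration.

```python
# Toy sketch of a speech-driven recommendation pipeline:
# transcript -> preference extraction -> ranked catalogue items.
# Catalogue, vocabulary, and scoring are illustrative stand-ins for a
# real speech-recognition model and an LLM.

CATALOGUE = {
    "Beach Escape": {"beach", "sun", "relax"},
    "Alpine Trek": {"mountain", "hiking", "snow"},
    "City Break": {"museum", "food", "nightlife"},
}

def transcribe(audio: str) -> str:
    """Stand-in for speech recognition: here 'audio' is already text."""
    return audio.lower()

def extract_preferences(transcript: str) -> set[str]:
    """Stand-in for an LLM extracting preference terms from free speech."""
    vocabulary = set().union(*CATALOGUE.values())
    return {w.strip(".,?!") for w in transcript.split()} & vocabulary

def recommend(audio: str, top_n: int = 2) -> list[str]:
    prefs = extract_preferences(transcribe(audio))
    scored = sorted(CATALOGUE,
                    key=lambda item: len(CATALOGUE[item] & prefs),
                    reverse=True)
    return scored[:top_n]

print(recommend("I love hiking in the mountains and snow"))
```

Swapping the stand-ins for a real ASR model and an LLM keeps the same pipeline shape while making the preference extraction far more robust to conversational phrasing.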

Building Hyper-Personalized LLM Applications with Rich Contextual Data

2024-09-19
Face To Face

In the era of AI-driven applications, personalization is paramount. This talk explores the concept of Full RAG (Retrieval-Augmented Generation) and its potential to revolutionize user experiences across industries. We examine four levels of context personalization, from basic recommendations to highly tailored, real-time interactions.

The presentation demonstrates how increasing levels of context - from batch data to streaming and real-time inputs - can dramatically improve AI model outputs. We discuss the challenges of implementing sophisticated context personalization, including data engineering complexities and the need for efficient, scalable solutions.

Introducing the concept of a Context Platform, we showcase how tools like Tecton can simplify the process of building, deploying, and managing personalized context at scale. Through practical examples in travel recommendations, we illustrate how developers can easily create and integrate batch, streaming, and real-time context using simple Python code, enabling more engaging and valuable AI-powered experiences.
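The layering of batch, streaming, and real-time context that the abstract describes can be illustrated with a minimal merge, where fresher layers override staler ones. The keys and values below are invented for a travel example; this is a sketch of the idea, not the Tecton API.

```python
# Toy sketch of layering context levels into one payload for an LLM:
# batch features, streaming updates, and real-time signals, merged so
# that the freshest layer wins. Keys/values are illustrative only.

def build_context(batch: dict, streaming: dict, realtime: dict) -> dict:
    context = {}
    for layer in (batch, streaming, realtime):  # freshest layer last
        context.update(layer)
    return context

ctx = build_context(
    batch={"home_airport": "LHR", "loyalty_tier": "gold"},
    streaming={"recent_searches": ["lisbon", "porto"]},
    realtime={"current_page": "flights/LIS"},
)
print(ctx["current_page"])
# -> flights/LIS
```

The engineering challenge the talk points to is not the merge itself but keeping each layer fresh and consistent at scale, which is what a Context Platform is meant to manage.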

Is Your Data Office Ready To Mitigate Data Risk?

2024-09-19
Face To Face

In the last decade, data has served as a guide for learning from the past, making decisions in the present, and driving insights for the future. The art of the possible that ChatGPT demonstrated in 2023 channelled investments towards improving data capabilities. Peer competition, the emergence of challenger organisations, and advanced analytics have raised customer expectations and exerted increased pressure on data analysis and exploration.

These raised expectations have translated into new ways of working with data and have demanded that teams become more data-driven. This has resulted in the emergence of data risk. Whatever the expectation, there is always a boundary between what data can and cannot deliver. This boundary is directly related to the original intent of data collection and to organisational data policies, risk policies, and risk appetite. As every part of the organisation touches data, it has become increasingly challenging to mitigate data risks. Acknowledging this, major banks have elevated data risk to a principal risk. This has allowed the data office to exercise more control over how data is used and accessed within an organisation and, most importantly, to embed business accountability for data as required by regulations such as BCBS 239 and GDPR.

In this 30-minute session we will explore:

  • What is data risk? 
  • How do you identify data risk and design a data risk taxonomy? 
  • Who are the key stakeholders within an organisation responsible for mitigating data risk? 
  • How do you design a risk appetite for data risk? 
  • What should key data risk controls look like?

The Key to Unlocking RO(Gen)AI

2024-09-19
Face To Face

While Generative AI has dominated technological discussions since the release of ChatGPT, it represents just a fraction of the broader AI landscape. Many organizations are still struggling to harness its potential. In this session, we’ll explore the key challenges that successful companies have overcome in their AI journeys and highlight the major opportunities for leveraging the full spectrum of data and AI technologies.

The Reality of Building a Modern AI Data Stack

2024-09-18
Face To Face

Data practitioners are feeling pressure around the realities and real-life considerations of building out a data stack that can handle the next generation of data problems in addition to today's data challenges. Considerations like minimizing complexity and cost while focusing on scalability and performance are at the forefront of the data world right now, and how this works in a world where LLMs and deep learning are becoming table stakes is paramount. There are questions about data management at this scale, as well as how to fold in legacy infrastructure and architectures. We'll discuss the modern AI data stack in this talk, delving into the realities of building the data ecosystem of the future.

RAG to Riches: Improving RAG with a Logical Data Fabric

2024-09-18
Face To Face

Generative AI (GenAI) has garnered significant attention for its potential to revolutionize various industries, from creative arts to data analysis. However, organizations are realizing that implementing GenAI is not as easy as just asking ChatGPT a few questions. Providing the most relevant and accurate contextual data to the LLM is critical if organizations are going to realize the full benefits of GenAI. Retrieval-Augmented Generation, or RAG, is a well-understood and effective technique for augmenting the original user prompt with additional, contextual data. However, many examples of RAG grossly oversimplify the reality of enterprise data ecosystems. In this session, we will examine how a Logical Data Fabric can make RAG a practical reality in large, complex organizations and deliver AI-ready data that makes RAG effective and accurate.
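The core RAG mechanic the abstract refers to is simple to show in miniature: retrieve the most relevant snippets for a user prompt and prepend them as context. The corpus and word-overlap "retriever" below are toy stand-ins for the logical data fabric and vector search a real deployment would use.

```python
# Minimal RAG sketch: rank snippets by overlap with the query and
# prepend the best ones to the prompt. In production the corpus would
# come from a data fabric and the ranking from a vector index.

CORPUS = [
    "Q3 revenue grew 12% year over year.",
    "The churn rate fell to 4% after the loyalty programme launch.",
    "Headcount increased by 40 engineers in EMEA.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank corpus snippets by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(CORPUS,
                    key=lambda doc: len(q & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def augmented_prompt(query: str) -> str:
    """Build the prompt an LLM would actually receive."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(augmented_prompt("How did revenue grow?"))
```

The session's argument is that the hard part in an enterprise is not this loop but sourcing accurate, governed context for `retrieve` to draw on.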

Delivering on the Promise of AI: Increasing Accuracy With a Semantic Layer

2024-09-18
Face To Face

Most organizations are using GenAI in hopes of gaining easy access to information needed by their users to enable greater productivity. At the same time, it's also well-documented that LLMs can deliver inaccurate information. To be of value, users need to be able to trust that the answers presented to them are correct.

This is a key issue at the center of AI adoption and its applications in the real world. For example, many organizations are beginning to develop, test, and implement chatbots for internal and external use to provide answers to questions by using natural language. When those chatbots do not produce the right answers, all the time and effort put into creating them ends up wasted.

Join David Jayatillake, Cube's VP of AI, for an in-depth discussion on the current state of GenAI and the rise of the semantic layer.

In this talk, you will learn about:

The current state of GenAI

The rise of the semantic layer in the modern data stack with AI

The significant differences between an AI chatbot with and without a semantic layer 
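The difference a semantic layer makes can be illustrated in miniature: instead of letting a chatbot improvise SQL, the question is resolved against governed metric definitions, and the system refuses when no definition exists. The metric names and SQL below are invented for illustration; this is not Cube's API.

```python
# Toy illustration of a semantic layer: map a natural-language metric
# request to one governed SQL definition instead of letting an LLM
# improvise a query. Metrics and SQL are illustrative only.

SEMANTIC_LAYER = {
    "active users": ("SELECT COUNT(DISTINCT user_id) FROM events "
                     "WHERE event_date >= CURRENT_DATE - 30"),
    "revenue": "SELECT SUM(amount) FROM orders WHERE status = 'paid'",
}

def resolve_metric(question: str) -> str:
    """Return governed SQL if the question names a known metric."""
    for metric, sql in SEMANTIC_LAYER.items():
        if metric in question.lower():
            return sql
    # Refusing beats hallucinating an unvetted query.
    raise KeyError("no governed definition for this question")

print(resolve_metric("What is our revenue this month?"))
```

A chatbot without this layer must guess table names and business rules from the schema alone, which is where the well-documented inaccuracies creep in.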

Gen-AI to Gen-BI: Having a Sophisticated Dialogue With Your Data

2024-09-18
Face To Face

In this session, we will demo and discuss the four central pillars of an enterprise strategy to realize true "Gen-BI" - the infusion of Gen-AI and LLMs into your business and decision intelligence capabilities.

        • Direct operations on any data source, accessible to any user 

        • Sophisticated request handling through the simplicity of conversational speech 

        • The 'Multi-LLM' strategy - to bring the right model for the right data set

        • Security so you can tap into Gen-AI without concern

Streaming Text Embeddings for Retrieval Augmented Generation (RAG)

2024-09-18
Face To Face

A 30-minute demo of how to use Redpanda Connect (powered by Benthos) to generate vector embeddings on streaming text. This session will walk through the architecture and configuration used to seamlessly integrate Redpanda Connect with LangChain, OpenAI, and MongoDB Atlas to build a complete Retrieval Augmented Generation data pipeline.
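The pipeline shape being demoed is: consume text records from a stream, compute an embedding for each, and upsert into a vector store. The sketch below keeps only that shape; the hash-based "embedding" is a placeholder for a real model call, and the plain list and dict stand in for Redpanda topics and MongoDB Atlas collections.

```python
# Sketch of the stream-embed-store pipeline shape only. The toy
# hash-based embedding stands in for a real embedding model, and the
# list/dict stand in for a streaming topic and a vector store.

import hashlib

def embed(text: str, dims: int = 4) -> list[float]:
    """Deterministic toy embedding derived from a hash digest."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:dims]]

def run_pipeline(stream, store):
    """Consume records, embed their text, upsert into the store."""
    for record in stream:  # in production: a Kafka-style consumer loop
        store[record["id"]] = {"text": record["text"],
                               "vector": embed(record["text"])}

store = {}
run_pipeline([{"id": "1", "text": "hello world"}], store)
print(store["1"]["vector"])
```

In the actual demo this glue is declarative: Redpanda Connect's configuration wires the input, embedding processor, and output together without hand-written consumer code.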

Do You Need a Multi-Model GenAI Strategy?

2024-09-18
Face To Face

Why Attend? You'll walk away with a comprehensive understanding of various GenAI models, their practical applications, and the strategies to harness their full potential responsibly.

• Discover the functionality of key GenAI models, including GPT, Gemini, and open-source alternatives. 

• Learn how AI assistants, business process automation, co-pilots, and autonomous agents can work for your business. 

• Understand what each model excels at and where improvements are needed. We’ll provide a clear, comparative analysis to help you understand their capabilities and limitations. 

• Learn how we can ensure we are following responsible AI principles. 

• Explore the services and techniques that enhance the capability of GenAI. 

• Learn how to get AI to work for you and identify the skills your workforce needs going forward to excel in the era of AI.

Modularity and Composability for AI Systems: 3 ML Pipelines and the Truth

2024-09-18
Face To Face

In this talk, we will examine how to decompose AI systems into more manageable parts that then can be independently developed and tested, and then easily be composed together into an AI system. We will present a unified architecture for building batch, real-time, and LLM AI systems around 3 classes of machine learning pipelines: feature pipelines, training pipelines, and inference pipelines.

Just like you can make great music with 3 chords, we will show dozens of examples of great AI systems built with our 3 ML pipelines (and the truth!).

We will show how our 3-pipeline architecture helps align teams and accelerates time to value and quality.
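The decomposition the talk describes can be sketched as three small functions: a feature pipeline turns raw data into features, a training pipeline fits a model on them, and an inference pipeline reuses the same feature logic at serving time. The one-parameter "model" below is a toy invented for illustration, not the speakers' implementation.

```python
# Sketch of the 3-pipeline decomposition: feature, training, and
# inference pipelines that can be developed and tested independently.
# The mean-threshold "model" is a toy stand-in.

def feature_pipeline(raw: list[dict]) -> list[tuple[float, int]]:
    """Raw events -> (feature, label) pairs."""
    return [(r["amount"] / 100, r["label"]) for r in raw]

def training_pipeline(features: list[tuple[float, int]]) -> float:
    """Fit a one-parameter model: the mean feature of positive rows."""
    positives = [x for x, y in features if y == 1]
    return sum(positives) / len(positives)

def inference_pipeline(model: float, raw_event: dict) -> int:
    """Reuse the same feature logic at serving time, then apply the model."""
    feature = raw_event["amount"] / 100
    return int(feature >= model)

raw = [{"amount": 250, "label": 1}, {"amount": 50, "label": 0},
       {"amount": 350, "label": 1}]
model = training_pipeline(feature_pipeline(raw))
print(inference_pipeline(model, {"amount": 400}))  # -> 1
```

Because the inference pipeline shares the feature logic rather than reimplementing it, training/serving skew is eliminated by construction, which is the alignment and quality benefit the talk claims.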

Bringing Enterprise GenAI to Life: Harnessing the Power of Dataiku’s LLM Mesh

2024-09-18
Face To Face

Looking to deliver safe, scalable, cost-effective, and future-proof LLM applications aligned with your operations and governance principles? Enter: The LLM Mesh. In this session, we’ll explore how Dataiku equips IT and Data teams to build secure, enterprise-ready GenAI applications, ensuring maximum control while delivering the high performance your business demands.