talk-data.com

Topic: embeddings (18 tagged)

Activity Trend: peak of 4 activities per quarter, 2020-Q1 to 2026-Q1

Activities: 18 activities · Newest first

The growth of digital platforms has led to an explosion of data. Our aim at Orfium is to process this ever-increasing volume of information effectively. We achieve this by building and deploying cloud services designed to accurately track how music is used and how music rights are managed, and to ensure that artists and rights holders are paid correctly. In this talk we are going to explore the techniques that bring our AI models to production, how we build and maintain our services, and how we overcome cost and scale barriers. Attendees can expect a brief overview of our services, the tools we use to bring our models to production, the cloud architecture that makes all this possible, and a deeper dive into how technologies like vectorDBs enabled us to reach the scale we have today without breaking the bank.
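
As a rough illustration of the vector-search pattern the talk describes, here is a minimal sketch of nearest-neighbour lookup over track embeddings; FAISS and the dimensions are stand-ins, since the talk does not name a specific vector database.

```python
# Minimal sketch: nearest-neighbour lookup over track embeddings.
# FAISS is used here as a stand-in; the talk does not name a specific vector DB.
import numpy as np
import faiss

dim = 256                                    # embedding dimensionality (illustrative)
catalog = np.random.rand(100_000, dim).astype("float32")
faiss.normalize_L2(catalog)                  # normalise so inner product = cosine similarity

index = faiss.IndexFlatIP(dim)               # exact inner-product index
index.add(catalog)

query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)         # top-5 candidate matches
print(ids[0], scores[0])
```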

In the era of intelligent applications, the demand for scalable, real-time AI processing has never been higher. This session explores how leveraging an event-driven database like SurrealDB can transform your AI architecture. We will demonstrate how SurrealDB's native eventing features allow it to act as the central nervous system for your application, seamlessly integrating data with AI pipelines. Discover how to build resilient, loosely coupled systems that can trigger AI workflows—such as embedding generation, content summarization, and sentiment analysis—in real-time, directly in response to data changes. Leave this talk with a clear understanding of how to decouple your AI components and build smarter, more reactive applications.
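
The abstract describes triggering AI workflows such as embedding generation in response to data changes; below is a minimal sketch of the handler side of such a pipeline, with the SurrealDB event wiring assumed rather than shown and the OpenAI embeddings API standing in for whatever model the pipeline actually calls.

```python
# Minimal sketch of the AI side of an event-driven pipeline: when a record
# changes, generate an embedding for it. The event wiring (e.g. SurrealDB's
# DEFINE EVENT / live queries) is assumed and not shown; the OpenAI embeddings
# API is a stand-in for the model being invoked.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def on_record_changed(record: dict) -> dict:
    """Hypothetical callback invoked for each changed document."""
    text = record.get("content", "")
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    record["embedding"] = resp.data[0].embedding
    # ...write the enriched record back to the database here...
    return record
```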

Cognee organizes your data into AI memory. It builds structured AI memory by transforming raw data into a modular, queryable knowledge graph powered by embeddings. Like any complex system, it depends on many hyperparameters that shape performance in subtle ways. This talk shows how systematic tuning can improve AI memory, what current evaluation methods reveal (and miss), and why future progress will depend as much on better evaluation and optimization as on new architectures.
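
To make the tuning idea concrete, here is an illustrative hyperparameter sweep over chunk size and retrieval depth against a toy Q&A set; the build_memory and answer helpers are simple stand-ins, not Cognee's actual API.

```python
# Illustrative hyperparameter sweep for a retrieval / AI-memory pipeline.
# build_memory() and answer() are toy stand-ins, not Cognee's API; the point
# is the systematic evaluation loop, not any specific library.
from itertools import product

DOC = "Cognee builds structured AI memory from raw data. " * 50   # toy corpus
QA_PAIRS = [("What does Cognee build?", "AI memory")]              # toy eval set

def build_memory(chunk_size: int) -> list[str]:
    # Stand-in: fixed-size character chunks instead of a real knowledge graph.
    return [DOC[i:i + chunk_size] for i in range(0, len(DOC), chunk_size)]

def answer(memory: list[str], question: str, top_k: int) -> str:
    # Stand-in retriever: rank chunks by word overlap with the question.
    q = set(question.lower().split())
    ranked = sorted(memory, key=lambda c: len(q & set(c.lower().split())), reverse=True)
    return " ".join(ranked[:top_k])

def evaluate(chunk_size: int, top_k: int) -> float:
    memory = build_memory(chunk_size)
    hits = sum(expected.lower() in answer(memory, q, top_k).lower()
               for q, expected in QA_PAIRS)
    return hits / len(QA_PAIRS)

scores = {(c, k): evaluate(c, k) for c, k in product([256, 512, 1024], [3, 5, 10])}
print(max(scores, key=scores.get), scores)
```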

In this hands-on workshop, you will learn how Knowledge Graphs and Retrieval Augmented Generation (RAG) can help GenAI projects avoid hallucination and provide access to reliable data. Topics include LLMs and hallucination, integrating knowledge graphs, GraphRAG, vector indexes and embeddings, querying graphs with natural language, and using Python and OpenAI to create GraphRAG retrievers and GenAI applications.
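
One possible shape for such a GraphRAG retriever is sketched below, assuming a Neo4j 5.x vector index and the OpenAI embeddings API; the index name, MENTIONS relationship, and Entity label are illustrative rather than prescribed by the workshop.

```python
# Sketch of a GraphRAG-style retriever: embed the question, find the nearest
# chunks via a vector index, then expand to connected graph entities.
# Assumes a Neo4j 5.x vector index named "chunkEmbeddings" (illustrative).
from neo4j import GraphDatabase
from openai import OpenAI

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
oai = OpenAI()

def graphrag_retrieve(question: str, k: int = 5) -> list[dict]:
    q_vec = oai.embeddings.create(model="text-embedding-3-small",
                                  input=question).data[0].embedding
    cypher = """
    CALL db.index.vector.queryNodes('chunkEmbeddings', $k, $vec)
    YIELD node, score
    MATCH (node)-[:MENTIONS]->(e:Entity)          // expand into graph context
    RETURN node.text AS chunk, collect(e.name) AS entities, score
    """
    with driver.session() as session:
        return [r.data() for r in session.run(cypher, k=k, vec=q_vec)]
```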

Unlock the power of AI agents—even if you’re just starting out. In this hands-on, beginner-friendly workshop, you'll go from understanding how Large Language Models (LLMs) work to building a real AI agent using Python, LangChain, and LangGraph. Live Demo: Your First AI Agent — follow along as we build an AI agent that retrieves, reasons, and responds using LangChain and LangGraph.
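
A minimal version of such an agent might look like the sketch below, using LangGraph's prebuilt ReAct agent with a single toy tool; the tool and model name are illustrative and may differ from the workshop's own demo.

```python
# Minimal agent sketch with LangChain + LangGraph's prebuilt ReAct agent.
# The word_count tool and model name are illustrative choices.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

llm = ChatOpenAI(model="gpt-4o-mini")
agent = create_react_agent(llm, [word_count])

result = agent.invoke({"messages": [("user", "How many words are in 'hello brave new world'?")]})
print(result["messages"][-1].content)
```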

This project aims to develop an intelligent system using computer vision to identify individual jaguars by their unique facial and body patterns. A Vision Transformer (ViT) and advanced self-attention models will be used for segmentation and classification, with fine-tuned embeddings to enhance accuracy. The system will aid zoologists in tracking jaguars, especially after natural disasters, and will be deployed as an API for practical use.
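
The re-identification idea can be sketched as extracting ViT embeddings for two images and comparing them, as below; a generic pretrained ViT from Hugging Face stands in for the project's fine-tuned model.

```python
# Sketch of ViT-based re-identification: embed two images and compare them.
# A generic pretrained ViT is used here, not the project's fine-tuned model.
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")

def embed(path: str) -> torch.Tensor:
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[:, 0]        # [CLS] token as the image embedding

sim = torch.nn.functional.cosine_similarity(embed("jaguar_a.jpg"), embed("jaguar_b.jpg"))
print(f"similarity: {sim.item():.3f}")            # high similarity suggests the same individual
```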

Few technology advancements have been as disruptive as OpenAI's ChatGPT: thanks to its ability to handle unstructured data and the ease with which its behaviour can be customized, we can now bring our applications to a level never seen before.

This talk looks at Azure OpenAI from the point of view of an application developer and shows how we can leverage all its horsepower in our ASP.NET Core and Blazor solutions to provide functionality that was simply unthinkable just a few months ago.

During this talk we'll demonstrate some practical examples of how to do that: as a first step, we'll familiarise ourselves with GPT's deployment model and completion API, and then shift our usage from a simple chat to something closer to a programmable AI model. We'll show how, simply by engineering the requests, we can bend its behaviour to accomplish a whole range of different tasks, and how using functions allows us to integrate it with the rest of the services our application exposes.
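
The function-calling pattern described here can be sketched in a few lines; the talk demonstrates it in ASP.NET Core, but the OpenAI Python SDK is used below as a stand-in, with an illustrative invoice-lookup function.

```python
# Sketch of the function-calling pattern: the model decides when to call a
# function our application exposes. The get_invoice_total tool is illustrative.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_invoice_total",
        "description": "Return the total amount of an invoice by its id.",
        "parameters": {
            "type": "object",
            "properties": {"invoice_id": {"type": "string"}},
            "required": ["invoice_id"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is the total of invoice INV-42?"}],
    tools=tools,
)
call = resp.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
# The application would now run its own service and send the result back to the model.
```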

As a last step, we'll tackle integration with our data. First, we'll learn how to use embeddings and vector search over our datasets, and what their benefits are. Then, we'll combine GPT models with Azure Synapse to perform data analysis over big data files.

Hands-on workshop on building a search engine from scratch, focusing on text search and vector search. Topics include in-memory text search, tokenization and preprocessing, inverted index construction, embeddings, converting text to vectors, cosine similarity, and strategies to combine text and vector search. The session includes practical coding in a Jupyter Notebook using Python to implement both text and vector search approaches.
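
A toy version of the workshop's two building blocks, and one naive way to combine them, might look like this (word-count vectors stand in for real embeddings):

```python
# Toy version of the workshop's building blocks: an inverted index for text
# search and cosine similarity for vector search, combined with a weighted sum.
from collections import defaultdict
import math

docs = ["cats purr and sleep", "dogs bark loudly", "cats and dogs can be friends"]

# --- text search: inverted index ---
index = defaultdict(set)
for doc_id, doc in enumerate(docs):
    for token in doc.lower().split():          # tokenization/preprocessing kept trivial
        index[token].add(doc_id)

def text_score(query: str, doc_id: int) -> float:
    tokens = query.lower().split()
    return sum(doc_id in index[t] for t in tokens) / len(tokens)

# --- vector search: cosine similarity over toy word-count "embeddings" ---
vocab = sorted({t for d in docs for t in d.lower().split()})

def to_vector(text: str) -> list[float]:
    tokens = text.lower().split()
    return [tokens.count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))) or 1.0)

def hybrid_search(query: str, alpha: float = 0.5):
    qv = to_vector(query)
    scored = [(alpha * text_score(query, i) + (1 - alpha) * cosine(qv, to_vector(d)), d)
              for i, d in enumerate(docs)]
    return sorted(scored, reverse=True)

print(hybrid_search("sleepy cats"))
```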

In this demo-intensive session, Alan will show you how to use the Azure OpenAI Service to build natural language solutions from scratch. He will explain basic concepts of natural language processing, such as tokens, embeddings, and transformers. He will demonstrate how to use the Azure OpenAI Service portal to create and deploy natural language models using pre-trained or custom data. He will also show you how to use the Azure OpenAI Service SDK to interact with the models programmatically and integrate them with other Azure services.
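
A minimal sketch of that programmatic interaction might look like the following, using the Azure OpenAI client from the openai Python package; the endpoint, API version, and deployment name are placeholders for your own resource.

```python
# Minimal sketch of calling a deployed Azure OpenAI model programmatically.
# Endpoint, API version and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-02-01",
)

completion = client.chat.completions.create(
    model="<chat-deployment>",                 # the deployment name, not the base model name
    messages=[
        {"role": "system", "content": "You answer questions about Azure services."},
        {"role": "user", "content": "What is a token in natural language processing?"},
    ],
)
print(completion.choices[0].message.content)
```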

Learn how to use Google's PaLM APIs for text generation, chat, and embeddings. Through this workshop, users will be able to go through an introduction to each of these APIs and understand what types of machine learning tasks they can be used for. This workshop will also cover an introductory use case for the embeddings and text generation APIs: Document Search with Q & A. Links to code will be provided.
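
The document-search Q&A use case can be sketched as ranking documents by embedding similarity to the question and then asking the model to answer from the best match. Note that the PaLM endpoints the workshop used have since been superseded, so the current google.generativeai calls are used below as stand-ins.

```python
# Sketch of the document-search Q&A use case. The original PaLM endpoints have
# been superseded; embed_content and GenerativeModel stand in for them here.
import numpy as np
import google.generativeai as genai

genai.configure(api_key="<your-api-key>")

docs = [
    "The embeddings API turns text into vectors for semantic search.",
    "The chat API keeps multi-turn conversation context for you.",
]

def embed(text: str) -> np.ndarray:
    return np.array(genai.embed_content(model="models/text-embedding-004",
                                        content=text)["embedding"])

question = "How can I search documents by meaning?"
doc_vecs = np.stack([embed(d) for d in docs])
q_vec = embed(question)
sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
best = docs[int(np.argmax(sims))]              # most relevant document

model = genai.GenerativeModel("gemini-1.5-flash")
answer = model.generate_content(
    f"Answer using only this document:\n{best}\n\nQuestion: {question}")
print(answer.text)
```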