talk-data.com

Topic: llms · 102 tagged

Activity Trend: 19 peak/qtr (2020-Q1 to 2026-Q1)

Activities (102) · Newest first

This talk covers Grammarly's approach to combining third-party LLM APIs with in-house LLMs, the role of LLMs in Grammarly's product offerings, an overview of the tools and processes in our ML infrastructure, and how we address challenges such as access, cost control, and load testing of LLMs, drawing on our experience optimizing and serving LLMs.

Workshop led by Alexey Grigorev on building a chatbot using large language models with Python. Topics include data extraction from FAQs, knowledge base indexing, chatbot setup in a Jupyter Notebook, interfacing with LLMs, and implementing Retrieval-Augmented Generation (RAG).
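A minimal sketch of the RAG loop the workshop describes, assuming the FAQ has already been extracted into a list of text chunks; the TF-IDF retrieval and the `openai` client call below are illustrative stand-ins for whatever indexing and LLM interface the workshop actually uses.

```python
# Minimal RAG sketch: retrieve the most relevant FAQ chunks, then ask an LLM.
# Assumes `faq_chunks` was already extracted from the FAQ documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from openai import OpenAI  # stand-in for any LLM API

faq_chunks = [
    "You can enroll in the course until the start date.",
    "Homework is submitted through the course platform.",
    "Certificates are issued after completing all projects.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(faq_chunks)

def answer(question: str, top_k: int = 2) -> str:
    # Retrieve: rank FAQ chunks by cosine similarity to the question.
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix)[0]
    context = "\n".join(faq_chunks[i] for i in scores.argsort()[::-1][:top_k])

    # Generate: pass the retrieved context plus the question to the LLM.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(answer("Can I still join the course?"))
```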

Copilot for Microsoft 365 has the power to positively transform the modern workplace. Using LLMs and integrating your data and Microsoft 365 Apps, Copilot for M365 supports employees in being more productive, creative and collaborative. Pete will share some of our experiences working with a number of organisations on launching Copilot for M365, including some of the benefits and challenges it presents.

Running models locally on the CPU, and possibly a GPU, means we can experiment with the latest quantised models on real client data without anything leaving the machine. We can explore text question answering and image analysis, and call these tools via a Python API for rapid PoC experimentation. This quickly exposes the ways that LLMs go weird, which may help us avoid the kinds of embarrassing mistakes seen in some early LLM deployments!
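As a hedged sketch of that local-experimentation loop, assuming the llama-cpp-python bindings and a quantised GGUF model file already downloaded to disk; the model path and prompt are placeholders, not the session's actual setup.

```python
# Run a quantised model entirely on the local machine via llama-cpp-python.
# Assumes a GGUF model file has already been downloaded; the path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,       # context window
    n_gpu_layers=0,   # 0 = CPU only; raise if a GPU is available
)

# Question answering over client text that never leaves the machine.
client_text = "Invoice 4812 was paid on 3 March, two weeks after the due date."
out = llm(
    f"Based on this text, was the invoice paid late?\n\n{client_text}\n\nAnswer:",
    max_tokens=64,
)
print(out["choices"][0]["text"].strip())
```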

ChatGPT is awesome, but developing with its API comes at a cost. Fortunately, there are lower-cost alternatives such as Google Gemini, which, together with open-source tools like Streamlit and Python, can fetch prompt results using an API key. In this presentation, I'll explore how to create a lightweight, self-service, end-to-end LLM application using prompt engineering and fine-tuning based on user requests. Additionally, I'll demonstrate how to build a food suggestion application based on ingredients or food names.
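A minimal sketch of the kind of self-service app described, assuming the google-generativeai and streamlit packages and a GEMINI_API_KEY environment variable; the model name and prompt template are illustrative, not the presenter's code.

```python
# Lightweight self-service LLM app: Streamlit front end, Gemini behind an API key.
# Run with: streamlit run app.py
import os
import streamlit as st
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])  # assumed env variable
model = genai.GenerativeModel("gemini-1.5-flash")       # illustrative model name

st.title("Food suggester")
ingredients = st.text_input("Ingredients or a dish name")

if st.button("Suggest") and ingredients:
    # Prompt engineering happens here: the template shapes the model's answer.
    prompt = (
        "Suggest three dishes I could cook, with one-line instructions each, "
        f"using: {ingredients}"
    )
    response = model.generate_content(prompt)
    st.write(response.text)
```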

In this talk we’ll introduce the core concepts for building a “copilot” application on Azure AI, from prompt engineering to LLM Ops, using the Contoso Chat application sample as a reference. We’ll also explore the Azure AI Studio (preview) platform from a code-first perspective to understand how you can streamline your development from model exploration to endpoint deployment with a unified platform and workflow.

Postel's Law states that we should be liberal in what we accept and conservative in what we send. When working with code generated from LLMs, embracing this principle is even more important. Join us as we explore the ways that Ruby's flexibility makes this possible, why I think Ruby is a sleeping giant in the future of LLM-generated code, and the key to unlocking generative AI's true power for software development.

Hands-on workshop focusing on using Responsible AI tools to identify and mitigate issues that can negatively affect individuals or society. Participants will learn how to debug and mitigate ML model issues using error analysis, data analysis, model explainability, model performance and fairness assessment. Uses LLMs and traditional ML models. Prerequisites: Basic understanding of Python.

Hands-on learning on building and evaluating generative AI solutions with LLMs responsibly at scale. Learn to create visual executable flows linking LLMs, vector embeddings, prompts, and Python tools; evaluate performance metrics and responsible AI issues such as groundedness, hallucinations, and relevance. Prerequisites: Basic understanding of Python.

A fireside chat between Hugo and Simon Willison exploring LLMs, GenAI, and democratizing data tools. They discuss what LLMs are capable of, the evolving ecosystem, running LLMs locally, and how Unix philosophy, Python, and LLMs can be combined into a productivity toolkit. Includes a live coding intro to Simon’s LLM CLI utility and Python library.
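For readers who want to try it before watching, a minimal sketch of the `llm` Python library the session introduces, assuming an OpenAI key has already been configured (e.g. via `llm keys set openai`); the model alias is illustrative.

```python
# Simon Willison's `llm` library: one Python interface over many model back ends.
# Assumes an API key has already been configured for the chosen provider.
import llm

model = llm.get_model("gpt-4o-mini")   # illustrative model alias
response = model.prompt("Summarise the Unix philosophy in one sentence.")
print(response.text())
```

The companion CLI composes with pipes in the Unix style the chat touches on, roughly `cat notes.txt | llm "summarise this"`.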

Wanna be able to read a paper from a domain you know nothing about? Use our tool to get explanations of every term used in the paper. A concept you don’t know is used in the explanation? Go deeper again until you reach the explanation of, say, addition, or any other simple concept that a 5-year-old could understand (if they know how to read, of course!). Built using ChatGPT and other sources.

Synopsis: Embark on an enlightening journey with Noble as he tackles the challenges of integrating Large Language Models (LLMs) into enterprise environments. Understand the inherent unreliability of these models and explore innovative solutions, ranging from vector databases to prompt chaining, that aim to enhance the trustworthiness of LLMs in crucial applications.
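As one hedged illustration of the prompt-chaining idea mentioned above (not Noble's actual implementation): a first call drafts an answer from a source document and a second call checks it against that source before the answer is trusted.

```python
# Prompt-chaining sketch: draft an answer, then have a second call verify it
# against the source text before accepting it. Illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def trustworthy_answer(question: str, source: str) -> str:
    # Step 1: draft an answer grounded in the source.
    draft = ask(
        f"Using only this source, answer the question.\n\nSource:\n{source}\n\nQuestion: {question}"
    )
    # Step 2: ask the model to verify the draft against the same source.
    verdict = ask(
        "Does the answer below follow from the source? Reply SUPPORTED or UNSUPPORTED.\n\n"
        f"Source:\n{source}\n\nAnswer:\n{draft}"
    )
    if "UNSUPPORTED" in verdict.upper():
        return "Could not verify an answer from the source."
    return draft
```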

Gen AI, LLMs, AI assistants and intelligent agents are powering next-generation customer experiences. But there is no AI without data. This session covers data platforms, governance, and cutting-edge vector search to enable enterprise AI.

We examine the capabilities and challenges of using Large Language Models (LLMs) in task-oriented dialogue settings, particularly situated dynamic Minecraft-like environments. Our work focuses on two interconnected aspects: using LLMs as Minecraft agents in builder and architect roles, and their ability to ask clarification questions in asynchronous instruction-giver/instruction-follower settings. To achieve this we prepared a new unified corpus that combines annotations for reference, ambiguity, and discourse structure, enabling systematic evaluation of clarification behavior. Through platform-based interaction and comparison with human data, we find notable differences: humans rarely ask clarification questions for referential ambiguity but often do for task uncertainty, while LLMs show the opposite tendency. We further explore whether LLMs’ question-asking behavior is influenced by their reasoning capabilities, observing that explicit reasoning increases both the frequency and relevance of clarification questions. Our findings highlight both the promise and current limitations of LLMs in handling ambiguity and improving interactive task performance.