This session presents the paper 'Talking to Patient Records,' which describes an advanced Retrieval-Augmented Generation (RAG) chatbot for healthcare information retrieval. The system integrates natural language processing with domain-specific knowledge bases so that clinicians, researchers, and administrators can query patient records conversationally. By combining large language models with RAG techniques, the chatbot delivers accurate, context-aware, and secure responses, reducing the time required to locate critical patient information. The study outlines the system’s architecture, implementation, and potential applications in clinical decision support, patient engagement, and healthcare data management.
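As a rough illustration of the retrieve-then-generate pattern such a system rests on, here is a minimal sketch; the records, the bag-of-words retriever (standing in for a real embedding index), and the prompt wording are all invented, and the LLM call itself is omitted:

```python
# Minimal RAG sketch (illustrative only): retrieve the most relevant
# record snippets, then ground the model's answer in them.
import math
from collections import Counter

records = [
    "2023-04-02 Discharge note: patient started on metformin 500mg.",
    "2023-05-10 Lab result: HbA1c 7.2%.",
    "2023-06-01 Clinic visit: blood pressure 130/85, no new complaints.",
]

def bow(text):
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(query, k=2):
    q = bow(query)
    return sorted(records, key=lambda r: cosine(q, bow(r)), reverse=True)[:k]

def build_prompt(query):
    context = "\n".join(retrieve(query))
    return (f"Answer using ONLY the patient record excerpts below.\n"
            f"Excerpts:\n{context}\n\nQuestion: {query}")

# The assembled prompt would then be sent to an LLM; the call is omitted here.
print(build_prompt("What diabetes medication is the patient on?"))
```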
talk-data.com
Topic: Large Language Models (LLMs) (20 tagged sessions)
In the news, GenAI is usually associated with large language models (LLMs) or with image generation tools: essentially, machines that learn from text or images and generate text or images. But in reality, these models can learn from many other types of data. In particular, they can learn from time series of asset returns, perhaps the data type most relevant to asset managers. In this talk, and in our accompanying book (Generative AI for Trading and Asset Management), we highlight both the practical applications and the fundamental principles of GenAI, with a special focus on how these technologies apply to trading and asset management.
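As a deliberately simple illustration of the idea (not an example from the book), a generative model of returns can be as basic as fitting summary statistics to a historical series and sampling synthetic paths from them:

```python
# Toy illustration: "learn" from a return series, then "generate" new ones.
import numpy as np

rng = np.random.default_rng(0)
historical = rng.normal(0.0004, 0.01, size=1000)   # stand-in for real daily returns
mu, sigma = historical.mean(), historical.std()    # fit the model to the data
synthetic_year = rng.normal(mu, sigma, size=250)   # sample a simulated year
print(synthetic_year[:5])
```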
In this session, the PyMC Labs team explores how Large Language Models (LLMs) and their Semantic Similarity Rating (SSR) methodology can replicate human purchase intent with up to 90% of human test-retest reliability. Building on this research, we’ll introduce the Synthetic Consumers Platform—a breakthrough in simulated customer insights that generates reliable, human-like survey data at scale and in a fraction of the time. You’ll learn how synthetic consumers and SSR can transform customer research by enabling faster iteration and significantly reducing costs, and how an AI-driven insights platform accelerates testing, validation, and decision-making in marketing research and creative testing.
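A rough sketch of how an SSR-style mapping could work; the anchor wording and the token-overlap similarity (a stand-in for embedding similarity) are illustrative inventions, not PyMC Labs' implementation:

```python
# SSR-style sketch: score the LLM's free-text answer against Likert
# anchor statements and return the scale point it most resembles.
likert_anchors = {
    1: "I would definitely not buy this product.",
    2: "I would probably not buy this product.",
    3: "I might or might not buy this product.",
    4: "I would probably buy this product.",
    5: "I would definitely buy this product.",
}

def tokens(text):
    return set(text.lower().replace(".", "").replace(",", "").split())

def similarity(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

def ssr_score(llm_answer):
    return max(likert_anchors, key=lambda k: similarity(llm_answer, likert_anchors[k]))

print(ssr_score("Honestly, I would probably buy this product next month."))  # -> 4
```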
In this session, we’ll show how to structure and deliver the right context to large language models (LLMs) so they can actually reason through tasks, not just retrieve answers. We’ll cover practical ways to provide context across prompts and tools, using a Model Context Repository to make your AI apps much smarter.
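A minimal sketch of the underlying idea, assuming a hypothetical in-memory repository of reusable context blocks assembled per task (the talk's Model Context Repository may work differently, and all names here are invented):

```python
# Keep reusable context blocks in one place; assemble only the relevant
# ones into each prompt so the model gets what it needs to reason.
context_repo = {
    "schema":   "Orders table: order_id, customer_id, total, created_at.",
    "policy":   "Refunds are allowed within 30 days of purchase.",
    "glossary": "'Churned' means no purchase in the last 90 days.",
}

def assemble_prompt(task, keys):
    blocks = "\n".join(f"[{k}] {context_repo[k]}" for k in keys)
    return f"Context:\n{blocks}\n\nTask: {task}"

print(assemble_prompt("Explain whether order 1042 qualifies for a refund.",
                      ["schema", "policy"]))
```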
Overview of how LLMs work in medical workflows, how AI agents extend LLMs, and real-world applications in healthcare such as clinical note summarization, literature review, patient communication, and diagnosis support.
Live demo showing automated testing of large language models. Addresses non-determinism in ML systems and demonstrates how a second LLM can act as a judge. Also explores Retrieval-Augmented Generation (RAG) for querying documents and guiding tests.
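The judge pattern can be sketched roughly as follows; `call_llm` is a stubbed placeholder for a real model call, and the rubric wording is invented for illustration:

```python
# LLM-as-judge sketch: a second model grades the first model's answer
# against a rubric, sidestepping brittle exact-match assertions on
# non-deterministic output.
def call_llm(prompt):
    # Placeholder for a real model call; returns a canned verdict here.
    return "PASS: the answer names the correct capital and cites the source."

def judge(question, answer, rubric):
    prompt = (f"You are a strict test judge.\nQuestion: {question}\n"
              f"Candidate answer: {answer}\nRubric: {rubric}\n"
              f"Reply with PASS or FAIL and one reason.")
    return call_llm(prompt).startswith("PASS")

assert judge("What is the capital of France?",
             "Paris, according to the retrieved document.",
             "Must say Paris and reference the source document.")
```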
Put your prompt-writing skills to the test in a friendly, fast-paced contest. You’ll work with a ring-fenced large language model (LLM) and a shared dataset, racing to surface the right answers as the questions get tougher. Think of it as a pub quiz for data folk – but the questions are answered with code-like prompts. Quick briefing – we’ll show you the dataset, the rules and a few prompt-engineering tips. Answer the questions – each round ups the difficulty, challenging you to refine, chain or re-use prompts in inventive ways. Leaderboard & prizes – points for accuracy and ingenuity. Top spot takes home bragging rights and a tidy prize.
Abstract: We will navigate the alignment challenges and safety considerations of LLMs, addressing both their limitations and capabilities, with a particular focus on instruction prefix tuning techniques and their theoretical limitations with respect to alignment. Additionally, I will discuss fairness across languages in common tokenizers used in LLMs. Finally, I will address safety considerations for agentic systems, illustrating methods to compromise their safety by exploiting seemingly minor changes, such as altering the desktop background to trigger a chain of sequenced harmful actions. I will also explore the transferability of these vulnerabilities across different agents.
In this session you will discover how to create and fine-tune prompts for a diverse range of AI models on AWS, leveraging fundamental principles of prompt engineering. Whether you're a seasoned AI enthusiast or just stepping into the realm of artificial intelligence, this session promises to equip you with fundamental concepts and practical techniques to enhance your prompt engineering skills. What you will learn: Foundation models and large language models; Key concepts of prompt engineering; Basic prompt techniques; Zero-shot prompting; Few-shot prompting; Chain-of-thought prompting.
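Minimal illustrations of the three prompt styles listed above (the wording is invented for this sketch, not taken from the session):

```python
# Zero-shot: ask directly, with no examples.
zero_shot = "Classify the sentiment of: 'The delivery was late again.'"

# Few-shot: show a few labelled examples before the real input.
few_shot = """Classify the sentiment.
Review: 'Great battery life.' -> positive
Review: 'Screen cracked in a week.' -> negative
Review: 'The delivery was late again.' ->"""

# Chain-of-thought: invite step-by-step reasoning before the answer.
chain_of_thought = (
    "A store sells pens at $2 each. Dana buys 3 pens and pays with $10. "
    "How much change does she get? Let's think step by step."
)
```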
Discover HybridAGI, a groundbreaking Cypher-based neuro-symbolic AI designed to make AI adaptable, reliable, and knowledge-driven. With the power of graph technology, HybridAGI goes beyond traditional Agent systems by using a graph interpreter that dynamically reads, writes, and executes graph-based data, enabling self-programming and on-the-fly adaptation. Tune in to explore how HybridAGI combines graph and LLM capabilities, allowing users to create secure, memory-centric AI applications with deterministic behaviour and extensive customization through Cypher.
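As a purely illustrative gloss on the graph-as-program idea (this is not HybridAGI's actual schema), a plan could be stored as Cypher nodes that an interpreter walks and executes step by step:

```python
# Invented Cypher: two plan steps linked by a NEXT edge, which a graph
# interpreter could read, execute, and extend at runtime.
plan_cypher = """
CREATE (s1:Step {name: 'retrieve_docs', action: 'search'}),
       (s2:Step {name: 'summarize',    action: 'llm_call'}),
       (s1)-[:NEXT]->(s2)
"""
print(plan_cypher)
```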
Careless speech is a new type of harm created by large language models (LLMs) that poses cumulative, long-term risks to science, education, and the development of shared social truths in democratic societies. LLMs produce responses that are plausible, helpful, and confident but that contain factual inaccuracies, inaccurate summaries, misleading references, and biased information. These subtle mistruths are poised to cause a severe cumulative degradation and homogenisation of knowledge over time.
This talk examines the existence and feasibility of a legal duty for LLM providers to create models that “tell the truth.” LLM providers should be required to mitigate careless speech and better align their models with truth through open, democratic processes. Careless speech is defined and contrasted with the simplified concept of “ground truth” in LLMs and with previously discussed truth-related risks, including hallucinations, misinformation, and disinformation. EU human rights law and liability frameworks contain some truth-related obligations for products and platforms, but they are relatively limited in scope and sectoral reach.
The talk concludes by proposing a pathway to create a legal truth duty applicable to providers of both narrow- and general-purpose LLMs, and discusses “zero-shot translation” as a prompting method to constrain LLMs and better align their outputs with verified, truthful information.
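One possible reading of that prompting method, sketched with invented wording: the model is constrained to rephrase verified source text rather than answer from its own parametric knowledge:

```python
# Hedged illustration of constraining an LLM to "translate" verified
# information; the statement and prompt wording are made up.
verified_source = "The measles vaccine requires two doses for full protection."
prompt = (f"Rewrite the following verified statement for a general audience. "
          f"Do not add any information that is not in the statement.\n"
          f"Statement: {verified_source}")
print(prompt)
```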
Large Language Models excel in answering medical exam questions, but applying them to real-world problems often requires more guidance and fine-tuning for precise results. In this lecture, I'll review existing methods for effectively applying LLMs to medical question answering tasks and demonstrate how to use Azure's Text Analytics for Health to improve their performance and effectively evaluate the results they generate.
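For reference, a minimal call to Text Analytics for Health looks roughly like this; the endpoint and key are placeholders for your own Azure Language resource:

```python
# pip install azure-ai-textanalytics
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)
poller = client.begin_analyze_healthcare_entities(
    ["Patient was prescribed 100mg ibuprofen twice daily."])
for doc in poller.result():
    for entity in doc.entities:
        # Extracted clinical entities can be fed back into an LLM prompt.
        print(entity.text, entity.category)
```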
This talk dives into the evaluation methods of Large Language Models (LLMs). We'll explore the significance of LLM assessment and unveil core capabilities, challenges, and methods used to ensure their effectiveness and responsible development.
The new Microsoft Teams AI library simplifies the integration of Large Language Models (LLMs) into Teams applications, enabling you to build intelligent, conversational apps within your users' flow of work. Learn how to build conversational apps with Teams Toolkit and explore the full range of capabilities of the AI library to help you build AI-powered apps easily and responsibly, with a consistent natural language user experience. Reimagine a new era of intelligent apps in Teams.
Abstract: In the rapidly evolving landscape of AI, Large Language Models (LLMs) have outgrown their initial niche of powering chatbots. Today, we can use generative AI not only to enrich human-machine interactions but also to integrate sophisticated capabilities into business applications. This presentation will explore the nuanced process of customising LLMs for enterprise use, highlighting the importance of prompt engineering, in-context learning (ICL), and equipping the models with a diverse toolkit to ensure their responses are tuned to the application needs and to enable building reliable and responsible generative AI applications.
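A minimal sketch of the tool-equipping idea; the tool name, arguments, and the model's reply are all invented: the model emits a structured tool call, and the application executes it and returns the result for the final answer.

```python
# Tool dispatch: parse the model's structured call, run the tool,
# and hand the result back for answer composition.
import json

tools = {"get_order_status": lambda order_id: {"order_id": order_id,
                                               "status": "shipped"}}

llm_output = '{"tool": "get_order_status", "args": {"order_id": 1042}}'  # stubbed model reply
call = json.loads(llm_output)
result = tools[call["tool"]](**call["args"])
print(result)  # would be fed back to the model to compose its answer
```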
Abstract: Generative AI will open up many new opportunities to improve and evolve businesses and their processes. It will create new ways to interact with business applications, accelerate development, and speed up the integration of smarter, more autonomous AI capabilities. To make this vision a reality, we have all embarked on an extensive learning journey. In this talk, I want to share some of our early learnings from infusing applications with LLMs and give a glimpse of what we are working on.
Large language models (LLMs) have achieved impressive performance in many domains, including code generation and reasoning. However, for challenging tasks, generating the correct solution in a single attempt is often infeasible. In this talk, I will first discuss our work on self-debugging, which instructs LLMs to debug their own predicted programs. In particular, we demonstrate that self-debugging can teach LLMs to perform rubber duck debugging; i.e., without any human feedback on the code correctness or error messages, the model is able to identify its mistakes by investigating the execution results and explaining the generated code in natural language. Self-debugging notably improves both the model performance and sample efficiency, matching or outperforming baselines that generate more than 10× as many candidate programs. In the second part, I will further demonstrate that LLMs can also improve their own prompts to achieve better performance, acting as optimizers.
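A hedged sketch of one self-debugging variant, using execution feedback from a simple test; `call_llm` is a stub and the prompt wording is invented, not the paper's:

```python
# Self-debugging loop: run the model's code, feed execution feedback
# back, and ask it to explain its own program and fix the bug.
import traceback

def call_llm(prompt):
    # Placeholder for a real model call; returns a corrected program here.
    return "def add(a, b):\n    return a + b"

def self_debug(task, code, max_rounds=3):
    for _ in range(max_rounds):
        try:
            env = {}
            exec(code, env)                  # run the candidate program
            assert env["add"](1, 2) == 3     # execution feedback signal
            return code                      # success: keep this version
        except Exception:
            feedback = traceback.format_exc()
            code = call_llm(
                f"Task: {task}\nYour code:\n{code}\n"
                f"Execution feedback:\n{feedback}\n"
                f"Explain the code line by line, find the bug, and return a fix.")
    return code

print(self_debug("write add(a, b)", "def add(a, b):\n    return a - b"))
```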
Explore how Large Language Models (LLMs) are bridging the gap between human thought processes and discrete digital operations. Allen will provide insights on using LLMs for tasks such as SQL queries, creating search embeddings, and generating human-friendly responses.
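An illustrative text-to-SQL exchange; the schema, question, and expected query are all made up for this sketch:

```python
# The LLM bridges the natural-language question and the discrete query.
schema = "CREATE TABLE sales (customer_id INT, sale_date DATE, amount REAL);"
question = "Total amount per customer in 2024, highest first."
prompt = (f"Given this schema:\n{schema}\n"
          f"Write a SQL query for: {question}\nReturn only SQL.")

# An LLM would typically return something like:
expected = """SELECT customer_id, SUM(amount) AS total_amount
FROM sales
WHERE sale_date BETWEEN '2024-01-01' AND '2024-12-31'
GROUP BY customer_id
ORDER BY total_amount DESC;"""
```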
Since the release of ChatGPT late last year, the world has finally embraced vector embeddings, and many organisations (from hedge funds to giant retailers) have been experimenting with vector databases. This is because vector embeddings, a component at the heart of large language models, open up the ability not only to compress information but also to drastically transform search and knowledge retrieval. In this session we will put a spotlight on the embedding revolution that has taken over natural language processing, computer vision, and network science, and explain how enterprises can build better systems to understand, interact with, and sell to their customers.
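At its core, the retrieval step a vector database accelerates is nearest-neighbour search by cosine similarity; here is a toy version with made-up three-dimensional "embeddings" standing in for real model output:

```python
# Toy vector search: pick the stored document whose embedding is
# closest to the query embedding by cosine similarity.
import numpy as np

docs = {
    "refund policy":   np.array([0.9, 0.1, 0.0]),
    "shipping times":  np.array([0.1, 0.8, 0.2]),
    "product catalog": np.array([0.0, 0.2, 0.9]),
}
query = np.array([0.8, 0.2, 0.1])  # pretend embedding of "how do I get my money back?"

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

best = max(docs, key=lambda name: cos(query, docs[name]))
print(best)  # -> refund policy
```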
Scientific discovery hinges on the effective integration of metadata, which refers to a set of 'cognitive' operations such as determining what information is relevant for inquiry, and data, which encompasses physical operations such as observation and experimentation. This talk introduces the Causal Modelling Agent (CMA), a novel framework that synergizes the metadata-based reasoning capabilities of Large Language Models (LLMs) with the data-driven modelling of Deep Structural Causal Models (DSCMs) for the task of causal discovery. We evaluate the CMA's performance on a number of benchmarks, as well as on the real-world task of modelling the clinical and radiological phenotype of Alzheimer's Disease (AD). Our experimental results indicate that the CMA can outperform previous data-driven or metadata-driven approaches to causal discovery. In our real-world application, we use the CMA to derive new insights into the causal relationships among biomarkers of AD.