Keeping AI Honest
Description
While there has been much excitement about the potential of large language models (LLMs) to automate tasks that previously required human intelligence or creativity, many early projects have failed because of LLMs’ innate willingness to lie. This presentation explores these “hallucination” issues and proposes a solution.
Combining generative AI with more traditional symbolic computation makes it possible to maintain reliability, improve explainability, and inject private knowledge and data. This talk will show simple examples of combining language-based thinking with computational thinking to generate solutions that neither could achieve on its own.
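One way to picture that pattern is sketched below. This is a minimal illustration, not the presenter's implementation: the model translates a question into a formal expression, and a deterministic symbolic layer evaluates it, so the final number can never be hallucinated. The ask_llm helper is a hypothetical stand-in for whatever model API you use; the evaluator relies only on Python's standard library.

```python
import ast
import operator

# Whitelisted operators: the symbolic layer only computes what it understands.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def evaluate(expression: str) -> float:
    """Deterministically evaluate an arithmetic expression, or raise an error."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("Unsupported construct in expression")
    return walk(ast.parse(expression, mode="eval"))

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your model API of choice."""
    raise NotImplementedError

def answer(question: str) -> float:
    # The model handles the language; the evaluator handles the computation.
    expression = ask_llm(
        "Rewrite the following question as a single arithmetic expression, "
        "with no commentary:\n" + question
    )
    return evaluate(expression)  # fails loudly instead of answering falsely
```

The design choice to note: the language model never produces the final answer directly, only a formal artifact that a conventional, checkable program consumes.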
The talk will close with an example application, an AI scientific research assistant, that brings these ideas together in a highly demanding real-world task where false information is not acceptable. This is a fast-evolving space with enormous potential, and we're just getting started.