Running models locally on the CPU (and, where available, a GPU) means we can experiment with the latest quantised models on real client data without anything leaving the machine. We can explore text question answering and image analysis, and call these tools from a Python API for rapid PoC experimentation. Doing so quickly exposes the ways LLMs go weird, which might help us avoid the embarrassing mistakes we've seen in some early LLM deployments!
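As a minimal sketch of what that local Python API loop can look like, here is one possible setup using llama-cpp-python with a quantised GGUF model; the model file path, system prompt, and question below are hypothetical placeholders, not anything from a real deployment.

```python
# A minimal local-inference sketch using llama-cpp-python.
# The GGUF path below is a hypothetical placeholder.
from llama_cpp import Llama

# Load a quantised model entirely on this machine; n_gpu_layers=-1
# offloads all layers to the GPU if one is available, 0 keeps it CPU-only.
llm = Llama(
    model_path="models/mistral-7b-instruct-q4_k_m.gguf",  # hypothetical local file
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # set to 0 for CPU-only inference
    verbose=False,
)

# Text question answering over client data that never leaves the machine.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Answer strictly from the provided document."},
        {"role": "user", "content": "Summarise the key risks in this contract: ..."},
    ],
    max_tokens=256,
    temperature=0.0,  # low temperature makes PoC runs easier to compare
)
print(response["choices"][0]["message"]["content"])
```

Because nothing here touches a network API, the same script can be rerun against sensitive documents freely, which is exactly what makes this kind of rapid, private PoC experimentation attractive.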