We've all got used to using LLMs in our developer workflow, from asking ChatGPT which tools and libraries to use to getting GitHub Copilot to generate code for us. That's great when you're online, but not so useful when you're offline, like on the Tube or on a plane with no Wi-Fi. But what if there were another way? In this session, Jim will introduce offline LLMs. We'll look at how you can run LLMs locally, such as Phi-2 from Microsoft, and add them to your developer workflow. We'll compare the performance of offline and online models, looking at both speed and quality, and also touch on privacy and other considerations. We'll also look at hardware requirements, as we don't all have the latest GPUs to hand.
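The abstract doesn't prescribe specific tooling, but as one illustration of the idea, here is a minimal sketch of running Phi-2 locally with the Hugging Face Transformers library. The model ID microsoft/phi-2 is the real published checkpoint; the prompt and generation settings are placeholder choices. The weights download once and are then served from the local cache, so subsequent runs work fully offline.

```python
# Minimal sketch: running Phi-2 locally with Hugging Face Transformers.
# Requires: pip install torch transformers
# The weights (~5 GB) download on the first run, then load from the
# local cache, so later runs need no network connection.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    # Half precision on GPU keeps the 2.7B-parameter model within ~6 GB
    # of VRAM; fall back to full precision on CPU.
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

prompt = "Write a Python function that reverses a string."  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

On a machine without a recent GPU, the same script runs on CPU, just more slowly; quantised builds (e.g. GGUF via llama.cpp or Ollama) are a common alternative for lower-end hardware, which is the kind of trade-off the session covers.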
Topic: phi-2 (1 tagged)