Shu Zhao discusses a personal AI-powered assistant running entirely on edge hardware. She demonstrates a tool that reduces screen time by taking notes and handling messages, routing the processed information to Notion or GitHub. The tech stack includes Redpanda for message streaming and storage, Ollama running Llama 3.1 8B for local reasoning, and a Jetson Nano as the compact hardware powering it.
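The routing step described above can be sketched as follows. This is a minimal, hypothetical illustration: the `classify` stub stands in for a call to the Llama 3.1 model served by Ollama, and messages are read from a plain Python list rather than a Redpanda topic (which would be consumed through Redpanda's Kafka-compatible API). None of these names come from the talk itself.

```python
def classify(text: str) -> str:
    """Stub for the local LLM call: pick a destination by keyword.
    In the real pipeline this decision would come from the model
    running under Ollama on the Jetson Nano."""
    text = text.lower()
    return "github" if "bug" in text or "pull request" in text else "notion"

def route(messages: list[str]) -> dict[str, list[str]]:
    """Sort incoming messages into per-destination buckets."""
    routed: dict[str, list[str]] = {"notion": [], "github": []}
    for msg in messages:
        routed[classify(msg)].append(msg)
    return routed

# Stand-in for messages consumed from a Redpanda topic.
inbox = [
    "Meeting notes: plan the Q3 roadmap",
    "Bug: crash when saving a note",
]
print(route(inbox))
```

Each bucket would then be flushed to the corresponding destination API (Notion or GitHub), keeping the LLM call and the delivery step decoupled.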
In this tutorial, we walk through every step of building your own local RAG application using Milvus, LangChain, Ollama, and Llama 3.x. By the end of the talk you will have a working RAG application, and we will also cover tips and techniques for upgrading to more advanced RAG apps.
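The core retrieve-then-generate loop of such an app can be sketched in miniature. The example below is a self-contained toy, not the tutorial's implementation: a bag-of-words similarity stands in for the vector embeddings and Milvus search, and the final generation call to Llama 3.x via Ollama is only indicated by a comment. The documents and query are invented for illustration.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-count vector. A real RAG app
    # would use an embedding model and store vectors in Milvus.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Stand-in for a Milvus collection of embedded document chunks.
docs = [
    "Milvus is a vector database for similarity search.",
    "Ollama runs large language models locally.",
    "LangChain chains retrievers and LLMs together.",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

context = retrieve("vector database for search")
prompt = "Answer using only this context:\n" + "\n".join(context)
# In the full app, `prompt` would be sent to Llama 3.x via Ollama,
# typically through a LangChain chain, to generate the final answer.
print(context[0])
```

Swapping the toy pieces for real ones preserves this shape: embeddings and Milvus replace `embed`/`index`, and an Ollama-served model consumes `prompt`.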