You have a small, mostly static corpus (e.g., a few hundred to a few thousand chunks). You want zero‑infrastructure local retrieval with fast, predictable latency. You’re assembling “infinite few‑shot ...
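For a corpus that small, "zero-infrastructure" can literally mean plain in-memory TF-IDF with cosine similarity, no vector database at all. The sketch below is illustrative, not a prescribed implementation; the sample corpus and the `retrieve` helper are made up for the example:

```python
# Minimal in-memory retrieval over a small, mostly static corpus:
# TF-IDF vectors computed once up front, cosine similarity at query time.
import math
from collections import Counter

corpus = [
    "reset a forgotten password from the login screen",
    "export your data as a csv file",
    "change the notification settings for email alerts",
]

def tokenize(text):
    return text.lower().split()

# Precompute document frequencies once; cheap because the corpus is static.
doc_tokens = [tokenize(d) for d in corpus]
df = Counter(t for toks in doc_tokens for t in set(toks))
N = len(corpus)

def tfidf(tokens):
    tf = Counter(tokens)
    return {t: (c / len(tokens)) * math.log((N + 1) / (df.get(t, 0) + 1))
            for t, c in tf.items()}

doc_vecs = [tfidf(toks) for toks in doc_tokens]

def cosine(a, b):
    dot = sum(v * b.get(t, 0.0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Return the top-k chunks for a query, scored by cosine similarity."""
    qv = tfidf(tokenize(query))
    ranked = sorted(range(N), key=lambda i: cosine(qv, doc_vecs[i]), reverse=True)
    return [corpus[i] for i in ranked[:k]]
```

At a few thousand chunks this brute-force scan stays well under a millisecond per query, which is what makes the latency predictable: there is no index to warm, no service to call, and retrieved chunks can be dropped straight into the prompt as few-shot context.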
The LLM race stopped being a close contest pretty quickly.