Local AI Team

Deploy local LLM teammates on Hercules via Ollama. Zero cloud tokens for heavy lifting.

✓ Scout (llama3.2:3b), Engineer (qwen2.5-coder:7b), Analyst (llama3.1:8b) — all running on RTX 3060.
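Getting the roster onto the box is a one-time setup. A minimal sketch, assuming Ollama is already installed on Hercules (the model tags match the list above):

```shell
# One-time download of each teammate's model
ollama pull llama3.2:3b       # Scout
ollama pull qwen2.5-coder:7b  # Engineer
ollama pull llama3.1:8b       # Analyst

# Confirm all three are available locally
ollama list
```

All three fit comfortably in system RAM, and Ollama swaps them in and out of the 3060's VRAM as they're called.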

Build Log

Setting Up the Local Team
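A minimal sketch of how the roster could be wired up in code, assuming Ollama's default local HTTP endpoint at `localhost:11434`; the role names and the `ask` helper are illustrative, not part of Ollama's API:

```python
import json
import urllib.request

# Map each teammate role to its local model (roster from above)
TEAM = {
    "scout": "llama3.2:3b",
    "engineer": "qwen2.5-coder:7b",
    "analyst": "llama3.1:8b",
}

OLLAMA_URL = "http://localhost:11434/api/generate"


def model_for(role: str) -> str:
    """Resolve a teammate role to its Ollama model tag."""
    return TEAM[role]


def ask(role: str, prompt: str) -> str:
    """Send a prompt to the teammate's local model and return its reply.

    Requires a running Ollama server; no cloud tokens involved.
    """
    payload = json.dumps({
        "model": model_for(role),
        "prompt": prompt,
        "stream": False,  # one JSON response instead of a token stream
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

With this in place, `ask("engineer", "Refactor this function...")` routes code work to qwen2.5-coder while `ask("scout", ...)` hits the small, fast model.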