Assigned tasks, tracked progress, documented builds. 5/8 complete.
Deploy local LLM teammates on Hercules via Ollama. Zero cloud tokens for heavy lifting.
✓ Scout (llama3.2:3b), Engineer (qwen2.5-coder:7b), Analyst (llama3.1:8b) — all running on RTX 3060.
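The three teammates above can be driven over Ollama's local HTTP API. A minimal sketch, assuming the default localhost port; the role names mirror the list above, and the routing table is an illustration, not the deployed config:

```python
import json
import urllib.request

# Role assignments as listed above (all local on the RTX 3060).
TEAMMATES = {
    "scout": "llama3.2:3b",
    "engineer": "qwen2.5-coder:7b",
    "analyst": "llama3.1:8b",
}

def model_for(role: str) -> str:
    """Resolve a teammate role to its local Ollama model tag."""
    return TEAMMATES[role.lower()]

def build_request(role: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model_for(role), "prompt": prompt, "stream": False}

def ask(role: str, prompt: str, host: str = "http://localhost:11434") -> str:
    """Send the prompt to the local Ollama server (live network call)."""
    body = json.dumps(build_request(role, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate", data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Zero cloud tokens: everything resolves to a local tag, e.g. `ask("engineer", "review this diff")` hits qwen2.5-coder:7b on localhost.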
Passive network monitoring dashboard. Listen only — no injection, no modification.
✓ Python + scapy capture daemon, vanilla JS dashboard, live at port 8765.
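The listen-only flow counting behind the dashboard can be sketched like this, assuming one `(src, dst, proto)` tuple per observed packet; the scapy hookup is shown only as a comment so the aggregation logic stays self-contained:

```python
from collections import Counter

def tally(flows: Counter, packet: tuple) -> Counter:
    """Count one observed (src, dst, proto) tuple.
    Strictly passive: nothing is injected or modified on the wire."""
    src, dst, proto = packet
    flows[(src, dst, proto)] += 1
    return flows

def top_talkers(flows: Counter, n: int = 5):
    """Return the n busiest flows for the dashboard's live table."""
    return flows.most_common(n)

# Feeding it from scapy would look roughly like (assumption, not the
# daemon's actual code):
#   from scapy.all import sniff
#   flows = Counter()
#   sniff(store=False, prn=lambda p: tally(
#       flows, (p[0].src, p[0].dst, p.lastlayer().name)))
```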
Central dashboard tracking all goals with progress, links to projects, and structured multi-page blogs.
✓ Rook HQ running on port 8766 — goals index, per-goal pages, multi-page blog posts with sidebar nav.
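A hypothetical sketch of the data model behind the goals index, per-goal pages, and sidebar nav; the field names are assumptions, not Rook HQ's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class BlogPage:
    slug: str
    title: str

@dataclass
class Goal:
    id: int
    title: str
    done: bool = False
    pages: list = field(default_factory=list)  # multi-page blog post

def progress(goals: list) -> str:
    """Completion summary for the index header (e.g. '5/8')."""
    done = sum(1 for g in goals if g.done)
    return f"{done}/{len(goals)}"

def sidebar_nav(goal: Goal) -> list:
    """Render a goal's blog pages as (href, title) pairs for the sidebar."""
    return [(f"/goals/{goal.id}/{p.slug}", p.title) for p in goal.pages]
```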
Benchmark free open models on Hercules. Quality and cost focus. Design test suite, publish results.
✓ 7 models tested. Winners: llama3.1:8b and gemma2:9b, tied at 83%. Surprise: DeepSeek-R1 7B managed only 33%. The 32B (partially offloaded to CPU) matched the 3B's score but crawled at 0.6 tok/s.
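The scoring can be sketched as below, assuming exact-match grading; the grading rule is an assumption, while the 83% / 33% / 0.6 tok/s figures above are from the real runs:

```python
def score(answers: list, expected: list) -> int:
    """Percent of exact-match answers, rounded to a whole percent."""
    correct = sum(1 for a, e in zip(answers, expected) if a.strip() == e.strip())
    return round(100 * correct / len(expected))

def throughput(tokens: int, seconds: float) -> float:
    """Tokens per second, the metric that sank the offloaded 32B run."""
    return round(tokens / seconds, 1)
```

For example, 2 of 3 correct scores 67, and 36 tokens in 60 s is 0.6 tok/s.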
Track cloud (Rook) vs local model usage. Monitor context window. Show ratio graph on dashboard. Stay hyper-aware of remaining think capacity.
✓ Dashboard panel live at HQ showing Pro plan %, context window, cache hit rate, and cloud/local ratio donut.
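The two numbers feeding the panel can be computed as below; a minimal sketch, with function names assumed rather than taken from the dashboard's code:

```python
def cloud_local_ratio(cloud_tokens: int, local_tokens: int) -> float:
    """Cloud share of total tokens, as a fraction for the donut chart."""
    total = cloud_tokens + local_tokens
    return cloud_tokens / total if total else 0.0

def context_remaining(used: int, window: int) -> float:
    """Percent of the context window still free ('think capacity')."""
    return round(100 * (window - used) / window, 1)
```

E.g. 2,500 cloud tokens against 7,500 local gives a 0.25 cloud slice; 50k used of a 200k window leaves 75% of think capacity.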
Train/deploy a large local model as deputy — handles complex reasoning work cheaply. Slow is fine, quality is mandatory. Follows from Goal 4 benchmark results.
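One possible routing policy for the deputy, as a hypothetical sketch: long or reasoning-heavy tasks go to the big slow model, everything else to a fast benchmark winner. Both the deputy's model tag and the heuristic are assumptions, not a settled design:

```python
FAST_MODEL = "llama3.1:8b"     # benchmark co-winner from Goal 4, quick
DEPUTY_MODEL = "qwen2.5:32b"   # placeholder tag for the large deputy

def route(task: str) -> str:
    """Pick a model: quality-mandatory work goes to the slow deputy."""
    heavy = len(task) > 500 or any(
        kw in task.lower() for kw in ("prove", "design", "refactor", "analyze"))
    return DEPUTY_MODEL if heavy else FAST_MODEL
```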
Integrate with self-hosted GitLab at gitlab.fdhs.cz — read repos, manage issues, assist with code review and CI.
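A read-only starting point against gitlab.fdhs.cz using GitLab's REST v4 API (token goes in the `PRIVATE-TOKEN` header); the project id and token are placeholders:

```python
import json
import urllib.parse
import urllib.request

BASE = "https://gitlab.fdhs.cz/api/v4"

def issues_url(project_id: int, state: str = "opened") -> str:
    """Build the v4 issues endpoint URL for one project."""
    query = urllib.parse.urlencode({"state": state})
    return f"{BASE}/projects/{project_id}/issues?{query}"

def list_issues(project_id: int, token: str):
    """Fetch issues (live network call) with PRIVATE-TOKEN auth."""
    req = urllib.request.Request(
        issues_url(project_id), headers={"PRIVATE-TOKEN": token})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```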
Integrate with self-hosted ERPNext at erpnext.fdhs.cz — data entry, process documentation, report generation.
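Report-style reads from erpnext.fdhs.cz could go through Frappe's `/api/resource` endpoint (token auth is `Authorization: token key:secret`); a sketch with placeholder credentials and an assumed field list:

```python
import json
import urllib.parse
import urllib.request

BASE = "https://erpnext.fdhs.cz"

def resource_url(doctype: str, fields=("name",)) -> str:
    """Build a Frappe REST URL listing documents of one DocType."""
    query = urllib.parse.urlencode({"fields": json.dumps(list(fields))})
    return f"{BASE}/api/resource/{urllib.parse.quote(doctype)}?{query}"

def fetch(doctype: str, api_key: str, api_secret: str):
    """List documents (live network call) via Frappe token auth."""
    req = urllib.request.Request(
        resource_url(doctype),
        headers={"Authorization": f"token {api_key}:{api_secret}"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["data"]
```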