The home server's Ollama doesn't have gpt-oss:20b pulled; qwen3:14b is
already there and is what mammon's chat agent uses. Switching the default
now so the first deploy passes the /healthz ollama probe without an extra
`ollama pull` step. The spec lists gpt-oss:20b only as a concrete
example; qwen3:14b is equally on-prem and works with Ollama structured
output.
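For context, a hedged sketch of what an Ollama structured-output request
with the new default looks like — the endpoint shape follows Ollama's
/api/chat `format` parameter, but the schema and prompt here are
illustrative, not taken from the repo:

```python
# Illustrative /api/chat request body; only the model tag comes from this
# change — the schema and message are made-up examples.
request = {
    "model": "qwen3:14b",
    "messages": [{"role": "user", "content": "Extract the statement header."}],
    # Ollama accepts a JSON schema here to constrain decoding.
    "format": {
        "type": "object",
        "properties": {"bank_name": {"type": "string"}},
        "required": ["bank_name"],
    },
    "stream": False,
}
```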
Touched: AppConfig default, BankStatementHeader Request.default_model,
.env.example, setup_server.sh ollama-list check, AGENTS.md, deployment.md,
live tests. Unit tests that hard-code the old model string but don't
assert the default were left alone.
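The setup_server.sh ollama-list check amounts to grepping the NAME column
of `ollama list` for the model tag; a hedged sketch (the real script's
wording and column handling may differ):

```shell
# Return 0 iff the model tag appears in `ollama list` output on stdin.
check_model() {
  # Skip the header row, match the NAME column exactly.
  awk 'NR > 1 { print $1 }' | grep -qx "$1"
}

# Demo against canned output; the real check pipes `ollama list` instead.
printf 'NAME          ID      SIZE\nqwen3:14b     abc123  9.0 GB\n' |
  check_model "qwen3:14b" && echo "qwen3:14b is pulled"
```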
Also: replaced the Unicode en-dash in e2e_smoke.py's Paperless-style
text with an ASCII hyphen (ruff RUF001).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- scripts/setup_server.sh: idempotent one-shot. Creates the bare repo
  and post-receive hook (which rebuilds docker compose and gates on
  /healthz), creates the infoxtractor Postgres role + DB on the shared
  postgis container, writes .env (mode 0600) from .env.example with the
  password substituted in, and verifies gpt-oss:20b is pulled.
- docs/deployment.md: topology, one-time setup command, normal deploy
workflow, rollback-via-revert pattern (never force-push main),
operational checklists for the common /healthz degraded states.
- First deploy section reserved; filled in after Task 5.3 runs.
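The hook's /healthz gating described above can be sketched as a retry
loop that runs after `docker compose up -d --build`; the probe URL,
port, and retry budget below are assumptions, not taken from the repo:

```shell
# Hypothetical gate: retry a probe command until it succeeds or the
# attempt budget is exhausted, failing the push on timeout.
gate_healthz() {
  # $1: probe command, $2: max attempts (default 30), 2s between tries.
  probe=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if $probe; then
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "deploy failed the /healthz gate" >&2
  return 1
}

# In the post-receive hook, something like:
#   gate_healthz "curl -fsS http://localhost:8000/healthz" 30 || exit 1
```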
Task 5.2 of the MVP plan.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>