Async on-prem LLM-powered structured information extraction microservice
Dirk Riemann 2d22b704b7
feat(ui): queue position, elapsed time, filename, CPU-mode notice
Address user confusion from "pending" jobs with no explanation:

* FileRef.display_name — optional UI-only metadata so the UI can show
  the client-provided upload filename instead of the on-disk UUID. The
  pipeline ignores it for execution; older stored rows stay valid
  because the field is optional.
* jobs_repo.queue_position — returns (ahead, total_active) so the
  fragment can render "Queue position: N ahead" for pending jobs and
  "About to start" when N == 0. Terminal jobs return (0, 0).
* Fragment polish — status / queue / elapsed / result panels; live dot
  via CSS animation; CPU-mode details block when /healthz reports
  ocr_gpu: false. Elapsed time formats as MM:SS (running or finished).
* /healthz gains an additive ocr_gpu key (true/false/null). Existing
  postgres/ollama/ocr gating is unchanged. Surya records CUDA availability
  on warm_up; FakeOCRClient has no attribute and probes to None.
* Persistent header with "Upload a new extraction" link and copy-to-
  clipboard button for the current job id; <title> includes job id.

Tests: +7 unit, +6 integration. Full suite 321 green (272 unit, 49
integration). Lint clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 22:03:30 +02:00

InfoXtractor (ix)

Async, on-prem, LLM-powered structured information extraction microservice.

Given a document (PDF, image, text) and a named use case, ix returns a structured JSON result whose shape matches the use-case schema — together with per-field provenance (OCR segment IDs, bounding boxes, cross-OCR agreement flags) that lets the caller decide how much to trust each extracted value.

Status: MVP deployed. Live on the home LAN at http://192.168.68.42:8994 (REST API + browser UI at /ui).

Web UI

A minimal browser UI lives at http://192.168.68.42:8994/ui: drop a PDF, pick a registered use case or define one inline, submit, see the pretty-printed result. HTMX polls the job status every 2 s until the pipeline finishes. LAN-only, no auth.

Principles

  • On-prem always. LLM = Ollama, OCR = local engines (Surya first). No OpenAI / Anthropic / Azure / AWS / cloud.
  • Grounded extraction, not DB truth. ix returns best-effort fields + provenance; the caller decides what to trust.
  • Transport-agnostic pipeline core. REST + Postgres-queue adapters in parallel on one job store.

Submitting a job

curl -X POST http://192.168.68.42:8994/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "use_case": "bank_statement_header",
    "ix_client_id": "mammon",
    "request_id": "some-correlation-id",
    "context": {
      "files": [{
        "url": "http://paperless.local/api/documents/42/download/",
        "headers": {"Authorization": "Token …"}
      }],
      "texts": ["<Paperless Tesseract OCR content>"]
    }
  }'
# → {"job_id":"…","ix_id":"…","status":"pending"}

Poll GET /jobs/{job_id} until status is done or error. Optionally pass callback_url to receive a webhook on completion (one-shot, no retry; polling stays authoritative).
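The poll loop can be sketched in a few lines of stdlib Python. Only the endpoint path and the terminal statuses (done, error) come from this README; wait_for_job, BASE, and the interval/timeout defaults are illustrative, not part of any ix client library.

```python
# Minimal polling sketch against the REST surface shown above.
# Only the endpoint shape and terminal statuses come from the README;
# function names and defaults here are illustrative.
import json
import time
import urllib.request

BASE = "http://192.168.68.42:8994"  # LAN address from the README
TERMINAL = {"done", "error"}


def is_terminal(status: str) -> bool:
    """True once a job has reached a final state."""
    return status in TERMINAL


def wait_for_job(job_id: str, interval: float = 2.0, timeout: float = 300.0) -> dict:
    """Poll GET /jobs/{job_id} until the job reaches a terminal status."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with urllib.request.urlopen(f"{BASE}/jobs/{job_id}") as resp:
            job = json.load(resp)
        if is_terminal(job["status"]):
            return job
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} not finished after {timeout}s")
```

A 2 s interval matches what the browser UI does via HTMX; a webhook via callback_url can shortcut the wait, but per the note above polling stays authoritative.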

Ad-hoc use cases

For one-offs where a registered use case doesn't exist yet, ship the schema inline:

{
  "use_case": "adhoc-invoice",        // free-form label (logs/metrics only)
  "use_case_inline": {
    "use_case_name": "Invoice totals",
    "system_prompt": "Extract vendor and total amount.",
    "fields": [
      {"name": "vendor", "type": "str", "required": true},
      {"name": "total",  "type": "decimal"},
      {"name": "currency", "type": "str", "choices": ["USD", "EUR", "CHF"]}
    ]
  },
  // ...ix_client_id, request_id, context...
}

When use_case_inline is set, the pipeline builds the response schema on the fly and skips the registry. Supported types: str, int, float, decimal, date, datetime, bool. choices is only allowed on str fields. Precedence: inline wins over use_case when both are present.
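The type list and the "choices only on str" rule above are easy to check client-side before submitting. A minimal sketch — the function name and return shape are hypothetical; only the seven type names and the choices constraint come from this README:

```python
# Client-side sanity check for an inline use case, based on the rules above.
# FIELD_TYPES mirrors the documented type names; validate_inline_fields is
# an illustrative helper, not part of the ix API.
from datetime import date, datetime
from decimal import Decimal

FIELD_TYPES = {
    "str": str, "int": int, "float": float, "decimal": Decimal,
    "date": date, "datetime": datetime, "bool": bool,
}


def validate_inline_fields(fields: list[dict]) -> list[str]:
    """Return a list of problems; an empty list means the specs look valid."""
    problems = []
    for f in fields:
        if f.get("type") not in FIELD_TYPES:
            problems.append(f"{f.get('name')}: unknown type {f.get('type')!r}")
        if "choices" in f and f.get("type") != "str":
            problems.append(f"{f.get('name')}: choices only allowed on str fields")
    return problems
```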

Full REST surface + provenance response shape documented in the MVP design spec.

Running locally

uv sync --extra dev
uv run pytest tests/unit -v                    # hermetic unit suite
IX_TEST_OLLAMA=1 uv run pytest tests/live -v    # needs LAN access to Ollama + GPU

UI queue + progress UX

The /ui job page polls GET /ui/jobs/{id}/fragment every 2 s and surfaces:

  • Queue position while pending: "Queue position: N ahead — M jobs total in flight (single worker)" so it's obvious a new submission is waiting on an earlier job rather than stuck. "About to start" when the worker has just freed up.
  • Elapsed time while running ("Running for MM:SS") and on finish ("Finished in MM:SS").
  • Original filename — the UI stashes the client-provided upload name in FileRef.display_name so the browser shows your_statement.pdf instead of the on-disk UUID.
  • CPU-mode notice when /healthz reports ocr_gpu: false (the Surya OCR client observed torch.cuda.is_available() == False): a collapsed <details> pointing at the deployment runbook.
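The two bits of fragment logic above are simple enough to sketch. The signatures below are illustrative, not the actual jobs_repo API — only the (ahead, total_active) contract and the MM:SS format come from this README:

```python
# Sketch of the queue-position math and elapsed-time formatting described
# above, for a single-worker FIFO queue. Signatures are hypothetical; the
# real jobs_repo.queue_position reads from Postgres.
def queue_position(pending_created_at: list[float], my_created_at: float,
                   running: int) -> tuple[int, int]:
    """(jobs ahead of mine, total active jobs) for a pending job."""
    ahead = sum(1 for t in pending_created_at if t < my_created_at) + running
    total_active = len(pending_created_at) + running
    return ahead, total_active


def fmt_elapsed(seconds: float) -> str:
    """Format an elapsed duration as MM:SS, as the job page does."""
    m, s = divmod(int(seconds), 60)
    return f"{m:02d}:{s:02d}"
```

With an empty queue and no running job this yields (0, 0), matching the "About to start" / terminal-job behaviour described above.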

Deploying

git push server main      # rebuilds Docker image, restarts container, /healthz deploy gate
python scripts/e2e_smoke.py   # E2E acceptance against the live service

See docs/deployment.md for full runbook + rollback.