Async on-prem LLM-powered structured information extraction microservice
Dirk Riemann 5e007b138d Address spec review — auth, timeouts, lifecycle, error codes
- FileRef type added so callers (mammon/Paperless) can pass Authorization
  headers alongside URLs. context.files is now list[str | FileRef].
- Job lifecycle state machine pinned down, including worker-startup sweep
  for rows stuck in 'running' after a crash.
- Explicit IX_002_000 / IX_002_001 codes for Ollama unreachable and
  structured-output schema violations, with per-call timeout
  IX_GENAI_CALL_TIMEOUT_SECONDS distinct from the per-job timeout.
- IX_000_007 code for file-fetch failures; per-file size, connect, and
  read timeouts configurable via env.
- ReliabilityStep: Literal-typed fields and None values explicitly skipped
  from provenance verification (with reason); dates parse both sides
  before ISO comparison.
- /healthz semantics pinned down (CUDA + Surya loaded; Ollama reachable
  AND model available). /metrics window is last 24h.
- (client_id, request_id) is UNIQUE in ix_jobs, matching the idempotency
  claim.
- Deploy-failure workflow uses a forward `git revert` commit, not a
  force-push — aligned with AGENTS.md habits.
- Dockerfile / compose require --gpus all. Pre-deploy requires
  `ollama pull gpt-oss:20b`; /healthz verifies before deploy completes.
- CI clarified: Forgejo Actions runners are GPU-less and LAN-disconnected;
  all inference is stubbed there. Real-Ollama tests behind IX_TEST_OLLAMA=1.
- Fixture redaction stance: synthetic-template PDF committed; real
  redacted fixtures live out-of-repo.
- Deferred list picks up use_case URL/Base64, callback retries,
  multi-container workers. quality_metrics retains reference-spec counters
  plus the two new MVP ones.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 10:28:43 +02:00
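The job-lifecycle state machine and worker-startup sweep mentioned in the commit above could be sketched roughly as follows. This is a minimal illustration, not the pinned-down design: SQLite stands in for the Postgres job store, and the status values (other than 'running'), the `ix_jobs` column names, and the choice to re-queue (rather than fail) swept rows are all assumptions.

```python
import enum
import sqlite3  # stand-in for Postgres in this self-contained sketch


class JobStatus(str, enum.Enum):
    QUEUED = "queued"
    RUNNING = "running"
    SUCCEEDED = "succeeded"
    FAILED = "failed"


# Allowed transitions in the (assumed) state machine; terminal states have none.
TRANSITIONS = {
    JobStatus.QUEUED: {JobStatus.RUNNING},
    JobStatus.RUNNING: {JobStatus.SUCCEEDED, JobStatus.FAILED},
    JobStatus.SUCCEEDED: set(),
    JobStatus.FAILED: set(),
}


def startup_sweep(conn: sqlite3.Connection) -> int:
    """On worker startup, recover rows stuck in 'running' after a crash.

    Re-queueing is an assumption here; the real sweep might fail them instead.
    Returns the number of rows swept.
    """
    cur = conn.execute(
        "UPDATE ix_jobs SET status = ? WHERE status = ?",
        (JobStatus.QUEUED.value, JobStatus.RUNNING.value),
    )
    conn.commit()
    return cur.rowcount
```

Running the sweep once at worker startup, before polling for new jobs, keeps crash recovery out of the hot path.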
docs        Address spec review — auth, timeouts, lifecycle, error codes   2026-04-18 10:28:43 +02:00
.gitignore  Initial design: on-prem LLM extraction microservice MVP        2026-04-18 10:23:17 +02:00
AGENTS.md   Initial design: on-prem LLM extraction microservice MVP        2026-04-18 10:23:17 +02:00
README.md   Initial design: on-prem LLM extraction microservice MVP        2026-04-18 10:23:17 +02:00

InfoXtractor (ix)

Async, on-prem, LLM-powered structured information extraction microservice.

Given a document (PDF, image, or text) and a named use case, ix returns a structured JSON result whose shape matches the use-case schema — together with per-field provenance (OCR segment IDs, bounding boxes, cross-OCR agreement flags) that lets the caller decide how much to trust each extracted value.
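As a purely illustrative sketch of that contract, a result for a hypothetical "invoice" use case might look like the structure below. The key names, field names, and values are assumptions for illustration, not the actual ix response schema.

```python
# Hypothetical result shape — all names here are illustrative assumptions.
result = {
    "use_case": "invoice",
    "fields": {
        "invoice_number": {
            "value": "RE-2024-0815",
            "provenance": {
                "ocr_segment_ids": ["p1_s12"],  # OCR segments the value came from
                "bbox": [112, 640, 298, 668],   # bounding box on the page (px)
                "cross_ocr_agreement": True,    # did independent OCR passes agree?
            },
        },
    },
}

# The caller decides what to trust, e.g. keep only cross-OCR-agreed values:
trusted = {
    name: f["value"]
    for name, f in result["fields"].items()
    if f["provenance"]["cross_ocr_agreement"]
}
```

The point of the shape is that trust policy lives with the caller: ix ships the evidence, the caller picks the threshold.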

Status: design phase. Implementation is about to start.

Principles

  • On-prem always. LLM inference via Ollama; OCR via local engines (Surya first). No OpenAI / Anthropic / Azure / AWS / other cloud services.
  • Grounded extraction, not DB truth. ix returns best-effort fields + provenance; the caller decides what to trust.
  • Transport-agnostic pipeline core. REST and Postgres-queue adapters run in parallel on a single job store.
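Two details from the spec review above — the FileRef type (so callers like mammon/Paperless can pass Authorization headers alongside URLs) and the (client_id, request_id) idempotency backed by the UNIQUE constraint on ix_jobs — might be sketched like this. The function name, return values, and the in-memory dict store are illustrative assumptions; the real service enforces uniqueness in Postgres.

```python
from dataclasses import dataclass, field


@dataclass
class FileRef:
    """A file URL plus headers (e.g. Authorization) the fetcher should send."""
    url: str
    headers: dict = field(default_factory=dict)


def submit_job(store: dict, client_id: str, request_id: str, files: list) -> str:
    """Idempotent submit: (client_id, request_id) maps to at most one job,
    mirroring the UNIQUE constraint on ix_jobs. Names are assumptions."""
    key = (client_id, request_id)
    if key not in store:
        store[key] = {"files": files, "status": "queued"}
        return "created"
    return "duplicate"
```

With this shape, `context.files` can mix plain URL strings and FileRef objects (`list[str | FileRef]`), and a retried request with the same (client_id, request_id) is a no-op rather than a second job.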