Async on-prem LLM-powered structured information extraction microservice
Dirk Riemann 2efc4d1088
fix(genai): send format="json" (loose mode) to Ollama
Ollama 0.11.8 segfaults on any Pydantic-shaped structured-output schema
with $ref, anyOf, or pattern — confirmed on the deploy host with the
simplest MVP case (BankStatementHeader alone). The earlier null-stripping
sanitiser wasn't enough.

Switch to format="json", which is "emit valid JSON" mode. We're already
describing the exact JSON shape in the system prompt (via GenAIStep +
the use case's citation instruction appendix) and validating the
response body through Pydantic on parse — which raises IX_002_001 on
schema mismatch, exactly as before.

Stronger guarantees can come back later via a newer Ollama, an API
fix, or a different GenAIClient impl. None of that is needed for the
MVP to work end to end.

Unit tests: the sanitiser is left in place (harmless, still tested). The
"happy path" test now asserts format == "json".
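For illustration, the loose-mode request described above can be sketched as follows. The payload fields match Ollama's /api/chat API; the helper function name is hypothetical, not code from this repo:

```python
def build_chat_request(model: str, system_prompt: str, document_text: str) -> dict:
    """Hypothetical helper: assemble an Ollama /api/chat payload in loose mode."""
    return {
        "model": model,
        # format="json" only asks Ollama for syntactically valid JSON output;
        # the exact shape is enforced client-side by Pydantic on parse,
        # which raises IX_002_001 on schema mismatch.
        "format": "json",
        "stream": False,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": document_text},
        ],
    }
```

The key point is that no JSON Schema object is sent at all, so the Ollama 0.11.8 segfault path ($ref / anyOf / pattern handling) is never exercised.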

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 13:59:04 +02:00
| Path | Last commit | Date |
| --- | --- | --- |
| .forgejo/workflows | ci: run on every push (not just main) so feat branches also get CI | 2026-04-18 10:40:44 +02:00 |
| alembic | feat(store): Alembic scaffolding + initial ix_jobs migration (spec §4) | 2026-04-18 11:37:21 +02:00 |
| docs | fix(deploy): switch to network_mode: host — reach postgis + ollama on loopback | 2026-04-18 13:00:02 +02:00 |
| scripts | chore(model): switch default IX_DEFAULT_MODEL to qwen3:14b (already on host) | 2026-04-18 12:20:23 +02:00 |
| src/ix | fix(genai): send format="json" (loose mode) to Ollama | 2026-04-18 13:59:04 +02:00 |
| tests | fix(genai): send format="json" (loose mode) to Ollama | 2026-04-18 13:59:04 +02:00 |
| .env.example | fix(deploy): switch to network_mode: host — reach postgis + ollama on loopback | 2026-04-18 13:00:02 +02:00 |
| .gitignore | feat(docker): Dockerfile (CUDA+python3.12) + compose with GPU reservation | 2026-04-18 12:15:26 +02:00 |
| .python-version | feat(scaffold): project skeleton with uv + pytest + forgejo CI | 2026-04-18 10:36:43 +02:00 |
| AGENTS.md | chore(model): switch default IX_DEFAULT_MODEL to qwen3:14b (already on host) | 2026-04-18 12:20:23 +02:00 |
| alembic.ini | feat(store): Alembic scaffolding + initial ix_jobs migration (spec §4) | 2026-04-18 11:37:21 +02:00 |
| docker-compose.yml | fix(compose): persist Surya + HF caches so rebuilds don't redownload models | 2026-04-18 13:49:09 +02:00 |
| Dockerfile | fix(docker): include README.md in the uv sync COPY so hatchling finds it | 2026-04-18 12:42:29 +02:00 |
| pyproject.toml | fix(deps): pin surya-ocr ^0.17 and drop cu124 index | 2026-04-18 13:21:40 +02:00 |
| README.md | Initial design: on-prem LLM extraction microservice MVP | 2026-04-18 10:23:17 +02:00 |
| uv.lock | fix(deps): pin surya-ocr ^0.17 and drop cu124 index | 2026-04-18 13:21:40 +02:00 |

InfoXtractor (ix)

Async, on-prem, LLM-powered structured information extraction microservice.

Given a document (PDF, image, or text) and a named use case, ix returns a structured JSON result whose shape matches the use-case schema, together with per-field provenance (OCR segment IDs, bounding boxes, cross-OCR agreement flags) that lets the caller decide how much to trust each extracted value.
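To make the provenance idea concrete, a result for a hypothetical bank-statement use case might look like the sketch below. Every field and key name here is illustrative, not the actual response schema:

```python
# Illustrative result shape only; the real schema is defined per use case.
result = {
    "fields": {
        "iban": {
            "value": "DE89 3704 0044 0532 0130 00",
            "provenance": {
                "ocr_segment_ids": ["seg-12"],           # which OCR segments backed it
                "bbox": [72.0, 114.5, 310.2, 128.0],     # location on the page
                "cross_ocr_agreement": True,             # multiple engines agreed
            },
        },
    },
}

# The caller, not ix, decides the trust policy, e.g. only accept fields
# where independent OCR engines agreed on the underlying text.
trusted = [
    name
    for name, f in result["fields"].items()
    if f["provenance"]["cross_ocr_agreement"]
]
```

This is what "the caller decides what to trust" means in practice: ix ships the evidence alongside each value instead of a single opaque answer.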

Status: design phase. Implementation about to start.

Principles

  • On-prem always. LLM = Ollama, OCR = local engines (Surya first). No OpenAI / Anthropic / Azure / AWS / cloud.
  • Grounded extraction, not DB truth. ix returns best-effort fields + provenance; the caller decides what to trust.
  • Transport-agnostic pipeline core. REST and Postgres-queue adapters run in parallel against a single job store.
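A minimal sketch of that last principle, assuming a shared job-store interface that both the REST adapter and the Postgres-queue adapter program against (all class and method names are hypothetical):

```python
import uuid
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class Job:
    use_case: str
    status: str = "queued"
    id: str = field(default_factory=lambda: uuid.uuid4().hex)


class JobStore(Protocol):
    """One store; REST and Postgres-queue adapters both talk to this."""

    def enqueue(self, job: Job) -> str: ...
    def get(self, job_id: str) -> Job: ...


class InMemoryJobStore:
    # Stand-in for the Postgres-backed store; the adapters only ever
    # see the JobStore interface, so the backend is swappable.
    def __init__(self) -> None:
        self._jobs: dict[str, Job] = {}

    def enqueue(self, job: Job) -> str:
        self._jobs[job.id] = job
        return job.id

    def get(self, job_id: str) -> Job:
        return self._jobs[job_id]


store = InMemoryJobStore()
job_id = store.enqueue(Job(use_case="bank_statement_header"))
```

Because the pipeline core only depends on the interface, adding a new transport (say, a message-queue listener) means writing one more adapter, not touching the extraction logic.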