# InfoXtractor (ix)

Async, on-prem, LLM-powered structured information extraction microservice.

Given a document (PDF, image, text) and a named use case, ix returns a structured JSON result whose shape matches the use-case schema — together with per-field provenance (OCR segment IDs, bounding boxes, cross-OCR agreement flags) that lets the caller decide how much to trust each extracted value.

**Status:** MVP deployed. Live on the home LAN at http://192.168.68.42:8994 (REST API + browser UI at `/ui`).
## Web UI

A minimal browser UI lives at http://192.168.68.42:8994/ui: drop a PDF, pick a registered use case or define one inline, submit, and see the pretty-printed result. HTMX polls the job status every 2 s until the pipeline finishes. LAN-only, no auth.

Past submissions are browsable at `/ui/jobs` — a paginated list (newest first) with `status` + `client_id` filters. Each row links to `/ui/jobs/{job_id}` for the full request/response view.
## Docs

- Full reference spec: `docs/spec-core-pipeline.md` (aspirational; the MVP is a strict subset)
- MVP design: `docs/superpowers/specs/2026-04-18-ix-mvp-design.md`
- Implementation plan: `docs/superpowers/plans/2026-04-18-ix-mvp-implementation.md`
- Deployment runbook: `docs/deployment.md`
- Agent / development notes: `AGENTS.md`
## Principles
- On-prem always. LLM = Ollama, OCR = local engines (Surya first). No OpenAI / Anthropic / Azure / AWS / cloud.
- Grounded extraction, not DB truth. ix returns best-effort fields + provenance; the caller decides what to trust.
- Transport-agnostic pipeline core. REST + Postgres-queue adapters in parallel on one job store.
## Submitting a job

```bash
curl -X POST http://192.168.68.42:8994/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "use_case": "bank_statement_header",
    "ix_client_id": "mammon",
    "request_id": "some-correlation-id",
    "context": {
      "files": [{
        "url": "http://paperless.local/api/documents/42/download/",
        "headers": {"Authorization": "Token …"}
      }],
      "texts": ["<Paperless Tesseract OCR content>"]
    }
  }'
# → {"job_id":"…","ix_id":"…","status":"pending"}
```
Poll `GET /jobs/{job_id}` until `status` is `done` or `error`. Optionally pass `callback_url` to receive a webhook on completion (one-shot, no retry; polling stays authoritative).
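A minimal polling sketch in Python, assuming the `requests` package; the endpoint shapes and the 2 s cadence come from the examples above, and `submit_and_wait` is an illustrative helper, not part of ix:

```python
import time

import requests

BASE = "http://192.168.68.42:8994"

def submit_and_wait(payload: dict, interval: float = 2.0, timeout: float = 300.0) -> dict:
    """Submit a job, then poll GET /jobs/{job_id} until a terminal state."""
    job = requests.post(f"{BASE}/jobs", json=payload, timeout=30).json()
    job_id = job["job_id"]          # response: {"job_id": ..., "ix_id": ..., "status": "pending"}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = requests.get(f"{BASE}/jobs/{job_id}", timeout=30).json()
        if status["status"] in ("done", "error"):
            return status
        time.sleep(interval)        # the web UI polls at the same 2 s cadence
    raise TimeoutError(f"job {job_id} not terminal after {timeout}s")
```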
## Ad-hoc use cases
For one-offs where a registered use case doesn't exist yet, ship the schema inline:
```jsonc
{
  "use_case": "adhoc-invoice",   // free-form label (logs/metrics only)
  "use_case_inline": {
    "use_case_name": "Invoice totals",
    "system_prompt": "Extract vendor and total amount.",
    "fields": [
      {"name": "vendor", "type": "str", "required": true},
      {"name": "total", "type": "decimal"},
      {"name": "currency", "type": "str", "choices": ["USD", "EUR", "CHF"]}
    ]
  },
  // ...ix_client_id, request_id, context...
}
```
When `use_case_inline` is set, the pipeline builds the response schema on the fly and skips the registry. Supported types: `str`, `int`, `float`, `decimal`, `date`, `datetime`, `bool`. `choices` is only allowed on `str` fields. Precedence: the inline definition wins over `use_case` when both are present.
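To make the on-the-fly schema concrete, here is a sketch of how such a field list could be turned into a response model. This is an illustration only, assuming a Pydantic-style implementation; `build_model` and `TYPE_MAP` are hypothetical names, not the actual ix internals:

```python
from datetime import date, datetime
from decimal import Decimal
from typing import Literal, Optional

from pydantic import create_model

# Hypothetical mapping from ix type names to Python types.
TYPE_MAP = {
    "str": str, "int": int, "float": float, "decimal": Decimal,
    "date": date, "datetime": datetime, "bool": bool,
}

def build_model(name: str, fields: list[dict]):
    defs = {}
    for f in fields:
        py_type = TYPE_MAP[f["type"]]
        if f.get("choices"):                        # "choices" is only valid on str fields
            py_type = Literal[tuple(f["choices"])]
        if f.get("required"):
            defs[f["name"]] = (py_type, ...)        # required, no default
        else:
            defs[f["name"]] = (Optional[py_type], None)
    return create_model(name, **defs)

Invoice = build_model("InvoiceTotals", [
    {"name": "vendor", "type": "str", "required": True},
    {"name": "total", "type": "decimal"},
    {"name": "currency", "type": "str", "choices": ["USD", "EUR", "CHF"]},
])
```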
The full REST surface and the provenance response shape are documented in the MVP design spec.
## Running locally

```bash
uv sync --extra dev
uv run pytest tests/unit -v                   # hermetic unit + integration suite
IX_TEST_OLLAMA=1 uv run pytest tests/live -v  # needs LAN access to Ollama + GPU
```
## UI jobs list

`GET /ui/jobs` renders a paginated, newest-first table of submitted jobs. Query params:

- `status=pending|running|done|error` — repeat the param for multi-select.
- `client_id=<str>` — exact match (e.g. `ui`, `mammon`).
- `limit=<n>` (default 50, max 200) + `offset=<n>` for paging.
Each row shows a status badge, the original filename (`FileRef.display_name` or the URL basename), use case, client id, submitted time (absolute + relative), and elapsed wall-clock time (terminal rows only); it links to `/ui/jobs/{job_id}` for the full response JSON.
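For example, fetching a filtered page with the params above (a sketch using `requests`; the filter values are illustrative, and note that `/ui/jobs` returns rendered HTML rather than JSON):

```python
import requests

resp = requests.get(
    "http://192.168.68.42:8994/ui/jobs",
    params=[                        # repeated "status" entries for multi-select
        ("status", "done"),
        ("status", "error"),
        ("client_id", "mammon"),    # exact match
        ("limit", "20"),
        ("offset", "0"),
    ],
    timeout=30,
)
resp.raise_for_status()             # the body is the rendered HTML table
```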
## UI queue + progress UX

The `/ui` job page polls `GET /ui/jobs/{id}/fragment` every 2 s and surfaces:

- Queue position while pending: "Queue position: N ahead — M jobs total in flight (single worker)", so it's obvious a new submission is waiting on an earlier job rather than stuck. "About to start" when the worker has just freed up.
- Elapsed time while running ("Running for MM:SS") and on finish ("Finished in MM:SS").
- Original filename — the UI stashes the client-provided upload name in `FileRef.display_name`, so the browser shows `your_statement.pdf` instead of the on-disk UUID.
- CPU-mode notice when `/healthz` reports `ocr_gpu: false` (the Surya OCR client observed `torch.cuda.is_available() == False`): a collapsed `<details>` element pointing at the deployment runbook.
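A quick way to check for CPU mode from a script, assuming `/healthz` returns a JSON body containing the `ocr_gpu` flag described above:

```python
import requests

health = requests.get("http://192.168.68.42:8994/healthz", timeout=10).json()
if not health.get("ocr_gpu", False):
    # Surya OCR is on CPU (torch.cuda.is_available() == False);
    # expect much slower jobs, see docs/deployment.md.
    print("warning: ix is running OCR in CPU mode")
```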
## Deploying

```bash
git push server main          # rebuilds Docker image, restarts container, /healthz deploy gate
python scripts/e2e_smoke.py   # E2E acceptance against the live service
```
See `docs/deployment.md` for the full runbook + rollback steps.