# InfoXtractor (ix)
Async, on-prem, LLM-powered structured information extraction microservice.
Given a document (PDF, image, text) and a named use case, ix returns a structured JSON result whose shape matches the use-case schema — together with per-field provenance (OCR segment IDs, bounding boxes, cross-OCR agreement flags) that lets the caller decide how much to trust each extracted value.
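As a rough illustration, a result for a hypothetical invoice use case might carry provenance alongside each field. The field names, segment-ID format, and exact provenance shape below are illustrative sketches, not the final API:

```python
# Illustrative shape of an ix extraction result (hypothetical field names).
# Each field pairs its value with provenance: which OCR segment IDs support
# it, where those segments sit on the page (bbox normalised to page size),
# and whether multiple OCR passes agreed on the underlying text.
result = {
    "use_case": "invoice",
    "fields": {
        "invoice_number": {
            "value": "INV-2026-0042",
            "provenance": {
                "segment_ids": ["p1_l0"],              # OCR line IDs cited by the LLM
                "bboxes": [[0.62, 0.08, 0.91, 0.11]],  # x/y divided by page width/height
                "cross_ocr_agreement": True,
            },
        },
    },
}

# The caller, not ix, decides the trust policy — e.g. only keep fields
# where the OCR passes agreed:
trusted = {
    name: field["value"]
    for name, field in result["fields"].items()
    if field["provenance"]["cross_ocr_agreement"]
}
print(trusted)  # {'invoice_number': 'INV-2026-0042'}
```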
Status: design phase. Implementation about to start.
- Full reference spec: `docs/spec-core-pipeline.md` (aspirational; the MVP is a strict subset)
- MVP design: `docs/superpowers/specs/2026-04-18-ix-mvp-design.md`
- Agent / development notes: `AGENTS.md`
## Principles
- On-prem always. LLM = Ollama, OCR = local engines (Surya first). No OpenAI / Anthropic / Azure / AWS / cloud.
- Grounded extraction, not DB truth. ix returns best-effort fields + provenance; the caller decides what to trust.
- Transport-agnostic pipeline core. REST and Postgres-queue adapters run in parallel against a single job store.
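The transport-agnostic split can be sketched as follows. All names here (`JobStore`, `run_pipeline`, `handle_rest_request`) are hypothetical — the project is still in its design phase — but the shape is the point: the pipeline core never sees HTTP or queue payloads, and every adapter reduces to the same calls against one shared job store.

```python
# Sketch of a transport-agnostic pipeline core (all names hypothetical).
from dataclasses import dataclass, field


@dataclass
class JobStore:
    """Single job store shared by the REST and Postgres-queue adapters."""
    jobs: dict = field(default_factory=dict)

    def submit(self, job_id: str, document: bytes, use_case: str) -> None:
        self.jobs[job_id] = {
            "document": document,
            "use_case": use_case,
            "status": "queued",
            "result": None,
        }


def run_pipeline(store: JobStore, job_id: str) -> None:
    """Core step: OCR -> LLM extraction -> provenance mapping.
    Reduced here to a status transition for illustration."""
    job = store.jobs[job_id]
    job["status"] = "done"
    job["result"] = {"fields": {}}  # the real pipeline fills this in


def handle_rest_request(store: JobStore, body: dict) -> str:
    """A REST adapter (or a queue consumer) only translates its transport
    payload into the same two core calls."""
    store.submit(body["job_id"], body["document"], body["use_case"])
    run_pipeline(store, body["job_id"])
    return store.jobs[body["job_id"]]["status"]


store = JobStore()
status = handle_rest_request(
    store, {"job_id": "j1", "document": b"%PDF-", "use_case": "invoice"}
)
print(status)  # done
```

Because both adapters share one store, a job submitted over REST and a job pulled from the Postgres queue land in the same place and flow through the same core.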