# InfoXtractor (ix)
Async, on-prem, LLM-powered structured information extraction microservice.
Given a document (PDF, image, or text) and a named use case, ix returns a structured JSON result whose shape matches the use-case schema, together with per-field provenance (OCR segment IDs, bounding boxes, cross-OCR agreement flags) that lets the caller decide how much to trust each extracted value.
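As a rough illustration only (the `invoice` use case, field names, and provenance keys below are invented for this sketch, not taken from the spec), a completed result might be shaped like this:

```python
# Illustrative only: the real result schema is defined per use case in the
# MVP design doc; keys and values below are made up for this sketch.
example_result = {
    "use_case": "invoice",
    "status": "done",
    "fields": {
        "total_amount": {
            "value": "1249.00",
            "provenance": {
                "ocr_segment_ids": ["seg-17"],
                "bbox": [412, 880, 530, 902],   # coordinates of the source text on the page
                "cross_ocr_agreement": True,    # did independent OCR passes agree on this value?
            },
        },
    },
}
```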
Status: design phase. Implementation about to start.
- Full reference spec: `docs/spec-core-pipeline.md` (aspirational; MVP is a strict subset)
- MVP design: `docs/superpowers/specs/2026-04-18-ix-mvp-design.md`
- Agent / development notes: `AGENTS.md`
## Principles
- On-prem always. LLM = Ollama, OCR = local engines (Surya first). No OpenAI / Anthropic / Azure / AWS / cloud.
- Grounded extraction, not DB truth. ix returns best-effort fields + provenance; the caller decides what to trust.
- Transport-agnostic pipeline core. REST + Postgres-queue adapters in parallel on one job store.
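A minimal sketch of what "transport-agnostic" could mean in practice (names such as `JobStore`, `run_pipeline`, and the adapter functions are placeholders for this example, not the project's actual API): the core turns a job into a result, and both the REST adapter and the Postgres-queue worker reach it through the same job store.

```python
# Sketch only: illustrates the adapter idea, not the project's real interfaces.
from typing import Any, Protocol


class JobStore(Protocol):
    """Single job store shared by all transport adapters (hypothetical name)."""

    def enqueue(self, use_case: str, document: bytes) -> str: ...
    def claim_next(self) -> dict[str, Any] | None: ...
    def complete(self, job_id: str, result: dict[str, Any]) -> None: ...


def run_pipeline(use_case: str, document: bytes) -> dict[str, Any]:
    """Transport-agnostic core: OCR -> LLM extraction -> provenance assembly."""
    raise NotImplementedError  # hermetic-first core; real Surya/Ollama clients come later


def rest_adapter(store: JobStore, use_case: str, document: bytes) -> str:
    """REST handler: accept an upload, enqueue it, return the job id."""
    return store.enqueue(use_case, document)


def queue_worker(store: JobStore) -> None:
    """Postgres-queue worker: claim a job, run the same core, store the result."""
    job = store.claim_next()
    if job is not None:
        result = run_pipeline(job["use_case"], job["document"])
        store.complete(job["id"], result)
```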