Async on-prem LLM-powered structured information extraction microservice
Commit abee9cea7b by Dirk Riemann. All checks were successful:
tests / test (push) in 1m14s, tests / test (pull_request) in 1m10s.
feat(pipeline): GenAIStep — LLM call + provenance mapping (spec §6.3, §7, §9.2)
Assembles the prompt, picks the structured-output schema, calls the
injected GenAIClient, and maps any emitted segment_citations into
response.provenance. Reliability flags stay None here; ReliabilityStep
fills them in Task 2.7.

- System prompt = use_case.system_prompt plus, when provenance is on, the
  verbatim citation instruction from spec §9.2.
- User text = SegmentIndex.to_prompt_text() with [p1_l0]-style tags when
  provenance is on; otherwise the plain OCR flat text and request texts,
  joined.
- Response schema = UseCaseResponse directly, or a runtime
  create_model("ProvenanceWrappedResponse", result=(UCR, ...),
  segment_citations=(list[SegmentCitation], Field(default_factory=list)))
  when provenance is on (sketched below).
- Model = request override, falling back to the use-case default.
- Failure modes: httpx connection / timeout errors -> IX_002_000;
  pydantic.ValidationError -> IX_002_001 (see the mapping sketch below).
- Writes ix_result.result + ix_result.meta_data (model_name +
  token_usage); builds response.provenance via
  map_segment_refs_to_provenance when provenance is on.
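
A minimal sketch of the prompt and schema assembly, assuming pydantic models named UseCaseResponse and SegmentCitation as in the commit message (the citation instruction text and the SegmentCitation field shapes are placeholders, not the spec's actual wording):

```python
from pydantic import BaseModel, Field, create_model

CITATION_INSTRUCTION = "..."  # verbatim text from spec §9.2, elided here


class SegmentCitation(BaseModel):
    # Placeholder shape: which OCR segment a field value came from.
    field_path: str
    segment_id: str


def build_system_prompt(use_case_system_prompt: str, provenance: bool) -> str:
    # Provenance on: append the citation instruction to the use-case prompt.
    if provenance:
        return f"{use_case_system_prompt}\n\n{CITATION_INSTRUCTION}"
    return use_case_system_prompt


def build_response_schema(
    use_case_response: type[BaseModel], provenance: bool
) -> type[BaseModel]:
    # Provenance off: the LLM fills the use-case schema directly.
    if not provenance:
        return use_case_response
    # Provenance on: wrap the schema at runtime so the LLM also emits
    # segment_citations alongside the result.
    return create_model(
        "ProvenanceWrappedResponse",
        result=(use_case_response, ...),  # required field
        segment_citations=(list[SegmentCitation], Field(default_factory=list)),
    )
```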

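The error mapping, as an illustrative excerpt (IxError, the client's generate() signature, and the message strings are assumptions; only the code assignments IX_002_000 / IX_002_001 come from the commit message):

```python
import httpx
import pydantic


class IxError(Exception):
    """Assumed error type carrying the ix error code."""

    def __init__(self, code: str, message: str) -> None:
        super().__init__(f"{code}: {message}")
        self.code = code


async def call_llm(client, prompt: str, schema: type) -> object:
    try:
        # Hypothetical GenAIClient call; httpx.TimeoutException is a
        # subclass of httpx.HTTPError, so timeouts are covered too.
        return await client.generate(prompt, response_schema=schema)
    except (httpx.HTTPError, ConnectionError) as exc:
        raise IxError("IX_002_000", "GenAI backend unreachable") from exc
    except pydantic.ValidationError as exc:
        raise IxError("IX_002_001", "output failed schema validation") from exc
```
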
17 unit tests in tests/unit/test_genai_step.py cover: validate (ocr_only
skip, empty -> IX_001_000, text-only, ocr-text path), the process happy
path, system-prompt shape with/without the citation instruction, user
text tagged vs. plain, response schema plain vs. wrapped, provenance
mapping, error mapping (IX_002_000 + IX_002_001), and model selection
(request override + use-case default).
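
One of those error-mapping tests might look roughly like this (FakeGenAIClient, make_request, and the result surface are illustrative stand-ins for the actual fixtures in tests/unit/test_genai_step.py; assumes pytest-asyncio is configured):

```python
import httpx
import pytest


class FakeGenAIClient:
    """Stub standing in for the injected GenAIClient: always unreachable."""

    async def generate(self, *args, **kwargs):
        raise httpx.ConnectError("connection refused")


@pytest.mark.asyncio
async def test_connection_error_maps_to_ix_002_000():
    step = GenAIStep(client=FakeGenAIClient())   # step under test
    result = await step.process(make_request())  # hypothetical fixture
    assert result.error_code == "IX_002_000"
```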

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 11:18:44 +02:00
.forgejo/workflows ci: run on every push (not just main) so feat branches also get CI 2026-04-18 10:40:44 +02:00
docs Implementation plan for ix MVP 2026-04-18 10:34:30 +02:00
src/ix feat(pipeline): GenAIStep — LLM call + provenance mapping (spec §6.3, §7, §9.2) 2026-04-18 11:18:44 +02:00
tests feat(pipeline): GenAIStep — LLM call + provenance mapping (spec §6.3, §7, §9.2) 2026-04-18 11:18:44 +02:00
.env.example feat(scaffold): project skeleton with uv + pytest + forgejo CI 2026-04-18 10:36:43 +02:00
.gitignore feat(scaffold): project skeleton with uv + pytest + forgejo CI 2026-04-18 10:36:43 +02:00
.python-version feat(scaffold): project skeleton with uv + pytest + forgejo CI 2026-04-18 10:36:43 +02:00
AGENTS.md Initial design: on-prem LLM extraction microservice MVP 2026-04-18 10:23:17 +02:00
pyproject.toml feat(scaffold): project skeleton with uv + pytest + forgejo CI 2026-04-18 10:36:43 +02:00
README.md Initial design: on-prem LLM extraction microservice MVP 2026-04-18 10:23:17 +02:00
uv.lock feat(scaffold): project skeleton with uv + pytest + forgejo CI 2026-04-18 10:36:43 +02:00

InfoXtractor (ix)

Async, on-prem, LLM-powered structured information extraction microservice.

Given a document (PDF, image, text) and a named use case, ix returns a structured JSON result whose shape matches the use-case schema — together with per-field provenance (OCR segment IDs, bounding boxes, cross-OCR agreement flags) that lets the caller decide how much to trust each extracted value.
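
For illustration only (the field names below belong to a hypothetical invoice use case, and the exact provenance shape is defined by the spec, not this README), a result might look like:

```json
{
  "result": { "invoice_number": "2026-0421", "total_amount": 1890.50 },
  "provenance": {
    "invoice_number": {
      "segment_ids": ["p1_l0"],
      "bbox": [112, 48, 398, 72],
      "cross_ocr_agreement": true
    }
  }
}
```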

Status: design phase. Implementation about to start.

Principles

  • On-prem always. LLM = Ollama, OCR = local engines (Surya first). No OpenAI / Anthropic / Azure / AWS / cloud.
  • Grounded extraction, not DB truth. ix returns best-effort fields + provenance; the caller decides what to trust.
  • Transport-agnostic pipeline core. REST and Postgres-queue adapters run in parallel on one job store (see the sketch below).
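
A hedged sketch of that seam (JobStore, enqueue, and get_result are hypothetical names; no interface is public yet): both adapters would drive the pipeline core through one small job-store protocol.

```python
from typing import Protocol


class JobStore(Protocol):
    """Assumed seam: the REST adapter and the Postgres-queue adapter both
    enqueue jobs into, and poll results from, a single job store."""

    async def enqueue(self, job: dict) -> str: ...  # returns a job id
    async def get_result(self, job_id: str) -> dict | None: ...
```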