How Operantis works

A five-step workflow that turns enquiries into audit-ready responses—with AI assistance and human oversight at every stage.

1. Case arrives

An enquiry enters the system by email, from a case management system, or through manual entry. Operantis creates a case record and extracts the core question. A rough sketch of the resulting record follows the list below.

What gets captured
  • Original message and metadata (sender, date, subject)
  • Source system reference for traceability
  • Initial priority and category assignment
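
As an illustration, the capture step might produce a record like this. It is a minimal sketch under assumptions, not Operantis's actual schema; the `Case` dataclass and `ingest_email` helper are hypothetical names.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Case:
    """Hypothetical case record; field names are illustrative."""
    case_id: str
    source_ref: str           # reference to the originating system, for traceability
    sender: str
    subject: str
    received_at: datetime
    body: str
    priority: str = "normal"         # initial assignment; can be revised at review
    category: str = "uncategorised"  # initial assignment; can be revised at review

def ingest_email(case_id: str, source_ref: str, sender: str,
                 subject: str, body: str) -> Case:
    """Create a case record from an inbound email, preserving original metadata."""
    return Case(
        case_id=case_id,
        source_ref=source_ref,
        sender=sender,
        subject=subject,
        received_at=datetime.now(timezone.utc),
        body=body,
    )
```
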
2. Evidence retrieved

The system searches relevant sources (legislation, regulations, guidance material, and previously approved responses) to find passages that address the enquiry.

Evidence sources
  • Primary legislation and regulations
  • Policy documents and guidance material
  • Approved precedent responses (past decisions)
  • Internal knowledge base articles
Why this matters

Every piece of evidence gets a stable identifier and content hash. When the draft cites [E1], you can trace exactly what it's referencing—even months later.
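
One plausible implementation of the stable identifier and content hash, assuming SHA-256 and an E-numbered labelling scheme to match the [E1] citation style above (the names `EvidenceItem` and `make_evidence` are hypothetical):

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceItem:
    evidence_id: str   # stable label the draft can cite, e.g. "E1"
    source: str        # provenance: where the text was retrieved from
    text: str
    content_hash: str  # SHA-256 of the text at retrieval time

def make_evidence(label: int, source: str, text: str) -> EvidenceItem:
    # Hashing the exact retrieved text lets a citation be re-verified later:
    # if the stored text still produces the same digest, it is unchanged.
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return EvidenceItem(f"E{label}", source, text, digest)
```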

3. Draft generated

AI drafts a response using only the retrieved evidence. Every claim links to a specific source. The draft includes the model version and prompt version for reproducibility; a sketch of this record follows the list below.

What's recorded
  • Draft text with inline citation markers
  • Model identifier (e.g., Claude 3.5 Sonnet)
  • Prompt version and hash
  • Draft version number (v1, v2, etc.)
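
That recorded metadata might be modelled roughly as follows; the field names and the use of SHA-256 for the prompt hash are assumptions:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class DraftRecord:
    draft_version: int   # v1, v2, ...; each redraft or edit bumps this
    text: str            # draft body with inline citation markers such as [E1]
    model_id: str        # e.g. "claude-3-5-sonnet"
    prompt_version: str
    prompt_hash: str     # digest of the exact prompt text, for reproducibility

def record_draft(version: int, text: str, model_id: str,
                 prompt_version: str, prompt_text: str) -> DraftRecord:
    prompt_hash = hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()
    return DraftRecord(version, text, model_id, prompt_version, prompt_hash)
```
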
Citation integrity

If the AI generates a citation that doesn't match retrieved evidence, the system flags it. No hallucinated sources make it through to review.
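
A check like this can be as simple as comparing the markers found in the draft against the IDs of the retrieved evidence. A minimal sketch, assuming the [E1] marker format shown above:

```python
import re

CITATION = re.compile(r"\[E(\d+)\]")  # matches inline markers like [E1] or [E12]

def check_citations(draft_text: str, evidence_ids: set[str]) -> list[str]:
    """Return any cited markers that do not match a retrieved evidence item."""
    cited = {f"E{n}" for n in CITATION.findall(draft_text)}
    return sorted(cited - evidence_ids)

# "E9" was never retrieved, so it is flagged before the draft reaches review.
assert check_citations("See [E1] and [E9].", {"E1", "E2"}) == ["E9"]
```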

4. Human review

A qualified reviewer reads the draft, checks citations against evidence, edits if needed, and makes a decision: approve, return for rework, or escalate.

Reviewer actions
  • Edit: Modify the draft directly (creates new version)
  • Approve: Confirm response is ready to send
  • Return: Send back for redrafting with notes
  • Escalate: Flag for senior review or specialist input
Approval requirements

Approval requires completing a checklist and providing a rationale. This isn't optional—the system enforces it to ensure defensible decision-making.
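
Enforcement of that rule might look like the guard below; the checklist items, exception type, and function name are all illustrative:

```python
class ApprovalError(Exception):
    """Raised when an approval attempt does not meet the requirements."""

def approve(checklist: dict[str, bool], rationale: str) -> None:
    # Every checklist item must be explicitly attested...
    unticked = [item for item, done in checklist.items() if not done]
    if unticked:
        raise ApprovalError("Checklist incomplete: " + ", ".join(unticked))
    # ...and a written rationale is mandatory, not optional.
    if not rationale.strip():
        raise ApprovalError("A rationale is required to approve.")
    # A real system would persist the approval and attestations here.

approve(
    {"citations verified": True, "evidence current": True},
    "All claims trace to retrieved evidence; wording follows approved precedent.",
)
```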

5. Audit pack created

On approval, the complete decision record is packaged: original enquiry, evidence set, draft versions, reviewer actions, and final response. Exportable as a single bundle (sketched after the list below).

The audit pack contains
  • Original case and all messages
  • Evidence items with content hashes and provenance
  • All draft versions with model/prompt metadata
  • Reviewer identity, rationale, and checklist attestations
  • Timestamps for every action
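
Packaging could be as simple as serialising those records into one document. A sketch, assuming JSON as the bundle format (the source only says "a single bundle"):

```python
import json
from datetime import datetime, timezone

def build_audit_pack(case: dict, evidence: list[dict], drafts: list[dict],
                     review: dict, final_response: str) -> str:
    """Assemble the complete decision record as one exportable JSON document."""
    pack = {
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "case": case,              # original enquiry and all messages
        "evidence": evidence,      # items with content hashes and provenance
        "drafts": drafts,          # every version, with model/prompt metadata
        "review": review,          # reviewer identity, rationale, attestations
        "final_response": final_response,
    }
    return json.dumps(pack, indent=2, sort_keys=True)
```
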
The compounding asset

Approved responses become precedents—searchable for future cases. Over time, the system gets faster and more consistent as your decision history grows.
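
The precedent lookup could start from something as simple as term overlap against approved responses; a real deployment would likely use full-text or semantic search, so treat this purely as a sketch:

```python
def relevance(query: str, precedent_text: str) -> float:
    """Crude score: fraction of query terms that appear in the precedent."""
    terms = set(query.lower().split())
    text = precedent_text.lower()
    return sum(t in text for t in terms) / len(terms) if terms else 0.0

def find_precedents(query: str, approved: dict[str, str], top_n: int = 3) -> list[str]:
    """Rank approved responses (case id -> text) by relevance to a new enquiry."""
    ranked = sorted(approved, key=lambda cid: relevance(query, approved[cid]),
                    reverse=True)
    return ranked[:top_n]
```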

Design principles

AI drafts, humans decide

The AI is an assistant, not an authority. Every response requires explicit human approval before it leaves the system.

No black boxes

Every claim traces to evidence. Every draft records its model and prompt. If someone asks "why did you say that?", the answer exists.

Consistency compounds

Approved responses become precedents for future cases. The more you use it, the more consistent and efficient the system becomes.

Built for scrutiny

Designed for environments where decisions face FOI requests, ombudsman complaints, and ministerial enquiries. The audit trail is the product.

See it in action

Walk through the reviewer interface with a sample regulatory case.

Explore the prototype
Get in touch