Commitment Radar API

A deterministic layer for understanding what language commits to -- and what it doesn't.

Why we built this

Modern software -- especially AI systems -- produces a lot of fluent, confident language.

Logs. Summaries. Model outputs. Docs. Status messages.

Most of the time, the problem isn't that this language is wrong.

It's that it can be reasonably interpreted as stronger, broader, or more reliable than the source actually supports.

That gap -- between what's written and what a reader might rely on -- causes real issues.

We built Commitment Radar to make that gap visible -- without judging correctness, intent, or truth.


What Commitment Radar does

Commitment Radar is an output interpretation API.

You send it text that your system already produced -- and it returns a structured, deterministic map of where that text commits to something, and where it doesn't.

It does not judge correctness, infer intent, or change how your system behaves.

Every result is rule-based, replayable, and auditable.

Same input. Same rules. Same output. Every time.
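To make the determinism claim concrete, here is a toy rule-based scan. The function name, marker lists, and record fields below are illustrative assumptions -- they are not Commitment Radar's actual ruleset, only a sketch of what "same input, same rules, same output" means for a pure, rule-based interpreter:

```python
# Toy sketch of a deterministic, rule-based commitment scan.
# The marker lists and record fields are hypothetical, not the real ruleset.

ABSOLUTE_MARKERS = ("always", "never", "guaranteed", "will")
HEDGED_MARKERS = ("may", "might", "should", "typically")

def scan_commitments(text: str) -> list[dict]:
    """Return one record per commitment marker, in order of appearance."""
    records = []
    for word in text.lower().split():
        token = word.strip(".,;:!?")
        if token in ABSOLUTE_MARKERS:
            records.append({"marker": token, "strength": "absolute"})
        elif token in HEDGED_MARKERS:
            records.append({"marker": token, "strength": "hedged"})
    return records

sample = "Backups always succeed, but restores may take longer."
# A pure function of its input: calling it twice yields identical records.
assert scan_commitments(sample) == scan_commitments(sample)
```

Because the scan depends only on the input text and a fixed rule table -- never on who sent the text or when -- every result can be replayed and audited byte-for-byte.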


How it works (at a high level)

  1. You generate text (a model output, log line, summary, documentation, etc.).
  2. You send it to the Commitment Radar API along with a lens and ruleset.
  3. The API interprets the text deterministically using a fixed set of transparent rules.
  4. You receive structured "commitment boundary" records that explain where reliance is justified -- and where it isn't.
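The four steps above might look like this in client code. Everything concrete here -- the payload fields, the lens and ruleset names, and the shape of the response -- is an assumption for illustration; the response is mocked rather than fetched, so consult the actual API reference for real field names:

```python
import json

# Step 2: package text your system already produced, plus a lens and
# a ruleset. Field names and values here are hypothetical.
payload = {
    "text": "Deploys are always safe to retry.",
    "lens": "operations",
    "ruleset": "default-v1",
}
request_body = json.dumps(payload)

# Step 4: the API returns structured "commitment boundary" records.
# This is a mocked example of such a response, not real API output.
response_body = json.dumps({
    "records": [
        {
            "span": "always safe to retry",
            "strength": "absolute",
            "reliance": "not supported by the text alone",
        }
    ]
})

# Each record marks where reliance is justified -- and where it isn't.
for record in json.loads(response_body)["records"]:
    print(record["span"], "->", record["reliance"])
```

Note that the client sends only the text plus a lens and ruleset identifier -- no vendor, model, or source metadata -- which is what lets the service ignore who sent the text.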

Nothing is inferred from vendors, models, or sources.

Nothing changes based on who sent the text.


Why this is different

Most systems try to judge language.

Commitment Radar does something simpler -- and more reliable:

It shows how language can be interpreted, not whether it's correct.

That makes it deterministic, replayable, and auditable.

It's closer to a compiler or linter than a moderator or reviewer.


What you get

No judgments of correctness, intent, or truth -- just clear boundaries around what language supports, and what it doesn't.


In one sentence

Commitment Radar helps you understand where language creates expectations -- without deciding what's true or changing how your system behaves.