Don’t let vague language become binding commitments
Commitment Radar maps where text can invite reliance beyond what it explicitly promises, so teams can review risk before disputes, audits, or regulatory reviews arise.
Most disputes don’t come from broken promises.
They come from language that allows different interpretations.
Marketing → Contract
“Our system typically processes within 24 hours”
LLM Output → Reliance
“I aim to provide accurate information”
Model Cards → Regulator
“We expect 95% accuracy in most cases”
Help Docs → Enterprise Dispute
“Updates usually ship within a week”
Here’s what we catch
We take reasonable measures to protect your data and aim to notify users of breaches within 72 hours. Our systems are designed for high availability and we expect minimal downtime.
Flagged phrases:
- "reasonable measures" → undefined standards
- "aim to notify" → aspirational vs guaranteed language
- "within 72 hours" → qualified time boundaries
- "designed for high availability" → intent vs outcome language
- "expect minimal downtime" → predictive vs binding statements

Reliance Risk summary:
- Language invites reliance without firm commitments.

Deterministic hash: 3d1c...e9a7
Ruleset version: facet_a_inference_mechanics v0.1
Timestamp: 2026-02-08T00:00:00Z
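The flagged-phrase output above can be approximated with a toy deterministic matcher. This is an illustration only: the phrase table, categories, and field names are assumptions for the sketch, not Commitment Radar’s actual ruleset.

```python
import hashlib
import json
import re

# Toy hedge-phrase table (illustrative only; not the product's ruleset).
HEDGE_PATTERNS = {
    r"\breasonable measures\b": "undefined standards",
    r"\baim to\b": "aspirational vs guaranteed language",
    r"\bwithin \d+ hours\b": "qualified time boundaries",
    r"\bdesigned for\b": "intent vs outcome language",
    r"\bexpect(?:ed)?\b": "predictive vs binding statements",
}

def flag_phrases(text: str) -> dict:
    """Flag hedge phrases and hash the canonical result for replayability."""
    findings = [
        {"phrase": m.group(0), "category": category, "offset": m.start()}
        for pattern, category in HEDGE_PATTERNS.items()
        for m in re.finditer(pattern, text, re.IGNORECASE)
    ]
    findings.sort(key=lambda f: f["offset"])  # deterministic ordering
    canonical = json.dumps(findings, sort_keys=True)
    return {
        "findings": findings,
        "deterministic_hash": hashlib.sha256(canonical.encode()).hexdigest(),
    }
```

Because the rules are fixed and the findings are canonically serialized before hashing, the same input always yields the same hash, which is what makes a result replayable.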
Deterministic interpretation, not judgment
What it does
- Records commitment boundaries in text
- Identifies hedge words that reduce obligation
- Maps where readers might reasonably over-infer
- Produces cryptographically signed, replayable results
- Tracks interpretation changes across versions
What it never does
- Judge whether claims are true
- Measure accuracy, safety, or performance
- Modify, block, or rewrite outputs
- Enforce policy or recommend actions
- Infer unstated intent
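A client can sanity-check the “cryptographically signed, replayable” property locally. The sketch below assumes an HMAC-SHA256 signature over the canonical JSON body with a shared secret; the actual signing scheme is defined by the API, so treat this only as an illustration of the verification pattern.

```python
import hashlib
import hmac
import json

def verify_signature(result: dict, shared_secret: bytes) -> bool:
    """Recompute the HMAC over the canonical body and compare (hypothetical scheme)."""
    body = {k: v for k, v in result.items() if k != "signature"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    expected = hmac.new(shared_secret, canonical, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(expected, result["signature"])
```

Canonical serialization (sorted keys, fixed separators) matters here: two semantically identical JSON bodies must byte-match before any signature check can succeed.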
How Commitment Outcomes Are Classified
Commitment Radar maps how language shifts the reader’s reasonable interpretation of obligation. It does not score risk or judge truth.
The 8 outcomes:
Terms appear concrete but are later narrowed or left undefined.
Intent or opinion reads like a measurable external claim.
Descriptive language implies a guarantee or outcome.
Limited claims read as universal without clear boundaries.
Qualifications get lost and sound like guarantees.
Omissions create implied commitments next to strong claims.
Partial control reads as full responsibility.
No defined interpretation pattern was triggered.
Built for teams that publish at scale and can’t afford reinterpretation later
1. Text Produced
- LLM outputs
- Legal documents
- Marketing copy
- API responses
2. Interpret
- Commitment Radar
- Deterministic rules
- No human judgment
3. Preserve
- Signed record
- Version control
- Audit / dispute defense
When teams use this
- Pre-publish checks
- CI/CD snapshots
- Archival of language state
- Dispute replay with locked rulesets
- Diffing between document versions
Who needs this most
Teams publishing high-stakes language at scale
- AI product companies with massive LLM output volume
- SaaS companies with templated legal language
- Regulated entities with audit exposure
Secondary users
- Legal (evidentiary records)
- Compliance (language drift tracking)
- AI safety research (pattern analysis)
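A pre-publish or CI/CD check can reduce to a snapshot comparison. This is a hypothetical sketch: it assumes each analysis run carries a `deterministic_hash` field and stores the approved baseline in a local file; the field name and workflow are assumptions, not the documented API.

```python
import json
import pathlib

SNAPSHOT = pathlib.Path("commitment_snapshot.json")

def gate(current_run: dict) -> int:
    """Return a CI exit code: 0 if interpretation is unchanged, 1 if it drifted."""
    if not SNAPSHOT.exists():
        SNAPSHOT.write_text(json.dumps(current_run, indent=2))
        return 0  # first run: record the approved baseline
    baseline = json.loads(SNAPSHOT.read_text())
    if baseline["deterministic_hash"] != current_run["deterministic_hash"]:
        print("Interpretation drift detected; review before publishing.")
        return 1
    return 0
```

Wiring the return value into the build’s exit code turns language drift into a failing check rather than a post-publication surprise.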
Request Demo API Key
Test it on your riskiest language before committing.
- Instant email delivery
- 100 requests/day
- No credit card
- Full API access (including versioning & diffs)
Run this on your text in under 5 minutes
Check whether your Terms of Service create unintended obligations.
# Python example (version tagging + signature + diff)
import requests

API_KEY = "assr_demo_..."
BASE_URL = "https://api.practice-wallet.com"

# Pin the lens and ruleset versions so both runs are replayable.
lens = {"lens_id": "literal_technical_reader", "lens_version": "v0.1"}
ruleset = {"ruleset_id": "facet_a_inference_mechanics", "ruleset_version": "v0.1"}

payload_v1 = {
    "artifact_text": "Terms v1...",
    "artifact_type": "FREE_TEXT",
    "lens": lens,
    "ruleset": ruleset,
}
payload_v2 = {**payload_v1, "artifact_text": "Terms v2..."}

headers = {"Authorization": f"Bearer {API_KEY}"}

# Interpret both versions, then diff the two signed runs.
run_v1 = requests.post(f"{BASE_URL}/api/interpret", json=payload_v1, headers=headers).json()
run_v2 = requests.post(f"{BASE_URL}/api/interpret", json=payload_v2, headers=headers).json()

diff_payload = {
    "artifact_identifier": "terms-of-service",
    "analysis_run_id_a": run_v1["analysis_run_id"],
    "analysis_run_id_b": run_v2["analysis_run_id"],
}
diff = requests.post(f"{BASE_URL}/api/diff", json=diff_payload, headers=headers).json()
print(diff)