Practice Wallet Labs

Don’t let vague language become binding commitments

Commitment Radar helps map where text can invite reliance beyond what it explicitly promises — so teams can review risk before disputes, audits, or regulatory reviews.

  • Deterministic — same input, same output, every time
  • No AI guesswork — rule-based interpretation only
  • Audit-ready — cryptographically signed results
  • Version-controlled — track changes across releases

Most disputes don’t come from broken promises

They come from language that allowed different interpretations.

Marketing → Contract

Our system typically processes within 24 hours

Marketing copy → SLA dispute
Claimed breach of commitment

LLM Output → Reliance

I aim to provide accurate information

Chatbot response → user reliance
Plaintiff alleges promise of accuracy

Model Cards → Regulator

We expect 95% accuracy in most cases

Transparency doc → regulatory review
"Most cases" treated as guarantee

Help Docs → Enterprise Dispute

Updates usually ship within a week

Old support doc → contract claim
No defensible meaning of "usually"
In each case, the language was careful. The interpretation wasn’t.

Here’s what we catch

Example Input
We take reasonable measures to protect your data and aim to
notify users of breaches within 72 hours. Our systems are
designed for high availability and we expect minimal downtime.
Example Output
Flagged phrases:
- "reasonable measures" → undefined standards
- "aim to notify" → aspirational vs guaranteed language
- "within 72 hours" → qualified time boundaries
- "designed for high availability" → intent vs outcome language
- "expect minimal downtime" → predictive vs binding statements

Reliance Risk summary:
- Language invites reliance without firm commitments.

Deterministic hash: 3d1c...e9a7
Ruleset version: facet_a_inference_mechanics v0.1
Timestamp: 2026-02-08T00:00:00Z
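The example output above can also be consumed programmatically. A minimal sketch of turning a response into a review checklist; the field names (`flagged_phrases`, `phrase`, `issue`) are assumptions modeled on the output shown, not confirmed API fields:

```python
# Sketch: group flagged phrases by the interpretation issue they trigger.
# The response shape is an assumption based on the example output above;
# check the API documentation for the real field names.

def summarize_flags(response):
    """Return {issue: [phrases]} for every flagged phrase in a response."""
    summary = {}
    for flag in response.get("flagged_phrases", []):
        summary.setdefault(flag["issue"], []).append(flag["phrase"])
    return summary

sample_response = {
    "flagged_phrases": [
        {"phrase": "reasonable measures", "issue": "undefined standards"},
        {"phrase": "aim to notify", "issue": "aspirational vs guaranteed language"},
        {"phrase": "within 72 hours", "issue": "qualified time boundaries"},
    ],
    "deterministic_hash": "3d1c...e9a7",
}

print(summarize_flags(sample_response))
```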

Deterministic interpretation, not judgment

What it does

  • Records commitment boundaries in text
  • Identifies hedge words that reduce obligation
  • Maps where readers might reasonably over-infer
  • Produces cryptographically signed, replayable results
  • Tracks interpretation changes across versions
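The "replayable" property rests on canonical hashing: identical content always produces an identical hash, regardless of how the result was serialized. A hypothetical local sketch of the idea (the service's actual hash construction is not specified here):

```python
import hashlib
import json

def canonical_hash(result: dict) -> str:
    """Hash a result over its canonical JSON form: key order and
    whitespace are fixed, so identical content yields identical hashes."""
    canonical = json.dumps(result, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

run_a = {"ruleset_version": "v0.1", "flags": ["aim to notify"]}
run_b = {"flags": ["aim to notify"], "ruleset_version": "v0.1"}  # same content, new key order

assert canonical_hash(run_a) == canonical_hash(run_b)
```

Because the hash is computed over content rather than bytes on the wire, a dispute years later can replay the same text against the same locked ruleset and verify nothing changed.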

What it never does

  • Judge whether claims are true
  • Measure accuracy, safety, or performance
  • Modify, block, or rewrite outputs
  • Enforce policy or recommend actions
  • Infer unstated intent
Commitment Radar doesn’t tell you what to write. It tells you what you wrote — from a reliance perspective.

How Commitment Outcomes Are Classified

Commitment Radar maps how language shifts the reader’s reasonable interpretation of obligation. It does not score risk or judge truth.

The 8 outcomes
Definition narrowing

Terms appear concrete but are later narrowed or left undefined.

Subjective → objective

Intent or opinion reads like a measurable external claim.

Evaluation → assurance

Descriptive language implies a guarantee or outcome.

Scope collapse

Limited claims read as universal without clear boundaries.

Conditional → absolute

Qualifications get lost and sound like guarantees.

Absence → presence

Omissions create implied commitments next to strong claims.

Control → total authority

Partial control reads as full responsibility.

None

No defined interpretation pattern was triggered.

Outcome categories describe interpretation patterns. They are not severity levels, risk scores, or recommendations.
View full ruleset →

Built for teams that publish at scale and can’t afford reinterpretation later

1. Text Produced

  • LLM outputs
  • Legal documents
  • Marketing copy
  • API responses

2. Interpret

  • Commitment Radar
  • Deterministic rules
  • No human judgment

3. Preserve

  • Signed record
  • Version control
  • Audit / dispute defense
  • Pre-publish checks
  • CI/CD snapshots
  • Archival of language state
  • Dispute replay with locked rulesets
  • Diffing between document versions
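A pre-publish check in the pipeline above might look like the following sketch. The response fields are assumptions modeled on the example output earlier on this page; in a real CI job the `analysis` dict would come from the `/api/interpret` call shown in the API example below.

```python
# Sketch of a CI pre-publish gate: block the build unless every flagged
# phrase has been reviewed and explicitly allow-listed. Field names are
# assumptions, not confirmed API fields.

def gate(analysis, allowed=frozenset()):
    """Return True when the text is safe to publish: all flagged
    phrases appear on the reviewed allow-list."""
    flagged = {f["phrase"] for f in analysis.get("flagged_phrases", [])}
    return flagged <= set(allowed)

analysis = {"flagged_phrases": [{"phrase": "aim to notify"}]}

assert gate(analysis) is False                      # unreviewed hedge blocks the build
assert gate(analysis, {"aim to notify"}) is True    # reviewed and allow-listed: passes
```

The allow-list forces a human decision once per phrase, while the deterministic analysis keeps that decision stable across builds.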

Who needs this most

Teams publishing high-stakes language at scale

  • AI product companies with massive LLM output volume
  • SaaS companies with templated legal language
  • Regulated entities with audit exposure

Secondary users

  • Legal (evidentiary records)
  • Compliance (language drift tracking)
  • AI safety research (pattern analysis)

When teams use this

Pre-display LLM checks
Regulatory review defense
Terms of Service diffing
Dispute evidence replay

Request Demo API Key

Test it on your riskiest language before committing.

  • Instant email delivery
  • 100 requests/day
  • No credit card
  • Full API access (including versioning & diffs)

Run this on your text in under 5 minutes

Check whether your Terms of Service create unintended obligations.

# Python example (version tagging + signature + diff)
import requests

API_KEY = "assr_demo_..."  # demo key, delivered by email
BASE_URL = "https://api.practice-wallet.com"

# Analyze two versions of the same document under a locked lens + ruleset,
# so any change in interpretation is attributable to the text alone.
def make_payload(text):
    return {
        "artifact_text": text,
        "artifact_type": "FREE_TEXT",
        "lens": {"lens_id": "literal_technical_reader", "lens_version": "v0.1"},
        "ruleset": {"ruleset_id": "facet_a_inference_mechanics", "ruleset_version": "v0.1"},
    }

headers = {"Authorization": f"Bearer {API_KEY}"}

run_v1 = requests.post(f"{BASE_URL}/api/interpret", json=make_payload("Terms v1..."), headers=headers).json()
run_v2 = requests.post(f"{BASE_URL}/api/interpret", json=make_payload("Terms v2..."), headers=headers).json()

# Diff the two signed runs to see which flagged phrases changed between versions.
diff_payload = {
    "artifact_identifier": "terms-of-service",
    "analysis_run_id_a": run_v1["analysis_run_id"],
    "analysis_run_id_b": run_v2["analysis_run_id"],
}

diff = requests.post(f"{BASE_URL}/api/diff", json=diff_payload, headers=headers).json()
print(diff)

Documentation

Built for auditability

Deterministic
Cryptographically signed
Version-controlled rulesets