โ† Claude Skills Libraryยท05. Measuring whether your UX actually works16 Apr 2026David Olsson

05. Measuring whether your UX actually works

#use-case #ux #orchestrator


You shipped the feature. The docs exist. The API responds. By every engineering metric, the product works. But users aren't getting productive. They sign up, poke around, and leave. Or they stay but never reach the thing you built for them. The code works. The experience doesn't.

This is the gap that engineering metrics can't see. Test coverage doesn't measure whether a user can find the getting-started guide. Uptime doesn't measure whether the error message told them what to do next. Response time doesn't measure whether the API's mental model matches the user's mental model.

The three questions nobody asks together

Most teams evaluate their product from one angle at a time. A developer reads the code and sees capabilities. A writer reads the docs and sees content. A designer uses the product and sees interactions. Each perspective is correct and incomplete.

The real picture lives at the intersection of three questions:

What does the code say? Features, capabilities, error handling, API surface. What the product can actually do, including things nobody documented and things that are documented but don't work as described.

What does the content say? Docs, README, error messages, CLI help text, marketing copy, changelog. What the product claims to do and how it guides (or fails to guide) the user toward doing it.

What does the UX say? Workflows, onboarding, persona fit, headless experience. What the user actually experiences when they try to accomplish something.

When these three align, users succeed. When they diverge — features the code supports but content doesn't mention, content promises the code doesn't fulfill, experiences that are technically functional but practically painful — users struggle.

The /ux-content-audit pipeline examines all three simultaneously and shows you exactly where they converge and where they don't.

Seven analyzers, one scorecard

The pipeline runs seven specialist analyzers, each examining a different dimension of the experience:

Workflow Documentor (Report 01) — Maps every user workflow from the codebase into Mermaid diagrams. Entry points, decision nodes, error paths, dead ends. Identifies the critical path — the shortest route from "I found this product" to "I produced real value." Flags every workflow that exceeds three steps.

Persona Modeler (Report 02) — Derives user personas from code evidence, not marketing fiction. A CLI with no help text serves experts. A guided wizard serves beginners. The code reveals its users through the decisions it makes. Each persona gets a journey map showing where their mental model matches or clashes with the product.

Onboarding Auditor (Report 03) — Walks the actual first-use path step by step. Counts every action, estimates every delay, identifies every friction point. Scores Time to First Value against the Three-Step Principle: any user should go from zero to productive in three steps or fewer. Then proposes a Three-Step redesign for every flow that exceeds the target.

Content Auditor (Report 04) — Inventories every piece of content the user encounters — README, docs, API responses, error messages, CLI help, config comments — and checks for terminology consistency, mental model alignment, tone coherence, content gaps, and staleness. The most common finding: the same concept uses different words on different surfaces.

UX-Code Bridge (Report 05) — This is the unique report. It maps three categories of misalignment: hidden capabilities (code supports it, content doesn't mention it — these are quick wins), broken promises (content claims it, code doesn't deliver — these are trust destroyers), and experience gaps (code and content agree, but the workflow is painful). No other analysis tool shows this three-way gap.

Analytics Architect (Report 06) — Designs the measurement layer: success events, failure points, instrumentation plan, dashboard specifications, and alert thresholds. Without measurement, experience improvements are guesses. This report tells you what to track and where to put the tracking code.

Headless UX Advisor (Report 07) — For products without a traditional UI. Evaluates the API as an interface, the CLI as a workflow, error responses as feedback, and documentation as onboarding. Identifies extension points where UI can be added later without restructuring. Recommends headless UX patterns: smart defaults, progressive disclosure, error-as-teacher.
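The UX-Code Bridge's three-way comparison can be pictured as set operations. This is a hypothetical sketch, not the pipeline's actual data model: the `bridge_report` function and the capability names are invented for illustration, and the real analyzers work with far richer evidence than plain strings.

```python
def bridge_report(code: set[str], content: set[str],
                  painful: set[str]) -> dict[str, set[str]]:
    """Categorize misalignments between code, content, and experience."""
    return {
        # Code supports it, content never mentions it: quick wins.
        "hidden_capabilities": code - content,
        # Content claims it, code doesn't deliver: trust destroyers.
        "broken_promises": content - code,
        # Both agree it exists, but using it hurts: experience gaps.
        "experience_gaps": (code & content) & painful,
    }

report = bridge_report(
    code={"bulk export", "webhooks", "api keys"},
    content={"api keys", "webhooks", "sso"},
    painful={"webhooks"},
)
print(report["hidden_capabilities"])  # {'bulk export'}
print(report["broken_promises"])      # {'sso'}
print(report["experience_gaps"])      # {'webhooks'}
```

The design point is that none of the three categories is visible from a single perspective; each one requires comparing at least two of the three analyses.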

The Experience Readiness scorecard

After all seven analyses complete, the orchestrator scores each dimension against defined thresholds:

First Contact Clarity: ≥ 80/100
Time to First Value: ≤ 3 steps
Workflow Completeness: ≥ 75/100
Content Coherence: ≥ 75/100
Persona Alignment: ≥ 70/100
Measurement Readiness: ≥ 60/100
Headless UX Coherence: ≥ 70/100

Four verdict levels: Experience Ready (all pass), Experience Gaps (1-2 below threshold), Experience Debt (3+ below), or Experience Blocking (first contact or TTFV fails — users can't even get started).
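The verdict logic reads roughly like the sketch below. The threshold values are taken from the scorecard above, but the dictionary keys and the `verdict` function are illustrative names, not the orchestrator's real API.

```python
THRESHOLDS = {
    "first_contact_clarity": 80,
    "time_to_first_value_steps": 3,  # a ceiling: fewer steps is better
    "workflow_completeness": 75,
    "content_coherence": 75,
    "persona_alignment": 70,
    "measurement_readiness": 60,
    "headless_ux_coherence": 70,
}

def verdict(scores: dict[str, int]) -> str:
    """Map per-dimension scores to one of the four verdict levels."""
    failures = set()
    for dim, limit in THRESHOLDS.items():
        # TTFV is a step count with a ceiling; every other dimension is a floor.
        if dim == "time_to_first_value_steps":
            failed = scores[dim] > limit
        else:
            failed = scores[dim] < limit
        if failed:
            failures.add(dim)
    # Failing first contact or TTFV blocks everything else.
    if {"first_contact_clarity", "time_to_first_value_steps"} & failures:
        return "Experience Blocking"
    if not failures:
        return "Experience Ready"
    return "Experience Gaps" if len(failures) <= 2 else "Experience Debt"

print(verdict({
    "first_contact_clarity": 85,
    "time_to_first_value_steps": 2,
    "workflow_completeness": 80,
    "content_coherence": 76,
    "persona_alignment": 72,
    "measurement_readiness": 65,
    "headless_ux_coherence": 71,
}))  # Experience Ready
```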

The Three-Step Principle

The design principle threaded through every analyzer: any user should go from zero to productive in three steps or fewer. This applies recursively — first use is three steps, each feature is three steps, error recovery is three steps.

If a workflow requires more than three steps, it should be automated (reduce steps), chunked (break into sub-workflows of three or fewer), or redesigned (progressive disclosure, smart defaults, elimination of unnecessary decisions).
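The chunking remedy is easy to picture: split any oversized workflow into sub-workflows that each fit the limit. A minimal sketch, with an invented onboarding flow as the example:

```python
MAX_STEPS = 3  # the Three-Step Principle's limit

def chunk_workflow(steps: list[str], size: int = MAX_STEPS) -> list[list[str]]:
    """Break an oversized workflow into sub-workflows within the step limit."""
    return [steps[i:i + size] for i in range(0, len(steps), size)]

onboarding = ["sign up", "verify email", "create key",
              "install CLI", "configure", "first call"]
print(chunk_workflow(onboarding))
# [['sign up', 'verify email', 'create key'],
#  ['install CLI', 'configure', 'first call']]
```

In practice chunk boundaries should fall at natural milestones (each sub-workflow should end with something the user can verify worked), not just every third step.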

The onboarding auditor enforces this most directly, but every analyzer references it. The persona modeler checks whether three steps is realistic for each persona. The content auditor checks whether the docs guide users through three-step paths. The headless UX advisor recommends patterns that achieve three-step flows without a UI.

The delta cycle

Like the code audit, this pipeline supports re-assessment. Run it once, implement improvements, run it again. The delta report tracks score changes per dimension, TTFV step count changes, workflow coverage delta, content contradiction resolution rate, and the net trajectory toward Experience Readiness.
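At its core, the per-dimension part of the delta report is a diff between two scorecards. A minimal sketch (score dimensions only; a step count like TTFV would need inverted polarity, since fewer steps is the improvement):

```python
def score_delta(before: dict[str, int], after: dict[str, int]) -> dict[str, int]:
    """Per-dimension change between two audit runs (positive = improved)."""
    return {dim: after[dim] - before[dim] for dim in before}

run1 = {"content_coherence": 62, "persona_alignment": 70}
run2 = {"content_coherence": 78, "persona_alignment": 69}
print(score_delta(run1, run2))
# {'content_coherence': 16, 'persona_alignment': -1}
```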

When to run it

Before launch. If the experience isn't ready, the product isn't ready — regardless of what the code does.

After major feature additions. New features create new workflows. New workflows create new onboarding paths. New paths create new friction.

When users aren't converting. If signups happen but activation doesn't, the experience is the bottleneck. This audit finds the specific friction points.

When building headless. If your product has no UI, the experience still matters — it just lives in different surfaces. The headless UX advisor is built specifically for this.


Resources

Pipeline reference: /ux-content-audit — 11 reports, 7 analyzers, Experience Readiness definition.

Key skills in this pipeline:

Related reading:

Download: Full toolkit (252KB) — all 16 commands, all 11 skills, installs in 30 seconds.


Part of the Claude Skills Library.
