Building the Worksona Claude Code Toolkit: 48 Reports, One Command

David Olsson · 8 Apr 2026

#claude-code #developer-tools #documentation #automation #code-audit #ai-tools #workflow

→ Visit the Worksona Claude Code Toolkit

*The toolkit landing page: "Document, audit, and understand any codebase", with a dark-themed interface featuring 16 slash commands that generate 48 professional reports across 6 workstreams, from project documentation and infrastructure costs to security audits and competitive MOAT analysis.*


When I started using Claude Code for development, I noticed a pattern: I kept asking it to generate the same types of documentation and audit reports for every project I touched. Project overviews. Infrastructure cost analysis. Security audits. Code quality reviews.

Each time, I'd craft a prompt, wait for the output, refine it, and save it somewhere. Then move to the next project and do it all over again with slight variations.

That repetition became the Worksona Claude Code Toolkit: a suite of 16 slash commands that generate 48 professional reports across six automated workstreams.

How It Started: Repetition as Signal

The initial spark came from auditing scsiwyg itself. I needed to document the architecture for a technical specification, analyze infrastructure costs for scaling projections, and run a security audit before opening it to more users.

Instead of writing these documents manually (which would take days), I used Claude Code to generate them. The results were comprehensive, sometimes too comprehensive. A single infrastructure audit generated seven interconnected reports: service topology, cost projections from MVP to 10K users, LOC-to-cost correlation, vendor-agnostic requirements specs, and more.

I realized three things:

  1. The prompts I was writing were reusable: with the right structure, they worked across different stacks, languages, and project sizes
  2. The orchestration pattern was valuable: launching multiple specialist agents in parallel produced better coverage than sequential analysis
  3. The output needed standardization: numbered reports in predictable locations made them easy to commit, share, and reference

So I packaged them as Claude Code slash commands.

Why Consolidate? The Multi-Agent Orchestration Problem

Early versions were scattered individual skills. You'd run /security-audit-auth, then /security-audit-api, then /security-audit-infrastructure. Each one was useful, but stitching them together was manual work.

The breakthrough was building orchestrators: meta-skills that coordinate multiple specialist agents in parallel and write structured reports to predictable locations.

For example, /code-audit launches five specialist auditors in parallel:

  • Consistency auditor – naming conventions, file organization, error handling patterns
  • Repetition detector – DRY violations, copy-paste code, duplicated config values
  • Security auditor – hardcoded secrets, input validation gaps, auth weaknesses
  • Pattern optimizer – inappropriate abstractions, anti-patterns, type safety gaps
  • Auditability assessor – whether the code can be understood by someone who didn't write it

Then a sixth agent (the audit grader) reads all five reports, assigns scores across five pillars, produces a prioritized remediation TODO list with effort estimates, and issues a "Good Standing" verdict.

All of that happens from one command: /code-audit. The output lands in docs/03-code-audit/ as eight numbered markdown files ready to commit.

Consolidating these workflows into orchestrated commands solved two problems:

  1. Cognitive load – instead of remembering 40+ individual commands, you remember 6-8 orchestrators
  2. Report quality – cross-referencing between parallel agents produces deeper analysis than isolated single-pass reports

The Six Workstreams

The toolkit organizes into six domains, each with its own orchestrator:

| Workstream | Command | Output |
| --- | --- | --- |
| Project Documentation | /doc-suite-generator-v2 | 11 reports: project overview, technical spec, business benefits, innovation themes, features, extensibility, work zones, readiness assessment |
| Infrastructure & Cost | /infrastructure-cost-audit | 7 reports: service topology, cost projections (MVP → 10K users), LOC-based valuation, vendor-agnostic requirements, recommended stack |
| Code Quality | /code-audit | 8 reports: 5-pillar analysis, graded TODO list, activity log, delta tracking between audit cycles |
| Security | /security-audit | 7 reports: auth, API/data, infrastructure, protocol (MCP/GraphQL/WebSocket), abuse prevention, OWASP compliance, privacy |
| UX & Positioning | /ux-audit | 6 reports: content/copy quality, market positioning, UX patterns, conversion funnels, synthesis action plan, SEO audit |
| Competitive MOAT | /moat-audit | 8 reports: MOAT definition, differentiator defensibility, four-tier competitive landscape, pressure vectors, inflection points, value planes, control planes |

Plus two utility skills:

  • /landing-page – generates marketing pages from codebases (the toolkit landing page was built with it)
  • /deploy – handles Vercel and Netlify deployments with preview → production workflows

Total: 48 reports from 16 slash commands.

Real Output: What It Actually Generates

I ran the full pipeline against scsiwyg (29K LOC, 11 services) and got 229,000+ characters of documentation. Some highlights:

The infrastructure cost audit projected hosting costs at three scale points:

  • MVP (100 users): $47/month
  • Growth (1,000 users): $183/month
  • Scale (10,000 users): $847/month

It also valued the codebase using three methods (LOC-based, feature-based, complexity-weighted) and produced a vendor-agnostic requirements spec with zero product names, just capabilities.

The security audit flagged specific files and line numbers:

  • Hardcoded API keys in test fixtures (flagged for removal)
  • Missing rate limiting on two public endpoints (high priority)
  • JWT expiration handling gaps (medium priority)

The code audit scored five pillars and issued a composite quality score with a "Good Standing" threshold. It generated a graded TODO list with effort estimates, then tracked deltas when I re-ran it after fixes.

How to Use It

Installation (30 seconds)

```bash
unzip worksona-toolkit.zip
cp -r skills/commands/* ~/.claude/commands/
cp -r skills/skills/* ~/.claude/skills/
```

Requires Claude Code CLI with a Pro, Team, or Enterprise subscription.

Usage

Open Claude Code in any project and type a slash command:

> /doc-suite-generator-v2

Claude Code will generate 11 reports in docs/01-project-documentation/. Every report is plain markdown. Commit it, share it, or feed it to another tool.

Full Pipeline

Run all six workstreams for comprehensive analysis:

```
/doc-suite-generator-v2      # Project documentation
/infrastructure-cost-audit   # Infrastructure & cost analysis
/code-audit                  # Code quality with remediation plan
/security-audit              # Security posture assessment
/ux-audit                    # UX and positioning review
/moat-audit                  # Competitive defensibility analysis
```

Each orchestrator writes reports to its own subdirectory:

  • docs/01-project-documentation/
  • docs/02-infrastructure-cost/
  • docs/03-code-audit/
  • docs/04-security-audit/
  • docs/05-ux-positioning/
  • docs/06-moat-audit/

A master docs/README.md index is auto-generated with cross-references.

How to Modify It for Your Own Use

Every skill is just a structured prompt file. The power is in the orchestration pattern and report structure, not magic.

Customize Existing Commands

Each command lives in skills/commands/[command-name]/SKILL.md. Open any of them and you'll see:

  • Trigger conditions – what phrases activate the skill
  • Agent instructions – the actual prompt given to Claude
  • Output specifications – where files get written, what format to use

To modify a command:

  1. Edit the SKILL.md file
  2. Adjust the prompt, output paths, or report structure
  3. Copy it back to ~/.claude/commands/[command-name]/

For example, if you want /security-audit to check for your organization's specific compliance requirements, edit skills/commands/security-audit/SKILL.md and add those requirements to the prompt.
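Applied to that example, the loop is just edit-then-reinstall; a minimal sketch, assuming you're working from the unzipped toolkit directory (the appended "Additional checks" section and its wording are illustrative, not part of the shipped SKILL.md):

```shell
# Append an organization-specific check to the security-audit prompt.
# The section name and requirement text below are illustrative only.
cat >> skills/commands/security-audit/SKILL.md <<'EOF'

## Additional checks
- Flag any logging of personally identifiable information
EOF

# Reinstall the modified command so Claude Code picks it up
cp -r skills/commands/security-audit ~/.claude/commands/
```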

Add Your Own Commands

Create a new directory in ~/.claude/commands/your-command-name/ with a SKILL.md file following this structure:

```markdown
# Your Command Name

Brief description of what it does.

## Instructions

[Your prompt for Claude Code goes here]

## Output

Reports should be written to:
- `docs/your-workstream/report-name.md`
```
Claude Code will automatically discover it. Type /your-command-name and it will run.

Build Your Own Orchestrator

The orchestrator pattern is the most powerful customization point. Study /code-audit/SKILL.md to see how it:

  1. Launches five specialist agents in parallel using the Task tool
  2. Collects their outputs
  3. Runs a synthesis agent (the grader) that reads all reports and produces cross-referenced analysis
  4. Writes everything to a structured directory

You can apply this pattern to any domain where multiple perspectives improve output quality.
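As a sketch of that shape, here is what an orchestrator SKILL.md might look like for a hypothetical domain. The domain, agent names, and output paths are all invented for illustration; they are not copied from the shipped /code-audit prompt:

```markdown
# Dependency Audit (example orchestrator)

## Instructions

1. Using the Task tool, launch these specialist agents in parallel:
   - License auditor: inventory dependency licenses and flag copyleft risks
   - Freshness auditor: find outdated or unmaintained dependencies
   - Vulnerability auditor: check dependencies against known advisories
2. Wait for all three reports to complete.
3. Launch a synthesis agent that reads all three reports, scores each
   area, and writes a prioritized remediation list with effort estimates.

## Output

Reports should be written to:
- `docs/07-dependency-audit/01-licenses.md`
- `docs/07-dependency-audit/02-freshness.md`
- `docs/07-dependency-audit/03-vulnerabilities.md`
- `docs/07-dependency-audit/04-synthesis.md`
```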

Framework-Agnostic Detection

All skills auto-detect your stack from:

  • Package manifests (package.json, requirements.txt, Cargo.toml, etc.)
  • API route conventions (Next.js app/api/, Django urls.py, FastAPI decorators)
  • Database schemas (Prisma, SQLAlchemy, Diesel)
  • Infrastructure configs (Docker Compose, Kubernetes manifests, Terraform)

No stack-specific configuration required. Run the commands on Node, Python, Go, Rust, Ruby โ€” they adapt.
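The detection is done by Claude reading the repository rather than by a script, but the first-pass signal is conceptually as simple as checking for marker files. A hedged shell sketch of the idea (manifest list abbreviated):

```shell
# Illustrative only: the skills detect the stack by reading the repo,
# but the underlying signal is this kind of marker-file check.
for manifest in package.json requirements.txt Cargo.toml go.mod Gemfile; do
  if [ -f "$manifest" ]; then
    echo "detected manifest: $manifest"
  fi
done
```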

Download and Explore

The toolkit is free and portable. No SaaS, no subscription (beyond Claude Code itself), no vendor lock-in.

Download: https://worksona-claude-toolkit.netlify.app/
Documentation: Included in the zip as README.md, REPORT-INDEX.md, and SKILLS-AND-WORKFLOWS.md

Run it on your own projects to see what kind of reports it generates. If you use it, I'd love to know what you discover. And if you build your own commands on top of these patterns, even better.

What I Learned

  1. Good prompts take iteration – The first versions of these commands were verbose and unfocused. After running them on 10+ projects, patterns emerged about what Claude Code needs to generate useful output.

  2. Parallel orchestration beats sequential – Five security auditors running simultaneously produce richer findings than one auditor checking five things sequentially. The cross-referencing matters.

  3. Standardization enables workflows – Numbered reports in predictable directories mean you can build tooling on top. Delta tracking, cross-report search, automated PR comments: all possible because the structure is consistent.

  4. Markdown is underrated infrastructure – Every report is plain text. No database, no API, no rendering engine. Commit them to git, grep through them, diff them, feed them to another AI agent. Simplicity compounds.
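Because the reports are plain files in predictable paths, ordinary Unix tooling works on them. For example (directory names as listed earlier; the search term is illustrative, and the git diff assumes at least two audit commits):

```shell
# Find every high-priority finding across all generated reports
grep -rn "high priority" docs/

# See what changed between audit cycles
git diff HEAD~1 -- docs/03-code-audit/
```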

The Worksona Claude Code Toolkit is the documentation and audit workflow I wanted but couldn't find. Now it's available for anyone running Claude Code on any project.

