Worksona Studio: A Visual IDE for Multi-Agent Workflow Orchestration
#worksona#portfolio#visual-workflow#ai-orchestration#no-code#local-first
David Olsson

Most multi-agent frameworks ask you to write code. LangChain, AutoGen, and their peers are powerful, but they put the workflow definition in a Python file that only developers can author, modify, or reason about. Worksona Studio takes a different position: the canvas is the code.
What It Is
Worksona Studio is a visual workflow IDE built with React and ReactFlow. Users construct multi-agent pipelines by dragging nodes onto a canvas and drawing connections between them. Each node is an autonomous agent: it has a configured LLM provider, a system prompt, a temperature setting, and a model selection. Directed edges define the data flow: the output of one agent becomes the input to the next via a {{node_id.output}} template syntax.
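The {{node_id.output}} wiring can be pictured as a substitution pass over each node's prompt. A minimal sketch, assuming a results map keyed by node ID (names hypothetical, not Studio's actual implementation):

```typescript
// Resolve {{nodeId.output}} placeholders in a prompt against prior results.
// Hypothetical sketch of the template pass, not Studio's actual code.
type RunResults = Map<string, string>; // nodeId -> output text

function resolveTemplates(prompt: string, results: RunResults): string {
  return prompt.replace(/\{\{(\w+)\.output\}\}/g, (match, nodeId: string) => {
    const output = results.get(nodeId);
    // Leave unresolved placeholders intact so a broken edge stays visible.
    return output !== undefined ? output : match;
  });
}
```

For example, a downstream node whose prompt reads "Summarize: {{extract.output}}" would receive the extract node's output spliced in before its own LLM call.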
File input nodes accept PDFs, DOCX files, spreadsheets, images, and source code. Output nodes display final results. The full graph executes sequentially, with a real-time display showing each node's status and result as it completes. Execution history is saved so any previous run can be replayed and inspected.
All state persists locally in browser IndexedDB via Dexie.js. No account is required. API keys are stored in the browser, never on a server.
Why It Matters
The canvas-as-specification property is the central innovation. In most visual tools, there is a visual representation and a separate code representation that can diverge. In Studio, the canvas is authoritative: what you see on the canvas is exactly what executes. There is no hidden configuration layer.
Per-node provider selection compounds this. Rather than assigning one LLM to an entire workflow, Studio routes each node independently. A document extraction step can use a fast, inexpensive model. A synthesis or reasoning step can use a high-capability model. That cost-and-quality optimization happens at the node level, within a single execution session, without any code changes.
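Per-node routing amounts to each node carrying its own provider and model fields. A hedged sketch of what that shape might look like (type and model names are illustrative, not Studio's actual types):

```typescript
// Each node carries its own provider/model pair, so one workflow can mix
// a cheap extraction model with a high-capability synthesis model.
// Names are illustrative, not Studio's actual types.
type Provider = "anthropic" | "google" | "openai";

interface AgentNode {
  id: string;
  provider: Provider;
  model: string;         // chosen per node, per execution
  systemPrompt: string;
  temperature: number;
}

// A two-node pipeline mixing cost tiers within a single run:
const workflow: AgentNode[] = [
  { id: "extract", provider: "anthropic", model: "claude-haiku",
    systemPrompt: "Extract the key figures from the document.", temperature: 0 },
  { id: "synthesize", provider: "openai", model: "gpt-4o",
    systemPrompt: "Write an analysis using {{extract.output}}.", temperature: 0.7 },
];
```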
The AI assistant panel adds a layer that is genuinely unusual: natural language instructions that modify the workflow graph itself. Asking the assistant to "add a summarization step after the analysis node" results in a new node appearing on the canvas with an appropriate prompt and connection. The canvas becomes an AI-addressable data structure: AI modifying the configuration of other AI agents.
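One way to make a canvas AI-addressable is to have the assistant emit structured graph edits that the app then applies. A minimal sketch under that assumption (command and field names are hypothetical):

```typescript
// The assistant returns a structured command rather than free text; the app
// applies it to the graph. Command shape is hypothetical, not Studio's API.
interface GraphNode { id: string; prompt: string; }
interface GraphEdge { source: string; target: string; }
interface Graph { nodes: GraphNode[]; edges: GraphEdge[]; }

interface AddNodeAfter {
  kind: "addNodeAfter";
  afterId: string;   // existing node to connect from
  node: GraphNode;   // new node, e.g. a summarization step
}

function applyCommand(graph: Graph, cmd: AddNodeAfter): Graph {
  return {
    nodes: [...graph.nodes, cmd.node],
    edges: [...graph.edges, { source: cmd.afterId, target: cmd.node.id }],
  };
}
```

Under this sketch, "add a summarization step after the analysis node" would be translated by the assistant into `{ kind: "addNodeAfter", afterId: "analysis", node: { id: "summarize", prompt: "Summarize {{analysis.output}}." } }` before the canvas updates.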
Workflow sharing is designed with security as a constraint, not an afterthought. Before a workflow URL is generated, the system scans for embedded API keys. LZ-compression keeps URLs shareable. Netlify Blobs backs persistent shared workflow storage. Most visual sharing implementations skip the credential scan; Studio makes it mandatory.
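A pre-share credential scan can be as simple as pattern-matching the serialized workflow before it is compressed into a URL. A minimal sketch with illustrative key patterns (the real scanner's rules are not shown here):

```typescript
// Scan a serialized workflow for strings that look like API keys before
// generating a share URL. Patterns are illustrative, not Studio's actual list.
const KEY_PATTERNS: RegExp[] = [
  /sk-[A-Za-z0-9_-]{20,}/,      // OpenAI-style secret keys
  /sk-ant-[A-Za-z0-9_-]{20,}/,  // Anthropic-style keys
  /AIza[A-Za-z0-9_-]{35}/,      // Google API keys
];

function findEmbeddedKeys(serializedWorkflow: string): string[] {
  return KEY_PATTERNS
    .map((p) => serializedWorkflow.match(p)?.[0])
    .filter((m): m is string => m !== undefined);
}

// Only a clean workflow proceeds to LZ-compression and URL generation.
function canShare(serializedWorkflow: string): boolean {
  return findEmbeddedKeys(serializedWorkflow).length === 0;
}
```

In this sketch the scan is a hard gate: a workflow containing anything key-shaped never reaches the compression step, which matches the "mandatory, not optional" stance described above.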
How It Works
```mermaid
flowchart TD
    A[User opens Studio in browser] --> B[Drag nodes onto ReactFlow canvas]
    B --> C["Configure each node<br/>provider · model · system prompt"]
    C --> D["Draw edges between nodes<br/>define data flow"]
    D --> E["Optional: use AI assistant<br/>to modify graph via natural language"]
    E --> F[Execute workflow]
    F --> G["Sequential execution<br/>following edge order"]
    G --> H["Node 1 runs → result displayed"]
    H --> I[Node 2 runs with Node 1 output]
    I --> J["Node N → final output"]
    J --> K[Execution saved to IndexedDB]
    K --> L["Share via LZ-compressed URL<br/>with credential scan"]
```
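The sequential run can be sketched as a loop that executes nodes in edge order, substituting upstream outputs into each node's prompt and recording the result. The LLM call is stubbed here, and all names are hypothetical:

```typescript
// Execute nodes sequentially in edge order, feeding each node's output into
// downstream {{id.output}} placeholders. The LLM call is stubbed; in Studio
// it would hit the node's configured provider. Names are illustrative.
interface RunNode { id: string; prompt: string; }

type LlmCall = (prompt: string) => Promise<string>;

async function runWorkflow(
  nodesInEdgeOrder: RunNode[],
  callLlm: LlmCall,
): Promise<Map<string, string>> {
  const results = new Map<string, string>();
  for (const node of nodesInEdgeOrder) {
    // Substitute upstream outputs into this node's prompt.
    const prompt = node.prompt.replace(
      /\{\{(\w+)\.output\}\}/g,
      (m, id: string) => results.get(id) ?? m,
    );
    results.set(node.id, await callLlm(prompt)); // per-node status/result shown here
  }
  return results; // the completed run is what gets saved for replay
}
```

Because the results map captures every node's output, persisting it is enough to replay and inspect any previous run.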
The underlying tech stack is React 18 with TypeScript, Vite for development, and Tailwind CSS for styling. Supported LLM providers include Anthropic (Claude Sonnet, Opus, Haiku), Google (Gemini Pro, Gemini Flash), and OpenAI (GPT-4o, GPT-4, GPT-3.5). Provider selection is per-node, per-execution.
Security measures beyond credential scanning include DOMPurify sanitization on all LLM output before it is rendered. A guided tour via driver.js handles onboarding for new users. The template library in the sidebar provides pre-built workflow starting points, reducing the setup time for common patterns to a single click.
Where It Fits in Worksona
Studio occupies the visual authoring position in the portfolio. Worksona API defines the agent programming model in code. Studio makes that same model accessible without writing code: the same multi-agent orchestration concepts expressed as a canvas rather than a JavaScript module.
The evolutionary path is direct: an early node editor became a second iteration, then a workflow editor variant, and then Studio, the most feature-complete visual workflow IDE in the portfolio. That progression represents accumulated learning about what visual agent authoring actually requires in practice.
For teams that want to prototype multi-agent pipelines quickly, evaluate different LLM providers on the same task, or share a workflow with a colleague who does not write code, Studio is the practical surface. For teams that need programmatic control, runtime integration, or a REST API, worksona-api is the appropriate layer.
The local-first constraint (no account, no cloud sync, no data leaving the browser without explicit action) is a deliberate design choice that applies across both projects. Studio extends it with URL-based sharing that does not require a user account or a managed backend.
Live: studio.worksona.io