Prompt IDE: A Dedicated Workspace for LLM Interaction
#worksona #portfolio #developer-tools #browser-ide #llm #prompt-engineering
David Olsson

Prompt IDE is a browser-based workspace for structured LLM interaction. It provides a dual-panel editor layout — left panel for chat history and session navigation, right panel for AI responses — plus a console pane for application-wide event tracing and a status bar showing the model, token count, and connection state. All session data is stored in IndexedDB, LLM settings persist in localStorage, and there is no backend.
The editor surfaces are Quill rich-text instances, not plain textareas. Each panel supports inline HTML, autoscroll, and subtle background-color banding to visually separate turns in a conversation. Sessions are named, saved, and switchable without losing history.
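The turn banding can be sketched as a small pure helper. The function names and colors below are illustrative assumptions, not the app's actual implementation; in the real panels the result would be applied through Quill's formatting rather than raw HTML strings.

```javascript
// Sketch: alternate a subtle background tint per conversation turn so
// adjacent turns are visually distinct without heavy borders.
// Names and hex values are illustrative, not taken from the app.
function turnBandColor(turnIndex) {
  return turnIndex % 2 === 0 ? "#f7f9fc" : "#ffffff";
}

// Wrap one turn's HTML in a banded container for inline rendering.
function renderTurn(turnIndex, html) {
  return `<div style="background:${turnBandColor(turnIndex)}">${html}</div>`;
}
```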
Why a dedicated IDE for prompt work?
A general-purpose chat interface is designed around a single conversation thread. Prompt engineering is not: it involves iterating on a prompt across multiple sessions, comparing outputs, preserving intermediate results, and tracing what happened when something goes wrong.
Prompt IDE treats a prompt session the way a code editor treats a file: it is a persistent, named artifact that you return to, revise, and review. The left-panel sidebar lists sessions and history, so navigation does not require scrolling through a single long thread. The right panel keeps the AI response visually separated from the input history, which matters when comparing outputs across runs.
The console pane captures all application events — UI interactions, LLM requests, responses, and errors — in a timestamped log with copy and clear controls. This is tracing for prompt work: when a model produces unexpected output, the console shows the exact request that was sent.
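A tracer of that shape could look roughly like the sketch below. The class name, entry fields, and sink callback are assumptions for illustration, not the actual API of js/trace.js.

```javascript
// Sketch: an application-wide tracer. Every event gets a timestamp,
// a category, and an optional detail payload, and is pushed to a sink
// (in the app, a function that appends a line to the console pane).
class Tracer {
  constructor(sink) {
    this.sink = sink;
    this.entries = []; // kept in memory to back the "copy" control
  }

  log(category, message, detail = null) {
    const entry = { ts: new Date().toISOString(), category, message, detail };
    this.entries.push(entry);
    this.sink(`[${entry.ts}] ${category}: ${message}`);
    return entry;
  }

  clear() { // backs the console pane's "clear" control
    this.entries = [];
  }
}
```

Usage would be something like `const trace = new Tracer(appendToConsolePane); trace.log("llm", "request sent", requestBody);` — logging the full request body is what makes the "show the exact request that was sent" claim possible.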
How the dual-panel architecture serves prompt engineering workflows
The left and right panels serve different phases of a session. The left panel (chatEditor) accumulates the conversation — what was asked, in what order, with what context. The right panel (responseEditor) holds the current AI response in isolation, where it can be read, copied, and annotated without the question context in the way.
Both panels are Quill editors, which means responses rendered as rich HTML are editable in place. A user can annotate a response, add notes, or restructure content directly in the panel without switching to another tool.
// Application module structure
// js/db.js — IndexedDB schema and read/write operations
// js/trace.js — Application-wide event tracer, writes to console pane
// js/ui.js — DOM binding: panels, toolbar buttons, settings panel
// js/main.js — LLM request dispatch, session lifecycle, Quill init
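As a sketch of how js/db.js might shape a session before writing it to IndexedDB: the field names below ("id", "name", "turns", "updatedAt") are assumed for illustration, and the real module would wrap these in `put`/`get` calls against an object store.

```javascript
// Sketch: serialize a session into a plain record suitable for an
// IndexedDB object store, and deserialize it back. The schema here is
// an illustrative assumption, not the actual layout in js/db.js.
function sessionToRecord(session) {
  return {
    id: session.id,
    name: session.name,
    turns: session.turns.map(t => ({ role: t.role, html: t.html })),
    updatedAt: Date.now(), // lets the sidebar sort sessions by recency
  };
}

function recordToSession(record) {
  return { id: record.id, name: record.name, turns: record.turns };
}
```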
The settings panel — toggled from a gear icon in the top-right corner — accepts OpenAI and Anthropic API keys and a model selection covering both providers. Keys are written to localStorage on save and read from there on each LLM request.
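The localStorage round-trip can be sketched like this. The storage key and settings fields are assumptions, and the storage object is injected so the same functions work against `window.localStorage` in the browser or a mock elsewhere.

```javascript
// Sketch: persist LLM settings to any Storage-like object exposing
// getItem/setItem. In the app this would be window.localStorage; the
// key name and settings shape are illustrative assumptions.
const SETTINGS_KEY = "promptide.settings";

function saveSettings(storage, settings) {
  storage.setItem(SETTINGS_KEY, JSON.stringify(settings));
}

function loadSettings(storage) {
  const raw = storage.getItem(SETTINGS_KEY);
  // Fall back to empty settings when nothing has been saved yet.
  return raw ? JSON.parse(raw) : { provider: null, model: null, apiKey: null };
}
```

Keeping the key in localStorage (rather than in memory) is what lets settings survive a page reload, at the cost of the key being readable by any script on the page — an acceptable trade-off for a local, zero-backend tool.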
Where it applies
Prompt IDE fits anyone who iterates seriously on prompts: developers building LLM-integrated features, analysts constructing research prompts, or anyone who needs more structure than a chat window provides.
The browser-native, zero-backend design means it deploys to any static host and runs with nothing more than a browser and API keys. Session history and trace logs stay local — no data is written to any external service beyond the LLM API calls themselves.