Worksona · 17 Apr 2026 · David Olsson

Agentic Survey: AI-Simulated Research Panels in the Browser

#worksona#market-research#ai#simulation#browser-native

David Olsson

Agentic Survey is a browser-native React 18 single-page application that simulates market research panels without a server, a build step, or a paid respondent panel. A researcher opens the app, defines survey questions, selects from seven built-in persona archetypes, sets a respondent pool of up to 100 simulated participants, and runs the simulation. Each simulated respondent is driven by a separate LLM call, with the persona definition injected into the system prompt so the model role-plays a coherent point of view throughout the survey. Multi-provider support covers Anthropic, OpenAI, and Google. Completed responses accumulate in IndexedDB and are visualized with Chart.js. Results export to JSON for downstream processing.

The architecture has no moving parts outside the browser. There is no database server, no authentication layer, and no deployment pipeline. The application runs from a static HTML file.
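The per-respondent flow described above can be sketched as a plain loop: one LLM call per respondent per question, with the persona's system prompt attached. This is an illustrative reconstruction, not the app's actual source; the `callLLM` parameter, field names, and the round-robin persona assignment are assumptions.

```javascript
// Sketch of the per-respondent simulation loop (assumed structure).
// `callLLM` abstracts the provider (Anthropic, OpenAI, or Google) and
// receives the persona's system prompt with each request.
async function runPanel(questions, personas, poolSize, callLLM) {
  const responses = [];
  for (let i = 0; i < poolSize; i++) {
    // Cycle through the archetypes so the pool covers every persona.
    const persona = personas[i % personas.length];
    for (const question of questions) {
      const answer = await callLLM({
        system: persona.llmConfig.systemPrompt,
        user: question.text,
      });
      responses.push({
        respondentId: i,
        persona: persona.persona,
        questionId: question.id,
        answer,
      });
    }
  }
  return responses;
}
```

Because each respondent is a separate call with its own system prompt, a 50-respondent, 20-question run is simply 1,000 independent requests the browser can fire without any server in the middle.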

Why is it useful?

Conventional survey panels take days to field and cost money per response. Both constraints discourage iteration. Researchers frequently send surveys with ambiguous questions, poorly ordered items, or untested scale anchors because the cost of discovering those problems in a real panel is too high to run multiple pilots.

Simulated panels shift the cost structure. A researcher can run a 20-question survey against 50 synthetic respondents in the time it takes to make coffee. When a question produces uniform answers across all persona types, that is a signal the question is leading or too vague. When a question produces high variance by archetype, that is a signal the construct is genuinely contested and worth keeping.

Because each respondent carries a distinct trait profile fed verbatim to the LLM, the synthetic response distribution reflects plausible population variance rather than a single model opinion. The skeptical procurement professional responds differently from the early-adopter enthusiast, and both differ from the cost-neutral end user. That structured diversity is the research value.
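The uniform-versus-variance heuristic can be checked mechanically. A minimal sketch, assuming numeric scale answers and the flat response-record shape described in this post (the helper and its field names are illustrative, not part of the app):

```javascript
// Group numeric answers for one question by persona archetype and report
// per-archetype means plus the variance between those means. Near-zero
// spread suggests a leading or vague question; high spread suggests a
// genuinely contested construct.
function answerSpread(responses, questionId) {
  const byPersona = new Map();
  for (const r of responses) {
    if (r.questionId !== questionId) continue;
    if (!byPersona.has(r.persona)) byPersona.set(r.persona, []);
    byPersona.get(r.persona).push(r.answer);
  }
  const means = [...byPersona.entries()].map(([persona, answers]) => ({
    persona,
    mean: answers.reduce((s, a) => s + a, 0) / answers.length,
  }));
  const grand = means.reduce((s, m) => s + m.mean, 0) / means.length;
  const betweenVariance =
    means.reduce((s, m) => s + (m.mean - grand) ** 2, 0) / means.length;
  return { means, betweenVariance };
}
```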

The zero-server constraint matters for a specific user: the researcher running a study under data-sensitivity constraints who cannot put responses through a cloud service. Agentic Survey runs fully offline once the static files are loaded.

How and where does it apply?

The primary use case is survey instrument refinement before committing to real fieldwork. A researcher drafts a survey, runs it against 50 synthetic respondents across multiple archetypes, reviews the distribution and open-text responses, and revises. After two or three simulation cycles the instrument is tighter, the scales are calibrated, and the ambiguous items are gone.

A secondary use case is synthetic training data generation. The structured JSON export, one record per respondent per question, is a labeled dataset a downstream classifier can train on. Because the persona definitions are explicit and versioned, the data provenance is traceable.
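A sketch of that export shape, assuming the flat response records used earlier; the post only specifies the granularity (one record per respondent per question) and that persona definitions are versioned, so the exact field names and the `personaLibraryVersion` tag here are assumptions:

```javascript
// Serialize a run to the flat JSON export: one record per respondent per
// question, with the persona id and an explicit library version carried
// along so the data provenance stays traceable.
function buildExport(responses, surveyId, personaLibraryVersion) {
  return JSON.stringify(
    {
      surveyId,
      personaLibraryVersion,
      records: responses.map((r) => ({
        respondentId: r.respondentId,
        persona: r.persona,
        questionId: r.questionId,
        answer: r.answer,
      })),
    },
    null,
    2
  );
}
```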

Agentic Survey connects directly to the Panterra survey toolchain. The JSON export format is compatible with Panterra Survey 10 for quota-managed administration and with the Panterra Analyzer for cross-tab exploration.

The persona configuration that drives each LLM call is a plain JSON object. The traits array is concatenated into the system prompt alongside the systemPrompt field, giving the model both structured labels and a narrative anchor.

{
  "persona": "skeptical_professional",
  "traits": ["analytical", "cost-conscious", "low-trust-in-vendors"],
  "llmConfig": {
    "provider": "anthropic",
    "model": "claude-opus-4-6",
    "systemPrompt": "You are a skeptical B2B buyer with 15 years in procurement..."
  }
}
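The concatenation step could look like the following sketch. The join format is an assumption; the post only says that both the traits array and the systemPrompt field reach the model.

```javascript
// Combine the structured trait labels with the narrative systemPrompt into
// a single system message: the narrative anchor first, then the labels.
// (Illustrative format, not the app's actual template.)
function buildSystemPrompt(personaConfig) {
  const traitLine = `Traits: ${personaConfig.traits.join(", ")}.`;
  return `${personaConfig.llmConfig.systemPrompt}\n${traitLine}`;
}
```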

The seven built-in archetypes cover a range from early adopter to late majority, and from high trust to low trust in institutional sources. Researchers can extend the archetype library by adding new persona JSON objects without modifying application code.
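Extending the library is then just appending another persona object of the same shape. A minimal sketch, with an illustrative validation guard (the app's actual schema check, if any, is not described in this post):

```javascript
// Append a custom persona to the archetype library without touching
// application code. Rejects objects missing the required fields or
// reusing an existing archetype id.
function addArchetype(library, persona) {
  if (!persona.persona || !Array.isArray(persona.traits) || !persona.llmConfig) {
    throw new Error("persona must have persona, traits[], and llmConfig");
  }
  if (library.some((p) => p.persona === persona.persona)) {
    throw new Error(`duplicate archetype: ${persona.persona}`);
  }
  return [...library, persona];
}
```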
