What Person-Centric Research Actually Looks Like in Code
#engineering · #AI · #research design · #OAIRA · #architecture
David Olsson

In our previous post, we argued that traditional market research built its entire architecture around the wrong atom: the question, not the person.
That's a philosophical claim. But philosophy without implementation is just copy. So here's where that principle actually shows up in OAIRA's code, UX, and AI patterns.
1. The Trait Vector: A Person as a Data Structure
The most literal expression of person-centric research is PersonaTraitVector: the 8-dimensional representation of a person that sits at the core of OAIRA's simulation engine.
type PersonaTraitVector = {
  expertise: number;        // 0-1: domain knowledge depth
  satisfaction: number;     // 0-1: current product/situation sentiment
  engagement: number;       // 0-1: willingness to elaborate
  techSavviness: number;    // 0-1: comfort with technology
  priceSensitivity: number; // 0-1: cost-consciousness
  riskTolerance: number;    // 0-1: appetite for change/novelty
  loyalty: number;          // 0-1: attachment to current solution
  optimism: number;         // 0-1: general outlook
};
This vector is extracted deterministically from a persona profile: no LLM call, no guesswork. The extractTraitsFromProfile() function reads a person's demographics, psychographics, and response style, and maps them to these dimensions using rule tables and fuzzy lookups:
// A startup founder at a B2B SaaS company with "frustrated" in their pain points
// will resolve to something like:
{
  expertise: 0.85,        // senior-level lookup in EXPERIENCE_MAP
  satisfaction: 0.3,      // "frustrated" hits SATISFACTION_NEGATIVE_KEYWORDS
  priceSensitivity: 0.85, // "startup" maps high in COMPANY_SIZE_PRICE_SENSITIVITY
  riskTolerance: 0.65,    // "disruptive" in psychographics pushes this up
  ...
}
This is what it means to place the person in a researchable world. Before a single question is asked, the system has a coherent internal model of who this person is and how they're likely to behave. The questions come after.
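As a sketch of how that deterministic, rule-table-driven mapping might look (the table contents, the Profile shape, and the function name here are illustrative assumptions, not OAIRA's actual code):

```typescript
// Hypothetical rule tables: same input always yields the same trait vector.
const EXPERIENCE_MAP: Record<string, number> = {
  junior: 0.3,
  senior: 0.85,
  founder: 0.85,
};

const SATISFACTION_NEGATIVE_KEYWORDS = ['frustrated', 'annoyed', 'stuck'];

type Profile = { role: string; painPoints: string[] };

function extractTraitsSketch(profile: Profile) {
  // Rule-table lookup with a neutral fallback: no LLM call involved.
  const expertise = EXPERIENCE_MAP[profile.role] ?? 0.5;
  const hasNegative = profile.painPoints.some((p) =>
    SATISFACTION_NEGATIVE_KEYWORDS.some((kw) => p.toLowerCase().includes(kw))
  );
  const satisfaction = hasNegative ? 0.3 : 0.6;
  return { expertise, satisfaction };
}
```

Because the mapping is pure lookup, it is cheap, auditable, and reproducible: re-running extraction on the same profile can never drift.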
2. Response Generation: Traits Shape Answers, Not the Other Way Around
In traditional survey analysis, you collect answers and then infer what kind of person gave them. OAIRA inverts this.
When the statistical simulator generates a response for a synthetic persona, it doesn't pick from a random distribution. It samples from a distribution shaped by the person's trait vector:
function effectiveMean(session: AgentSession, correlation: TraitCorrelation): number {
  const traitValue = session.traits[correlation.trait];
  const predicted = traitValue * correlation.strength + (1 - correlation.strength) * 0.5;
  // Blend with cumulative session sentiment so answers stay internally consistent
  if (session.sentimentCount > 0) {
    return 0.8 * predicted + 0.2 * session.cumulativeSentiment;
  }
  return predicted;
}
The sessionBias and cumulativeSentiment fields mean that a persona who rated satisfaction low early in the survey will continue to express mild negativity through the rest of it, just like a real person would. Internal consistency isn't enforced by a rule; it emerges from a state machine that models the person's evolving position across the conversation.
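One way such session state could accumulate, as a hypothetical sketch (the running-mean update rule and the SessionState shape are assumptions; only the field names come from the code above):

```typescript
// Per-session sentiment state, updated after each sentiment-bearing answer.
type SessionState = { cumulativeSentiment: number; sentimentCount: number };

function recordSentiment(state: SessionState, answerSentiment: number): SessionState {
  // Running mean over all sentiment-bearing answers so far: early answers
  // keep pulling on later ones, which is what makes responses cohere.
  const count = state.sentimentCount + 1;
  const cumulative =
    (state.cumulativeSentiment * state.sentimentCount + answerSentiment) / count;
  return { cumulativeSentiment: cumulative, sentimentCount: count };
}
```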
The final rating is then sampled from a Beta distribution (bounded, skewable) centred on that predicted mean, producing realistic, non-uniform response patterns at scale. Up to 1,000 personas, sub-second, near-zero cost.
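A self-contained sketch of mean-parameterised Beta sampling, assuming a fixed concentration parameter (the Gamma sampler, the concentration value, and all names here are illustrative; OAIRA's actual sampler may differ):

```typescript
// Standard normal via Box-Muller.
function gaussian(): number {
  const u1 = Math.random() || 1e-12;
  const u2 = Math.random();
  return Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

// Gamma(shape, 1) via Marsaglia-Tsang; boost trick for shape < 1.
function sampleGamma(shape: number): number {
  if (shape < 1) {
    return sampleGamma(shape + 1) * Math.pow(Math.random(), 1 / shape);
  }
  const d = shape - 1 / 3;
  const c = 1 / Math.sqrt(9 * d);
  for (;;) {
    let x: number, v: number;
    do {
      x = gaussian();
      v = 1 + c * x;
    } while (v <= 0);
    v = v * v * v;
    const u = Math.random();
    if (u < 1 - 0.0331 * x ** 4) return d * v;
    if (Math.log(u) < 0.5 * x * x + d * (1 - v + Math.log(v))) return d * v;
  }
}

// Beta(alpha, beta) centred on `mean`: alpha + beta = concentration, so a
// higher concentration means tighter, more agreeable personas.
function sampleRating(mean: number, concentration = 10): number {
  const alpha = mean * concentration;
  const beta = (1 - mean) * concentration;
  const g1 = sampleGamma(alpha);
  const g2 = sampleGamma(beta);
  return g1 / (g1 + g2); // bounded in [0, 1]
}
```

Because each draw is pure arithmetic, simulating a thousand personas is a loop, not a thousand LLM calls, which is where the sub-second, near-zero-cost claim comes from.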
3. The Interview Coverage System: Understanding, Not Completion
Most survey platforms track completion: did the respondent answer every question? OAIRA's interview system tracks something different: depth of understanding.
Each question in an AI interview has a CoverageEntry:
type CoverageEntry = {
  covered: boolean;
  depth: 'none' | 'surface' | 'moderate' | 'deep';
  turns: number;       // how many conversation turns touched this topic
  confidence: number;  // 0-1: how confident the AI is it understood the answer
  evidence: string[];  // last 5 verbatim quotes from the respondent
};
Depth is a function of both turns and confidence:
function getDepthFromTurns(turns: number, confidence: number): CoverageDepth {
  if (confidence >= 0.85 && turns >= 3) return 'deep';
  if (confidence >= 0.6 && turns >= 2) return 'moderate';
  if (confidence > 0 || turns >= 1) return 'surface';
  return 'none';
}
The AI interviewer uses suggestNextQuestion() to decide what to probe next, prioritising uncovered required topics, then optional ones, then anything with low confidence. The interview ends not when all questions have been asked, but when the person has been sufficiently understood.
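That priority order can be sketched as follows (the Topic shape, the 0.6 confidence threshold, and the suggestNextTopic name are assumptions standing in for the real suggestNextQuestion()):

```typescript
type Topic = {
  id: string;
  required: boolean;
  coverage: { covered: boolean; confidence: number };
};

function suggestNextTopic(topics: Topic[]): Topic | null {
  // 1. Uncovered required topics first.
  const requiredGap = topics.find((t) => t.required && !t.coverage.covered);
  if (requiredGap) return requiredGap;
  // 2. Then uncovered optional topics.
  const optionalGap = topics.find((t) => !t.required && !t.coverage.covered);
  if (optionalGap) return optionalGap;
  // 3. Then anything understood only with low confidence, weakest first.
  const lowConfidence = topics
    .filter((t) => t.coverage.confidence < 0.6)
    .sort((a, b) => a.coverage.confidence - b.coverage.confidence)[0];
  // null means the person has been sufficiently understood: end the interview.
  return lowConfidence ?? null;
}
```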
This is a design decision that only makes sense if the person is the unit of research. Completion is a question-centric metric. Coverage is a person-centric one.
4. Seven Context-Native AI Agents
OAIRA doesn't have one AI assistant bolted onto the side. It has seven context-specific agents, each loaded lazily and dispatched through a single unified endpoint:
const lazyResolvers: Record<string, LazyResolver> = {
  'pool-manager': () => import('./pool-manager.server'),
  'report-builder': () => import('./report-builder.server'),
  'survey-analytics': () => import('./survey-analytics.server'),
  'org-analytics': () => import('./org-analytics.server'),
  'simulation': () => import('./simulation.server'),
  'research-designer': () => import('./research-designer.server'),
  'survey-designer': () => import('./survey-designer.server'),
};
Each resolver builds its own system prompt and tool set from the current page context. The survey-designer agent knows which survey you're building and what methodology you've selected. The simulation agent has access to your persona pool, your study configuration, and your prior results. The research-designer agent sees your study brief, attached assets, and the checklist of decisions still to be made.
This is what "AI-ready architecture" means in practice: not a chat widget that can answer general questions, but a layer of intelligence that is always contextually aware of the specific object you're working on. The agents are participants in the research loop, not observers of it.
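A minimal sketch of that unified dispatch, assuming each lazily imported module exports a handleRequest function (an illustrative contract, not OAIRA's actual module interface):

```typescript
// Assumed contract for each agent module.
type AgentModule = { handleRequest: (ctx: unknown) => Promise<string> };
type LazyResolver = () => Promise<AgentModule>;

async function dispatch(
  resolvers: Record<string, LazyResolver>,
  agent: string,
  ctx: unknown
): Promise<string> {
  const resolver = resolvers[agent];
  if (!resolver) throw new Error(`Unknown agent: ${agent}`);
  // The module (and its prompt/tool construction) is only loaded
  // when this agent is actually invoked.
  const mod = await resolver();
  return mod.handleRequest(ctx);
}
```

The payoff of a single endpoint plus lazy resolvers is that adding an eighth agent is one new entry in the record, with no new route and no cost to cold paths that never use it.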
5. Methodology as a State Machine
OAIRA ships eight research methodologies (JTBD, Gap Analysis, Journey Mapping, Hypothesis Testing, and more), each implemented as a step engine with typed state:
// From the JTBD methodology definition
{
  id: 'define_job',
  title: 'Define the Core Job',
  prompt: 'What job do customers hire your product to accomplish?',
  helpText: 'Example: "Help me grow revenue faster" not "Use our analytics dashboard"',
  expectedDataType: 'text',
}
The step engine is a state machine: getCurrentStep(), recordAnswer(), advance(). As a researcher fills it in, the system accumulates context about the research goal, then uses that context to generate a survey instrument (questions, branching logic, analysis configuration) that is specific to the methodology being run.
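A minimal engine along those lines might look like this (the class shape and the getContext helper are assumptions; only the three method names come from the text above):

```typescript
type Step = { id: string; prompt: string };

class StepEngine {
  private index = 0;
  private answers: Record<string, string> = {};

  constructor(private steps: Step[]) {}

  getCurrentStep(): Step | null {
    return this.steps[this.index] ?? null;
  }

  recordAnswer(answer: string): void {
    const step = this.getCurrentStep();
    if (step) this.answers[step.id] = answer;
  }

  advance(): boolean {
    // Returns false once the methodology has been walked through.
    if (this.index >= this.steps.length) return false;
    this.index += 1;
    return this.index < this.steps.length;
  }

  getContext(): Record<string, string> {
    // Accumulated answers become the context for instrument generation.
    return { ...this.answers };
  }
}
```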
This matters because it means the system always knows why a question exists, not just what it asks. That intent is preserved through to analysis: a Gap Analysis question tagged importance gets paired with its satisfaction counterpart and scored using Ulwick's opportunity formula, not aggregated into a generic bar chart.
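For reference, Ulwick's opportunity score as it is commonly stated: importance plus the importance-satisfaction gap, floored at zero so over-served outcomes aren't rewarded (scores conventionally on 1-10 scales; the function name is ours):

```typescript
// Opportunity = Importance + max(Importance - Satisfaction, 0)
function opportunityScore(importance: number, satisfaction: number): number {
  return importance + Math.max(importance - satisfaction, 0);
}
```

An important, poorly satisfied outcome (9, 3) scores 15 and jumps out of the matrix; an over-served one (5, 8) collapses back to its raw importance of 5.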
The UX surface for this is an AI chat interface. The infrastructure underneath it is typed, deterministic, and deeply methodology-aware. Both things can be true.
The Pattern Underneath All of This
Look across these five examples and the same pattern recurs:
- Model the person first: traits, psychographics, internal state
- Place them in a structured world: methodologies, coverage maps, session context
- Use traditional tools as instruments: surveys, interviews, ratings, rankings
- Produce intelligence, not answers: depth scores, opportunity matrices, cumulative sentiment
None of this required inventing new research methods. It required being precise about what the existing methods are actually for.
The infrastructure is AI-ready not because we added an LLM to a survey tool. It's AI-ready because the data model was designed around people, and people are exactly what language models are good at reasoning about.
OAIRA is open to teams running product, marketing, and strategy research. The codebase is TypeScript/Next.js on Supabase, with Anthropic Claude handling AI inference.