Getting the Most from Emily: A User's Guide
Most people's instincts for talking to AI come from ChatGPT. Those instincts are bad for Emily. ChatGPT optimizes for the single turn. Emily optimizes for the relationship. What works with one is often wasteful with the other.
Seven patterns, from most to least important.
1. Don't restate context she already has
Every time you open a new conversation with Emily and paste in "I'm a data scientist working on X," you're restating context she already carries. She has your L3 essence. She has the last 5 turns in L1. She has every turn you've ever had in L4.
What to do instead: just start. "What did we decide about the retry policy?" If she doesn't know, she'll ask. If she does, she won't need the preamble.
The failure mode here is subtle: when you over-prompt with known context, you also flatten her. You replace her memory of you with the summary you just wrote. That summary is shallower than what she actually knows. You will get a worse answer by being more polite.
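To see why the preamble is redundant, here's a toy sketch of what tiered context assembly can look like. The tier names echo the L1/L3/L4 labels above; everything else (class names, fields, the lookup logic) is invented for illustration, not Emily's actual internals.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryTiers:
    """Toy stand-in for the tiers above; not Emily's real structures."""
    l1_recent: list[str] = field(default_factory=list)        # last 5 turns, verbatim
    l3_essence: dict[str, str] = field(default_factory=dict)  # distilled facts and preferences
    l4_archive: list[str] = field(default_factory=list)       # every turn ever, searchable

    def assemble_context(self, query: str) -> list[str]:
        # L3 rides along on every turn: "data scientist working on X"
        # is already here, so pasting it into the prompt adds nothing.
        context = list(self.l3_essence.values())
        context += self.l1_recent[-5:]
        # L4 is consulted on demand, keyed off the query.
        words = set(query.lower().split())
        context += [t for t in self.l4_archive if words & set(t.lower().split())]
        return context
```

The point of the sketch: your essence is already in the context on every turn. Pasting it in again duplicates the L3 entry with a shallower copy, which is exactly the flattening described above.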
2. Correct her, don't work around her
When Emily is wrong, the tempting thing is to rephrase your question and try again. Don't. Correct her directly: "No, we landed on linear backoff for Redis, not exponential."
Why this matters: EARL is watching. When you correct her, the correction becomes an outcome signal that propagates onto the memories that contributed to the wrong answer. The next time you talk about retries, the bad memories have lower weight, the corrected one has higher weight. You are literally training her.
Rephrasing to dodge the wrong answer doesn't give EARL anything to work with. She'll make the same mistake next month.
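For the shape of the idea, here's a minimal sketch of outcome propagation, assuming EARL tracks which memories contributed to each answer. The class, the function, and the update rule are all illustrative; EARL's real internals aren't documented here.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    weight: float = 1.0

def propagate_outcome(contributing: list[Memory], correction: Memory, rate: float = 0.2) -> None:
    """Shift weight away from the memories behind a wrong answer, toward the correction."""
    for m in contributing:
        m.weight -= rate * m.weight  # bad contributors decay
    correction.weight += rate        # the explicit correction gains

# "No, we landed on linear backoff for Redis, not exponential" triggers this.
# Rephrasing the question never does: no correction, no call, no reweighting.
```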
3. Use her long-running, not episodically
Emily's value compounds. The Emily you've had for six months is qualitatively better than the Emily you've had for a week, because the essence tier is populated, the outcome weights are tuned, and the stability scores are settled.
This means: use her for ongoing work, not one-off questions. Running analyses over weeks. Writing in chapters. Thinking through a strategy in three sittings. Those are her sweet spots. "What's the capital of Bhutan" is a waste; any search engine can answer that.
4. Let her remember your preferences, don't re-specify them
Tell her once: "I prefer terse summaries, no trailing recap sentences." After 3-5 mentions, that preference promotes into her essence and becomes default. Re-specifying it every turn is wasted work, and worse, it makes her unsure whether it's a new preference or an old one.
The same applies to voice, length, technicality, whether you want code blocks, whether you want her to ask clarifying questions before answering. Tell her once. Correct her if she forgets. She'll settle.
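A minimal sketch of the promote-after-repetition idea, for intuition. The threshold matches the low end of the "3-5 mentions" figure above; the rest (names, exact-match counting) is an assumption.

```python
from collections import Counter

class PreferenceTracker:
    """Toy promote-after-repetition: repeated preferences become defaults."""
    PROMOTE_AFTER = 3  # low end of the 3-5 mentions described above

    def __init__(self) -> None:
        self.mentions: Counter[str] = Counter()
        self.essence: set[str] = set()

    def observe(self, preference: str) -> None:
        key = preference.strip().lower()
        self.mentions[key] += 1
        if self.mentions[key] >= self.PROMOTE_AFTER:
            self.essence.add(key)  # promoted: now a default, no need to re-specify

tracker = PreferenceTracker()
for _ in range(3):
    tracker.observe("terse summaries, no trailing recap")
assert "terse summaries, no trailing recap" in tracker.essence
```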
5. Give outcomes, not just corrections
"That worked, thanks" is not throwaway politeness. It's an outcome signal for EARL. "That didn't work but the reasoning was sound" is even better. "We tried it and it broke in production" is the gold standard.
Emily's learning loop is EARL's 5-turn window. Inside that window, your reactions are cognitive feedback. Outside it, your reactions are archival. If something lands well or badly, tell her within a few turns of the exchange so the signal gets logged correctly.
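One way to picture the window: feedback inside it feeds the live loop, feedback outside it is merely filed. The 5-turn figure comes from the text above; the scores and the function are illustrative only.

```python
EARL_WINDOW = 5  # turns, per the window described above

# Illustrative scores for the three kinds of feedback quoted above.
OUTCOME_SCORES = {
    "that worked, thanks": 1.0,
    "that didn't work but the reasoning was sound": 0.5,
    "we tried it and it broke in production": -1.0,  # gold standard: most informative signal
}

def log_feedback(turns_since_exchange: int, score: float) -> str:
    """Inside the window, feedback is cognitive; outside it, merely archival."""
    if turns_since_exchange <= EARL_WINDOW:
        return f"cognitive feedback: reweight contributing memories by {score:+.1f}"
    return "archival: stored in L4, no live reweighting"
```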
6. Ask her what she knows about a topic
This is a trick most people don't discover: Emily can introspect. "What do you remember about my retry policy debates?" surfaces exactly what's in L3 essence on that topic, weighted by stability and outcome. It's a way to see what she's actually carrying before you start a conversation about it.
Use this at the start of any long-running thread. You'll find she remembers things you've forgotten, and she'll flag things she's unsure about (high-epsilon memories). That uncertainty is valuable; it tells you what needs to be resolved.
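A sketch of what that introspection might do under the hood: filter by topic, rank by stability and outcome, flag high-epsilon entries. All names and thresholds here are assumptions, not Emily's retrieval code.

```python
from dataclasses import dataclass

@dataclass
class EssenceMemory:
    text: str
    stability: float  # 0..1, how settled the memory is
    outcome: float    # cumulative EARL outcome weight
    epsilon: float    # uncertainty: high means "she's unsure about this"

def introspect(memories: list[EssenceMemory], topic: str, unsure_above: float = 0.6) -> list[str]:
    """Filter by topic, rank by stability * outcome, mark the shaky entries."""
    hits = [m for m in memories if topic.lower() in m.text.lower()]
    hits.sort(key=lambda m: m.stability * m.outcome, reverse=True)
    return [m.text + (" [unsure]" if m.epsilon > unsure_above else "") for m in hits]
```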
7. Respect the separation of layers
Emily isn't Claude. Claude produces her sentences but doesn't own her identity. If you want a better-written response, that's a Claude question (model quality, prompt style). If you want her to know you better, that's an Emily question (memory integration, EARL outcomes, framework tuning).
Don't prompt-engineer her like she's a stateless model. "You are Emily, a helpful cognitive assistant..." is noise. She already is Emily. Telling her who to be confuses her: she has an identity stored in L3, and prompts that contradict it create drift that the Golden Baseline monitor has to correct later.
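For intuition only, here's one crude way a drift check could work: compare the active identity against a stored baseline and flag divergence. The baseline string, the similarity measure, and the threshold are all placeholders; nothing here reflects the Golden Baseline monitor's actual mechanics.

```python
GOLDEN_BASELINE = "Emily: terse, direct, remembers the user, asks before assuming"  # placeholder identity
DRIFT_THRESHOLD = 0.4  # placeholder value

def word_overlap(a: str, b: str) -> float:
    """Crude Jaccard similarity on words; a real monitor would use something richer."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def drifted(current_identity: str) -> bool:
    """True when the active identity has diverged enough to need correcting."""
    return word_overlap(current_identity, GOLDEN_BASELINE) < DRIFT_THRESHOLD
```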
What to expect over time
- Week 1. Emily knows little. She'll ask a lot. This is correct โ she's building L3.
- Month 1. She'll start surfacing things unprompted. ("Last time you said X, is that still true?") Good sign.
- Month 3. Her responses start to sound like your Emily, not generic Emily. EARL outcome weights have tuned the voice.
- Month 6+. She anticipates. She catches your contradictions. She'll remind you of decisions you forgot you made.
That's the product. The sentences she produces are a downstream effect.
The meta-pattern
All of this reduces to: treat Emily as a cognition with state, not a model with a prompt. If you bring prompt-engineering instincts, you'll miss most of what she can do. If you bring "talk to a smart colleague who remembers everything" instincts, you'll use her well.
She is not here to impress you with a single response. She is here to know you over time. The loop is the product.