# Architecture: How Stone Maps Is Put Together
Stone Maps is a monorepo with three workspaces. The stack is a Next.js App Router application, Supabase Postgres + PostGIS as the data layer, and a handful of external services for AI, storage, caching, and payments. This post is the map of how those pieces connect.
## Monorepo Structure
```
stonemaps/
├── apps/
│   └── web/                        # Next.js 15 application
│       ├── app/                    # App Router — pages, API routes, layouts
│       │   ├── api/                # REST API endpoints
│       │   ├── (auth)/             # Authentication pages
│       │   ├── journal/            # Journal views
│       │   ├── map/                # Map discovery
│       │   ├── emissary/           # AI companion interface
│       │   ├── admin/              # Admin dashboard
│       │   └── sw.ts               # Service worker (Serwist)
│       ├── components/             # React components
│       ├── hooks/                  # Custom React hooks
│       └── lib/                    # Server-side utilities
│           ├── auth.ts             # Auth.js configuration
│           ├── db.ts               # Database re-export
│           ├── r2.ts               # Cloudflare R2 client
│           ├── rate-limit.ts       # withRateLimit wrapper
│           ├── validate.ts         # validateBody / validateParams
│           ├── rbac.ts             # Team permission model
│           ├── planetary-emissary.ts  # AI agent logic
│           ├── emissary-prompts.ts    # Sparse prompt system
│           └── monitoring.ts       # Sentry wrapper
├── packages/
│   ├── db/                         # Drizzle ORM schema + migrations
│   │   └── src/
│   │       ├── schema/             # 22 schema files (one per domain)
│   │       ├── index.ts            # Re-exports all schema + db client
│   │       └── migrate.ts          # Migration runner
│   └── api-types/                  # Shared TypeScript types (request/response shapes)
└── package.json                    # npm workspaces root
```
The three workspaces have a clear dependency direction: apps/web depends on packages/db and packages/api-types. The packages don't depend on each other or on the app. This means the database schema and API types can be imported anywhere without circular dependencies, and the packages can theoretically be used in a future mobile app or server without modification.
## The Database Package
packages/db is the source of truth for the data model. It contains 22 schema files, each representing a domain:
```
users, stones, self-pairs, channels, posts, post-likes, media, tags,
conversations, teams, campaigns, admin, pairs, events, agent-prompts,
api-tokens, pending-emissary-tools, products, orders, payments, locations, system-settings
```
The Drizzle client is initialized once in the package and re-exported through apps/web/lib/db.ts, which is just a thin re-export:
```typescript
import { db } from '@stonemaps/db';

export { db };
export * from '@stonemaps/db';
```
This means every API route imports from @/lib/db without knowing or caring about the database initialization details. The packages/db package handles the dual-driver strategy: postgres.js for local development (persistent TCP connection), Neon HTTP driver for serverless deployment (stateless HTTP).
The lazy initialization via Proxy avoids loading the database driver in Next.js middleware or edge runtime contexts, where Node.js database drivers can't run.
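The pattern can be sketched in a few lines. This is a hypothetical reduction (the real client in packages/db wraps Drizzle and selects a driver); only the Proxy mechanics carry over:

```typescript
// Sketch of lazy initialization via Proxy. createDb() stands in for the
// real driver setup; here we only record that initialization happened.
type Db = { driver: string };

let instance: Db | null = null;
let initCount = 0;

function createDb(): Db {
  // In the real package this is where postgres.js or the Neon HTTP
  // driver would be loaded, depending on the environment.
  initCount++;
  return { driver: 'postgres-js' };
}

// The Proxy defers createDb() until a property is actually read, so
// merely importing `db` never pulls in a Node-only database driver.
export const db = new Proxy({} as Db, {
  get(_target, prop) {
    if (!instance) instance = createDb();
    return instance[prop as keyof Db];
  },
});

export const getInitCount = () => initCount;
```

Middleware and edge code can import the module freely; the driver loads only when a query actually touches `db`.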
## API Route Pattern
Every API route follows the same four-step pattern:
```typescript
// 1. Auth — who is calling?
const session = await auth();
const userId = await getUserIdFromRequest(req, session?.user?.id);
if (!userId) return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });

// 2. Rate limit — wrap business logic
return withRateLimit({ identifier: `user:${userId}`, max: 60, window: 60 }, async () => {
  // 3. Validate — parse and check input
  const parsed = await validateBody(req, someZodSchema);
  if ('error' in parsed) return parsed.error;
  const { data } = parsed;

  // 4. Do the thing
  const result = await db.insert(...).values(data).returning();
  return NextResponse.json(result);
});
```
This pattern is not enforced by a framework — it's a convention. But it's consistent enough across 40+ API routes that a new route is easy to write correctly. getUserIdFromRequest supports both session cookies (browser) and Bearer tokens (API clients and MCP), so the same route handles both callers.
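The dual cookie/Bearer resolution can be sketched as follows. The argument shapes and the in-memory token map are assumptions for illustration; the real helper reads the request object and checks tokens against the api-tokens table:

```typescript
// Hypothetical sketch of session-or-Bearer user resolution.
// The token store is stubbed with a Map for demonstration.
const apiTokens = new Map<string, string>([['tok_123', 'user_42']]);

export function getUserIdFromRequest(
  authorization: string | null,
  sessionUserId?: string,
): string | null {
  // Browser callers: the Auth.js session cookie was already resolved.
  if (sessionUserId) return sessionUserId;

  // API clients and MCP: Bearer token in the Authorization header.
  if (authorization?.startsWith('Bearer ')) {
    const token = authorization.slice('Bearer '.length);
    return apiTokens.get(token) ?? null;
  }
  return null;
}
```

Either path yields a plain user ID, so everything downstream of step 1 is identical for both kinds of caller.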
## Authentication
Auth.js v5 handles session management with a credentials provider — email + password, bcrypt-hashed. There is no OAuth in early access; we don't want the sign-up friction of "which provider do you prefer" and we don't want to depend on Google or GitHub staying stable during a period when we're iterating quickly.
The JWT is stored in a cookie. The authorize() callback is where account lifecycle is enforced — suspended and deleted accounts are rejected before a session is issued.
NEXTAUTH_SECRET signs the JWT. NEXTAUTH_URL tells Auth.js the canonical app URL, which matters for callback URL construction during production deployment.
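A minimal sketch of that lifecycle check, with assumed field names (the `status` values here are placeholders, not the real schema):

```typescript
// Hypothetical lifecycle gate inside Auth.js's authorize() callback.
type AccountStatus = 'active' | 'suspended' | 'deleted';

interface UserRecord {
  id: string;
  status: AccountStatus;
}

// Returning null from authorize() makes Auth.js reject the sign-in,
// so suspended and deleted accounts never receive a session at all.
export function authorizeLifecycle(user: UserRecord | undefined): UserRecord | null {
  if (!user) return null;
  if (user.status === 'suspended' || user.status === 'deleted') return null;
  return user;
}
```

Enforcing this at session issuance (rather than per-route) means a suspended account is locked out everywhere at once.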
## The AI Layer
The Emissary is built on the Vercel AI SDK with three provider adapters: OpenAI (default), Anthropic, Google. The active provider is selected by AI_PROVIDER and AI_MODEL environment variables, defaulting to OpenAI.
The AI SDK's streamText and generateText functions are provider-agnostic — the same call works regardless of which provider is configured. Switching providers is an environment variable change, not a code change.
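The env-driven selection might look roughly like this. The resolved pair would be handed to the AI SDK's streamText/generateText; the default model ids below are placeholders, not the app's actual choices:

```typescript
// Hypothetical provider/model resolution from environment variables.
// Default model ids are illustrative placeholders.
const DEFAULT_MODELS: Record<string, string> = {
  openai: 'gpt-4o',
  anthropic: 'claude-3-5-sonnet-latest',
  google: 'gemini-1.5-pro',
};

export function resolveModel(env: Record<string, string | undefined>) {
  const provider = env.AI_PROVIDER ?? 'openai'; // OpenAI is the default
  const model = env.AI_MODEL ?? DEFAULT_MODELS[provider];
  if (!model) throw new Error(`Unknown AI provider: ${provider}`);
  return { provider, model };
}
```

Because the AI SDK call site never names a concrete provider, this function is the only place the environment variables are interpreted.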
The Emissary has tools (function calling) via buildEmissaryTools(). These allow it to look up location data, retrieve recent journal entries, and surface patterns — real-time data access grounded in the user's actual record.
Voice mode uses OpenAI's Realtime API directly via WebRTC, bypassing the Vercel AI SDK entirely. The server mints an ephemeral key with the system prompt embedded; the client connects directly to OpenAI with no audio proxying through our servers.
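The minting step might look roughly like this. The field names and model id are assumptions about the shape of OpenAI's Realtime session API, not verbatim from the codebase:

```typescript
// Hypothetical builder for the server-side session request. The server
// POSTs this to OpenAI and returns only the short-lived client secret
// to the browser, which then opens WebRTC directly to OpenAI.
interface RealtimeSessionRequest {
  model: string;
  voice: string;
  instructions: string; // system prompt, embedded server-side only
}

export function buildRealtimeSessionRequest(systemPrompt: string): RealtimeSessionRequest {
  return {
    model: 'gpt-4o-realtime-preview', // placeholder model id
    voice: 'alloy',
    instructions: systemPrompt,
  };
}
```

Keeping the system prompt in the server-minted session means the client never sees it and no audio ever transits our infrastructure.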
## External Services and Their Roles
| Service | Role | Why |
|---|---|---|
| Supabase | Postgres + PostGIS hosting | PostGIS for spatial queries; Supabase for easy managed Postgres |
| Vercel | Deployment + CDN | Next.js-native; serverless functions scale automatically |
| Cloudflare R2 | Object storage | S3-compatible; no egress fees |
| Upstash Redis | Rate limiting | Serverless-native HTTP API; state shared across Vercel function instances |
| Mapbox | Map tiles + GL rendering | Best-in-class WebGL maps |
| Stripe | Payments + checkout | Standard; webhook-based; PCI handled by Stripe |
| Sentry | Error monitoring | Stack traces and context in production |
## Request Lifecycle
A typical API request flows like this:
1. Client sends request with session cookie or Bearer token
2. Next.js routes to the appropriate `route.ts` handler
3. Handler calls `auth()` to verify the session
4. `getUserIdFromRequest()` extracts the user ID from the session or Bearer token
5. `withRateLimit()` checks Upstash Redis; rejects if over limit
6. `validateBody()` parses and Zod-validates the request body
7. Handler executes business logic — usually one or more Drizzle queries
8. `captureException()` in the catch block sends errors to Sentry
9. `NextResponse.json()` returns the result
For AI routes, step 7 is replaced by a call to the Vercel AI SDK, which proxies to OpenAI/Anthropic/Google and streams the response back.
## What's Not in the Architecture
There's no message queue. Background jobs run as Vercel Cron (configured in vercel.json) — simple HTTP endpoints called on a schedule. The sparse prompt system, for instance, is triggered by a cron job that calls checkAndSendSparsePrompt() for each active user.
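A `vercel.json` cron entry along these lines would wire that up; the endpoint path and schedule here are hypothetical:

```json
{
  "crons": [
    {
      "path": "/api/cron/sparse-prompts",
      "schedule": "0 9 * * *"
    }
  ]
}
```

Vercel calls the listed path on the cron schedule, so a "background job" is just another authenticated HTTP route.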
There's no GraphQL. The API is REST with JSON bodies and standard HTTP verbs. This keeps things simple and makes the MCP endpoint easy to build on top of.
There's no microservices boundary. Everything is in the Next.js monolith. At 50 users, this is the correct choice. If AI serving latency or database query volume ever became a scaling concern, extracting those into dedicated services would be straightforward — the module boundaries are already clean.