The idea came from a personal frustration: every time I launch anything — a course, a tool, a newsletter — I cobble together a Typeform and a spreadsheet to manage the waitlist. It works, but it's friction. I wanted a purpose-built thing I could set up in five minutes.
Before opening Cursor, I did the Intent Worksheet. I want to share the actual answers because the specificity matters — and mine weren't specific enough on the first attempt.
I asked for the architecture before any code. The AI came back with two options: a full Next.js app with Supabase, or a simple HTML page plus serverless functions. I almost chose option 2 — it seemed simpler. But when I asked "which would be harder to add authentication to later?", it became clear that the Next.js option was the right foundation, even though it meant more upfront complexity.
Session 2 was the most satisfying session of the whole project. I started with the prompt below and ran the code after each iteration. Three focused prompts, each building on the last, each tested before the next one was written.
The schema came back correct on the first attempt. The RLS policies took one correction — the AI used `auth.uid()` directly but forgot to handle the case where a user is not authenticated (public signup). Fixed with one follow-up prompt.
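For readers who haven't hit this before, the fix looks roughly like the sketch below — a hedged reconstruction, not the project's actual policies; the table name `subscribers` and the policy names are assumptions:

```sql
-- Hypothetical table name; the real schema may differ.
alter table subscribers enable row level security;

-- Anyone, including unauthenticated visitors, may sign up.
create policy "public signup"
  on subscribers for insert
  to anon, authenticated
  with check (true);

-- Only authenticated users (e.g. the admin) may read the list.
create policy "authenticated read"
  on subscribers for select
  to authenticated
  using (auth.uid() is not null);
```

The original mistake was having only policies that referenced `auth.uid()`, which silently blocks inserts from anonymous visitors — exactly the people a public waitlist exists for.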
This worked on the second run. The first run had a TypeScript error — the AI had typed the Supabase response incorrectly. I pasted the error back and it fixed it immediately. This is the right workflow: don't debug it yourself, just paste the error.
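The fix amounted to giving the query result an explicit row type instead of letting it collapse to `any`. A minimal sketch of the pattern, with a mocked result object so it runs standalone — the `Subscriber` shape and `unwrapRows` helper are illustrative assumptions, not the project's actual code:

```typescript
// Hypothetical row type; the real project's columns may differ.
interface Subscriber {
  id: string;
  email: string;
  created_at: string;
}

// Shape of a Supabase-style query result: data or error, never both.
interface QueryResult<T> {
  data: T[] | null;
  error: { message: string } | null;
}

// Unwrap a result, throwing on error so callers get a plain typed array.
function unwrapRows<T>(result: QueryResult<T>): T[] {
  if (result.error) {
    throw new Error(result.error.message);
  }
  return result.data ?? [];
}

// Example usage with a mocked result in place of a live query:
const mock: QueryResult<Subscriber> = {
  data: [{ id: "1", email: "a@example.com", created_at: "2024-01-01" }],
  error: null,
};
const rows = unwrapRows(mock);
console.log(rows[0].email); // "a@example.com"
```

Once the rows carry a concrete type, the compiler catches exactly the kind of mismatch the AI originally produced.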
I'm documenting this session in detail because it's the most instructive — and the most common failure mode in intermediate vibe coding projects. Auth is where things go wrong. Not because the AI is bad at auth, but because auth has many moving parts and AI often implements them inconsistently across files.
With auth properly implemented, Sessions 4 and 5 were fast and relatively clean. The admin dashboard — showing all subscribers, their signup dates, and a CSV export — took one focused 90-minute session.
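The only piece of the dashboard worth sketching is the CSV export, since naive string joins break on commas and quotes in the data. Something in this spirit — the function name and column names are assumptions, not the project's actual code:

```typescript
// Convert rows to CSV. Fields containing commas, quotes, or newlines
// are wrapped in quotes, with embedded quotes doubled (per RFC 4180).
function toCsv(rows: Array<Record<string, string>>, columns: string[]): string {
  const escape = (value: string): string =>
    /[",\n]/.test(value) ? `"${value.replace(/"/g, '""')}"` : value;
  const header = columns.map(escape).join(",");
  const lines = rows.map((row) =>
    columns.map((col) => escape(row[col] ?? "")).join(","),
  );
  return [header, ...lines].join("\n");
}

// Example: two subscribers, exported with a header row.
const csv = toCsv(
  [
    { email: "a@example.com", created_at: "2024-01-01" },
    { email: "b@example.com", created_at: "2024-01-02" },
  ],
  ["email", "created_at"],
);
console.log(csv);
```

Handling the quoting rules explicitly is the difference between an export that opens cleanly in a spreadsheet and one that silently mangles any name containing a comma.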
The key difference in these sessions: I started each one with a context block summarising what existed, what we were building, and explicit constraints. The AI never had to guess what the project was. Outputs improved dramatically compared to the early sessions.
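For reference, a context block in this style might look like the following — an illustrative template, not the exact text I used:

```text
CONTEXT
- Project: Next.js waitlist app with a Supabase backend.
- Already built: public signup form, schema + RLS policies, auth.
- This session: admin dashboard (subscriber list, signup dates, CSV export).
- Constraints: no new dependencies; all queries go through RLS;
  TypeScript strict mode must stay clean.
```

A block like this takes under a minute to write and removes an entire class of "the AI guessed wrong about the project" failures.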
Before deploying, I ran the AI security audit prompt from the Safety Guide against every file that handles user data. It found two issues I had completely missed: