Teams Guide · Gap 5
Building with Others
Shared context · PR reviews · Roles · Prompt libraries · Team conventions
Every principle in this series was written for a solo builder. But most meaningful software is built by teams — and AI-assisted development introduces new coordination challenges that teams haven't faced before.
When multiple people prompt against the same codebase without coordination, the result is architectural drift, inconsistent code, and PRs nobody fully understands. This guide prevents that.
More context loss on teams vs solo · 67% of AI PRs have no review process · 5 core team conventions needed
The biggest problems rarely get solved by one person. This guide shows you how to build with others without losing coherence.
Included in The Complete Series ($29).

What changes when more than one person is prompting

Solo vibe coding has a natural brake: you wrote the last 200 lines, you remember the architectural decision from yesterday, you'll notice when the AI proposes something inconsistent. With a team, that brake disappears.

Developer A builds the auth module Monday with one pattern. Developer B, not knowing this, builds the data layer Tuesday with a different pattern. The AI helps each of them individually — but neither the AI nor either developer has the full picture. By Friday the codebase has three different error-handling styles, two approaches to database queries, and a PR that touches both modules and breaks both.

This is the team vibe coding failure mode. It's not about individual skill — it's about shared context and agreed conventions, which the AI cannot infer and nobody thought to document.

The 4 New Failure Modes
1. Context fragmentation — each developer's AI session knows a different slice of the codebase.

2. Convention drift — without a shared style, every AI output reflects the prompter's personal approach, not the team's.

3. Unreviewed AI code — teams that review human PRs carefully often rubber-stamp AI-generated code because "the AI wrote it."

4. Knowledge silos — if only one person understands a module the AI built, the team has a single point of failure.

The shared context document

The single most important team practice. Every developer pastes this at the start of every AI session. It ensures the AI understands the project the same way regardless of who is prompting. It lives in the repo as AI_CONTEXT.md and is updated whenever the architecture changes significantly.

AI_CONTEXT.md — Team Template

## Project Overview
- Project: [name and one-sentence description]
- Stack: [frameworks, databases, key libraries]
- Repo structure: [brief folder map — what lives where]

## Architecture Decisions (do not change without team discussion)
- Auth: [how auth works — library, session approach]
- Data layer: [ORM/query approach, naming conventions]
- Error handling: [the agreed pattern — how errors are caught, logged, surfaced]
- State management: [approach for frontend state]

## Coding Conventions
- File naming: [e.g. kebab-case for files, PascalCase for components]
- Component structure: [where components live, how they're organised]
- API routes: [REST/RPC, naming pattern]
- Tests: [test framework, where tests live, what must be tested]

## What Not to Change
- [List the modules/decisions that require full team review before AI-assisted changes]

## Current Sprint Context
- Working on: [current feature/area]
- Recently changed: [what's changed in the last week — helps AI avoid stale assumptions]
- Known issues: [bugs being tracked, areas to avoid touching]
How to use it
Paste the full AI_CONTEXT.md at the start of every AI session. Then add your specific task. The AI now knows the full project context before it writes a single line. Update the document whenever an architectural decision changes — treat it like a README that's written for the AI, not just for new developers.
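The "paste the full context, then your task" step can be scripted so nobody skips it. A minimal sketch, assuming `AI_CONTEXT.md` lives at the repo root (the function name is illustrative, not part of any standard tooling):

```python
from pathlib import Path


def build_session_prompt(task: str, context_path: str = "AI_CONTEXT.md") -> str:
    """Prepend the shared context document to a task-specific prompt,
    so every AI session starts with the same project picture."""
    context = Path(context_path).read_text(encoding="utf-8").strip()
    return f"{context}\n\n---\n\nTask for this session:\n{task}"
```

A developer runs this once per session and pastes the result, instead of hand-assembling the preamble each time.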

Reviewing AI-generated pull requests

AI-generated code gets rubber-stamped in code review far more often than human-written code. The reasoning — "the AI wrote it so it's probably fine" — is exactly backwards. AI code needs more scrutiny in some areas, not less.

The AI PR Review Checklist

1. Does the reviewer understand every changed line?
If not, ask the author to explain it — in their own words, not by re-reading the AI's output. Understanding is the baseline for approval.

2. Does it follow the conventions in AI_CONTEXT.md?
Check the error-handling pattern, naming conventions, and file structure. The AI has no way to know these unless told — so check that they were provided and followed.

3. Are there new dependencies? Were they scrutinised?
AI frequently adds npm packages without asking. Each new dependency is a supply-chain risk. Every new package needs a deliberate yes from the team.

4. Has the security checklist been run on new inputs/endpoints?
Any PR that adds a new route, form input, or database query needs a quick safety pass before merge.

5. Has it been tested — not just "it works" but edge cases?
AI code often handles the happy path perfectly and fails on edge cases. The reviewer should ask: what happens with empty input? With a missing record? With a network failure?
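The dependency check in the list above is easy to automate. A sketch, assuming a Node project: compare the base branch's `package.json` (e.g. from `git show origin/main:package.json`) against the PR's version; the function name and CI wiring are hypothetical.

```python
import json


def new_dependencies(base_pkg_json: str, head_pkg_json: str) -> list[str]:
    """Names added to dependencies/devDependencies in the PR head but
    absent from the base branch -- each one needs a deliberate yes."""
    def names(raw: str) -> set[str]:
        pkg = json.loads(raw)
        found: set[str] = set()
        for section in ("dependencies", "devDependencies"):
            found |= set(pkg.get(section, {}))
        return found

    return sorted(names(head_pkg_json) - names(base_pkg_json))
```

Wiring this into CI turns "were new packages scrutinised?" from a memory test into a visible diff the team must approve.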
The PR Description Standard
All PRs that include significant AI-generated code should note this in the description. Not as a stigma — as useful information for reviewers. A simple note like "Auth flow via Claude, reviewed for security" sets the right expectation for review depth.
Use AI to review AI
One of the best team practices: before opening a PR, ask the AI to review its own output for the issues most likely to slip through human review. Prompt: "Review this code for: security issues, convention inconsistencies vs [paste AI_CONTEXT], and unhandled edge cases." It's not perfect but it catches a surprising number of issues before they hit review.
PR DESCRIPTION TEMPLATE

## What this does
[One paragraph — what problem it solves]

## How it was built
[Human-written / AI-assisted / AI-generated + reviewed]

## What reviewers should focus on
[Areas of complexity, architectural decisions made]

## Testing done
[What was tested manually + automated tests added]
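Teams that adopt a description template can enforce it with a tiny CI gate. A minimal sketch: the section names below mirror this guide's template and should be adapted to your own.

```python
# Hypothetical pre-merge check -- adapt the headings to your template.
REQUIRED_SECTIONS = (
    "## What this does",
    "## How it was built",
    "## What reviewers should focus on",
    "## Testing done",
)


def missing_sections(description: str) -> list[str]:
    """Template headings absent from a PR description, for a CI gate."""
    return [h for h in REQUIRED_SECTIONS if h not in description]
```

If the returned list is non-empty, the check fails and names the missing sections, so authors fix the description before review starts.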

Team roles and ownership

On a vibe coding team, technical roles need to account for AI-specific responsibilities. The most important addition: someone owns the AI_CONTEXT.md and is the gatekeeper for architectural decisions the AI shouldn't make unilaterally.

| Role | AI-specific responsibility | Veto power over |
|---|---|---|
| Tech Lead / Architect | Owns AI_CONTEXT.md. Reviews all PRs that touch core architecture. Decides which modules can be AI-assisted and which need human-first implementation. | Any AI-proposed architectural change. New dependencies. Changes to auth, data schema, or security-sensitive modules. |
| Senior Developer | Maintains the team prompt library. Reviews AI PRs for convention adherence. Mentors juniors on how to prompt effectively and review critically. | Convention violations. PRs where the author cannot explain the code. |
| Developer | Uses AI_CONTEXT.md at the start of every session. Runs the self-review checklist before opening PRs. Notes AI assistance in PR descriptions. | Their own code — must understand and own everything they merge, regardless of who wrote it. |
| QA / Tester | Tests AI-generated code with particular focus on edge cases and error states. Maintains a library of "AI common failure" test scenarios. | Releases where AI-generated code hasn't passed edge case testing. |

The shared prompt library

Every team accumulates prompts that produce great results for its specific stack, conventions, and codebase. Without a system, those prompts live in one developer's brain or message history and get lost when they leave. A shared prompt library is infrastructure.

prompts/
  context/
    ai-context.md           # shared context doc
    session-starter.txt     # paste at start of each session
  scaffolding/
    new-component.txt       # creates a new component our way
    new-api-route.txt       # creates a route with our patterns
    new-db-migration.txt    # migration following our conventions
  review/
    security-audit.txt      # security review before merging
    convention-check.txt    # checks against our patterns
    pr-review.txt           # comprehensive PR review prompt
  debugging/
    error-diagnosis.txt     # structured debugging prompt
    performance-review.txt  # finds performance issues
How to grow the library
Whenever a developer writes a prompt that produces an unusually good result, they commit it to the prompts/ folder with a comment explaining when to use it. In standups, a weekly "prompt of the week" — one minute, one prompt, why it worked — builds the library faster than any formal process.
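The "commit it with a comment explaining when to use it" convention can be made discoverable with a small index script. A sketch, assuming each prompt file's first line is that when-to-use comment; the function name is illustrative.

```python
from pathlib import Path


def index_prompt_library(root: str = "prompts") -> dict[str, str]:
    """Map each prompt file (relative path) to its first line -- which,
    by the team convention above, says when to use the prompt."""
    index: dict[str, str] = {}
    for path in sorted(Path(root).rglob("*.txt")):
        text = path.read_text(encoding="utf-8").strip()
        index[path.relative_to(root).as_posix()] = text.splitlines()[0] if text else ""
    return index
```

Printing this index in a README or standup keeps the library browsable as it grows.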
The session starter pattern
The most impactful entry in any team's prompt library is the session starter: a template that combines the AI_CONTEXT.md with the developer's specific task and the current sprint context. When every session starts the same way, outputs become dramatically more consistent across the team.
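One way to keep the session starter uniform is to store it as a fill-in template. A minimal sketch using the standard library; the field names and wording are illustrative, not prescribed.

```python
from string import Template

# Hypothetical session starter -- field names are illustrative.
SESSION_STARTER = Template(
    "$context\n\n"
    "Current sprint: $sprint\n\n"
    "Task for this session: $task\n\n"
    "Follow the conventions above and ask before touching anything "
    "listed under 'What Not to Change'."
)


def start_session(context: str, sprint: str, task: str) -> str:
    """Fill the shared template so every session opens identically."""
    return SESSION_STARTER.substitute(context=context, sprint=sprint, task=task)
```

Because the template is a committed file, changing the opening instructions for the whole team is a one-line PR rather than a Slack announcement.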

The 5 most common team failures

Failure 1
No shared context — each session starts blind
Every developer prompts the AI without a shared project document. The AI builds each feature in isolation, accumulating inconsistencies nobody notices until refactoring.
Fix: Create AI_CONTEXT.md. Make pasting it mandatory. Review it weekly.
Failure 2
AI PRs merged without real review
"The AI generated it" becomes a reason to skip scrutiny rather than increase it. Security issues, convention violations, and unhandled errors slip into production.
Fix: Adopt the AI PR checklist. Nobody approves code they can't explain.
Failure 3
Architectural decisions made by the AI, not the team
A developer asks the AI to "build the auth system." It makes three architectural decisions the team never agreed to. Now the whole team is building around them.
Fix: Architecture decisions go through the tech lead before AI implementation. AI implements, it does not decide.
Failure 4
Knowledge siloing around AI-built modules
One developer uses AI to build a complex module quickly. Nobody else understands it. When that developer leaves or is unavailable, the module becomes untouchable.
Fix: Every module must have at least two people who can explain how it works. AI-built modules require a "knowledge transfer" session before they're considered complete.
Failure 5
Prompt quality varies wildly across the team
Senior developers get dramatically better AI outputs than juniors, not because they're smarter but because they've developed prompting skill through experience. This widens the contribution gap.
Fix: Build the shared prompt library. Run monthly "prompting" sessions where the team shares what's working. Make prompting a shared skill, not a personal one.