Why Prompts Fail

Most "AI coding prompts" fail for a simple reason: they ask for a feature without supplying the system the feature lives in. Claude can write brilliant React, but if it doesn't know your schema, your design tokens, your auth model, your deployment target, or your tolerance for dependencies — it has to guess. Guesses compound. Entropy wins.

A Master Prompt solves this by front-loading every decision so the model never has to guess.

What a Master Prompt Actually Is

A Master Prompt is a single document — usually 8,000 to 15,000 words — that serves as the source of truth for an entire product. Not a marketing brief. Not a user story. A full engineering specification. Every time I start a new project (BagTracker, TableSharp, PowderLedger), I write one of these before a single line of code.

It has seven required sections:

  1. Product vision — who this is for, the problem, the success metric
  2. Feature spec — every screen, every action, every state
  3. Data model — complete Prisma schema with relationships and indexes
  4. API surface — route paths, request/response shapes, auth requirements
  5. Design system — colors as CSS variables, typography scale, spacing tokens, radii
  6. Deployment — platform, env vars, runtime, CI/CD expectations
  7. Constraints — what NOT to build, what NOT to install, what NOT to refactor

That last section is the one most people skip, and it's the most valuable.

A Real Example

Here's a fragment from the BagTracker Master Prompt — the schema section:

```prisma
model Shift {
  id          String   @id @default(cuid())
  userId      String
  startedAt   DateTime
  endedAt     DateTime?
  cashTips    Int      // cents
  cardTips    Int      // cents
  hourlyBase  Int      // cents
  venue       String?
  notes       String?
  createdAt   DateTime @default(now())

  user        User     @relation(fields: [userId], references: [id])

  @@index([userId, startedAt])
}
```

That one model tells Claude everything it needs to write the shift logger, the tax estimator, the analytics view, and the CSV export. Because the schema is decided, the agent doesn't spin cycles on "should tips be a Decimal or an Int?" — a question it can't answer without product context it doesn't have.
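The Int-cents decision also shows up downstream in application code. A minimal sketch of why integer cents beat floats for tip math (TypeScript; the helper names are my illustration, not part of the BagTracker prompt):

```typescript
// Integer cents avoid floating-point drift: 0.1 + 0.2 !== 0.3 in JS,
// but 10 + 20 === 30 always.
function totalTipsCents(shifts: { cashTips: number; cardTips: number }[]): number {
  return shifts.reduce((sum, s) => sum + s.cashTips + s.cardTips, 0);
}

// Convert to a display string only at the UI boundary.
function formatCents(cents: number): string {
  return new Intl.NumberFormat("en-US", {
    style: "currency",
    currency: "USD",
  }).format(cents / 100);
}

console.log(totalTipsCents([{ cashTips: 1250, cardTips: 3375 }])); // 4625
console.log(formatCents(4625)); // "$46.25"
```

Store integers, sum integers, format once on the way out — the same pattern Stripe and most payment APIs use.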

The Constraints Section

This is where most AI-built projects drift. Every model has a bias toward adding — another dependency, another abstraction, another "helper." Left unconstrained, it will install four date libraries and three form libraries across a project.

The constraints section is an explicit list of "no":

  • No date library. Use native Intl.DateTimeFormat for formatting and Date arithmetic for deltas.
  • No global state library. Use React Server Components + useState for client islands.
  • No component library. Tailwind + the tokens defined in §5 only.
  • No retries, no fallbacks, no circuit breakers in the initial build.
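Constraints like "no date library" are workable because the platform already covers the need. A sketch of what the native replacements look like under that rule (the function name is mine, not from the prompt):

```typescript
// Formatting: Intl.DateTimeFormat instead of a date library.
const shiftLabel = new Intl.DateTimeFormat("en-US", {
  month: "short",
  day: "numeric",
  hour: "numeric",
  minute: "2-digit",
});

// Deltas: plain Date arithmetic on epoch milliseconds.
function shiftHours(startedAt: Date, endedAt: Date): number {
  return (endedAt.getTime() - startedAt.getTime()) / 3_600_000;
}

const start = new Date("2024-03-01T17:00:00Z");
const end = new Date("2024-03-02T01:30:00Z");
console.log(shiftHours(start, end)); // 8.5
console.log(shiftLabel.format(start)); // locale- and timezone-dependent string
```

That's the whole surface a tip tracker needs — which is exactly why the "no" is safe to write down once.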

Every one of those is a decision I've already made. Writing them down once saves me from re-litigating them in every session.

How It Changes the Workflow

With a Master Prompt in place, a coding session looks different:

  1. I open Claude Code and paste the relevant sections of the Master Prompt.
  2. I ask for a specific slice: "implement the shift logger form and the /api/shifts POST route per §2.3 and §4.1."
  3. Claude generates implementation across multiple files, referencing the shared schema and tokens automatically.
  4. I review, push, and move on.
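To make step 2 concrete, here is a hypothetical request-body validator for that /api/shifts POST route, derived from the Shift model. It's an illustration of what the agent would produce from the spec, not BagTracker's actual code:

```typescript
// Shape of a valid POST /api/shifts body, mirroring the Shift model.
type ShiftInput = {
  startedAt: string;   // ISO timestamp
  endedAt?: string;
  cashTips: number;    // cents
  cardTips: number;    // cents
  hourlyBase: number;  // cents
  venue?: string;
  notes?: string;
};

function parseShiftInput(body: unknown): ShiftInput {
  if (typeof body !== "object" || body === null) {
    throw new Error("body must be an object");
  }
  const b = body as Record<string, unknown>;

  const startedAt = b.startedAt;
  if (typeof startedAt !== "string" || Number.isNaN(Date.parse(startedAt))) {
    throw new Error("startedAt must be an ISO timestamp");
  }

  // Money fields must be non-negative integer cents, per the schema.
  for (const key of ["cashTips", "cardTips", "hourlyBase"] as const) {
    if (!Number.isInteger(b[key]) || (b[key] as number) < 0) {
      throw new Error(`${key} must be a non-negative integer (cents)`);
    }
  }

  // Optional fields (endedAt, venue, notes) omitted for brevity.
  return b as unknown as ShiftInput;
}
```

Because the schema pinned tips to Int cents, the validator rejects `12.50` outright instead of silently coercing it — a decision made once in the Master Prompt, enforced everywhere.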

This is how two-week SaaS launches happen. The AI isn't faster than me; I'm the one who got faster, because I stopped making the same ten decisions at the start of every feature.

Common Mistakes

  • Too vague. "A fintech app for tipped workers" is a tagline, not a prompt. Describe the first screen in concrete nouns.
  • No schema. If you don't write the Prisma models, Claude will invent them — and they will be wrong.
  • No design tokens. Without CSS variables for color and spacing, every generated component will use a slightly different palette.
  • Over-scoped. A Master Prompt should cover one product. If you find yourself writing "depending on the tenant," split it into two.
  • No version. Treat the Master Prompt like code. Commit it. Update it when decisions change. Stale prompts produce stale code.

Prompt Engineering = Product Engineering

The "prompt engineering" branding makes this sound like a trick. It isn't. Writing a Master Prompt is writing a product specification — it just happens to be the specification a model can also execute against.

If you can't write a Master Prompt, you can't ship a product. The model isn't hiding the work. It's exposing the work you were already supposed to do.


Want help writing a Master Prompt for your next build? See how I work.