Design as a First-Class Concern
AI-DLC now includes a design direction system and persistent project knowledge layer, making visual design and domain understanding part of the methodology — not an afterthought.
The AI-Driven Development Lifecycle — from idea to production.
Scroll down. The story starts with the characters. Purple sections expand for deeper reference material.
Before the story begins, let’s meet the players. Every character has a role. Nobody works alone.
You provide the vision and make key decisions.
The AI you're talking to right now. One agent, many roles.
When it’s time to build, Claude spawns fresh specialist agents — each wearing a different “hat” that defines their role. A hat is a set of injected instructions that tells the agent how to behave, what gates to pass, and when to hand off.
Here's what actually happens when a hat is applied:
Think of it like hiring a contractor: you bring someone in (fresh agent), hand them a job description (hat instructions), and they do exactly that job. When the job’s done, they leave. The next job gets a new person with a new job description.
Each hat is a markdown file (plugin/hats/{hat-name}.md) that defines:
When to hand off: /ai-dlc:advance or /ai-dlc:fail.

Each hatted agent starts with a clean context window.
Reads the criteria, checks for blockers, creates a tactical plan for this iteration.
Implements code incrementally, runs quality gates after every change, fixes what breaks.
Verifies every success criterion with evidence, checks code quality, approves or sends back.
Reads the criteria, checks for blockers, creates a tactical plan.
Implements code incrementally, runs quality gates.
Attacks the code: tests for injection, auth bypass, data exposure.
Fixes what Red Team found: patches root causes, adds security tests.
Verifies every success criterion with evidence.
Writes ONE failing test for ONE behavior. The test MUST fail.
Writes the minimum code to make the test pass. Nothing more.
Cleans up the code without changing behavior. Runs tests after every change.
Verifies every success criterion with evidence.
Reads the criteria, checks for blockers, creates a tactical plan.
Guided by the project's design direction and blueprint. Produces wireframes, tokens, and component specs -- not production code.
Verifies every success criterion with evidence.
Reproduces the bug, captures errors, logs, timeline. Reports facts only.
Generates 3+ theories about the cause.
Tests hypotheses one at a time. Isolates variables.
Confirms root cause, designs minimal fix, adds regression test.
One-shot subagents during elaboration only.
Spawned once after ALL units are done.
Automated hooks (shell scripts) that run silently.
The hatted agents are all Claude -- fresh instances with clean context, each focused on one job for one unit.
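To make the hat idea concrete, here is a hypothetical sketch of what a hat file under plugin/hats/ might contain. Only the path convention comes from the text above; the file name and every line of the body are illustrative, not the actual plugin contents:

```markdown
<!-- plugin/hats/builder.md (hypothetical example) -->
# Builder

## Role
Implement the current unit incrementally. Do not touch code outside this unit.

## Gates
Run every command in quality_gates after each change; you may not stop
until all of them pass.

## Handoff
When every task in the plan is complete, call /ai-dlc:advance.
If you hit a blocker you cannot resolve, call /ai-dlc:fail with details.
```

Because the instructions travel with the file, a fresh agent needs nothing but this one document to know its job, its gates, and its exit conditions.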
Every feature follows the same rhythm. Four steps. Then repeat.
For cross-functional teams, passes let design, product, and dev each run this loop independently -- the output of one becomes the input to the next.
Most features follow this full cycle. But there are shortcuts:
/ai-dlc:quick -- skip everything for tiny fixes: typos, config changes, one-liners.
/ai-dlc:autopilot -- AI handles the whole cycle autonomously for well-understood features.

The difference between hoping for the best and knowing what done looks like.
This is why AI-DLC spends so much time on elaboration. The planning phase isn’t overhead -- it’s the thing that makes everything else work.
Good criteria are the ones an AI can check without asking you.
This is the most important part. Good planning means the AI can build autonomously. Bad planning means it keeps asking you questions.
What follows is the actual conversation flow. Blue bubbles on the left are you. Gold bubbles on the right are the AI. Gray bubbles in the center are system events.
This is automatic. You wait. The AI reads your codebase -- file structure, database schemas, API endpoints, existing patterns -- and writes its findings to a discovery document.
New projects only: A visual design direction picker guides you through choosing an aesthetic -- Brutalist, Editorial, Dense, or Playful -- with tunable parameters. The selection produces a design blueprint that shapes every wireframe and UI component downstream.
Example success criterion: all tests in tests/auth/ pass. Units 2 and 3 can run in parallel once Unit 1 is done. Unit 4 waits for both.
The plan is saved to the ai-dlc/{slug}/main branch. Time to build.

That's it. Planning is done.
Nine exchanges. Maybe ten minutes of your time. The AI now has everything it needs to work autonomously. You can step away. You can watch. Either way, the building starts now.
Now the AI works. You typed /ai-dlc:execute. Three loops nest inside each other, from big to small.
Units flow through a pipeline. Independent units can build in parallel. Each one unlocks the next.
The dependency graph (DAG) determines the order. Unit 1 has no dependencies, so it starts first. Once Unit 1 completes, Units 2 and 3 can build in parallel. Unit 4 waits for both 2 and 3.
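That scheduling rule can be sketched in a few lines of shell. Everything here -- the unit names and the deps format -- is illustrative; it mirrors only the ordering logic described above, not AI-DLC's actual state files:

```shell
# Each line: unit, a colon, then its space-separated prerequisites
# (an illustrative format, not AI-DLC's real one).
deps="unit2:unit1
unit3:unit1
unit4:unit2 unit3"
done_units="unit1"   # Unit 1 has no dependencies and finished first

is_done() { echo "$done_units" | tr ' ' '\n' | grep -qx "$1"; }

# A unit is "ready" when every one of its prerequisites is done.
ready_units() {
  echo "$deps" | while IFS=: read -r unit prereqs; do
    ok=1
    for p in $prereqs; do is_done "$p" || ok=0; done
    if [ "$ok" = 1 ]; then echo "$unit"; fi
  done
}

ready_units   # unit2 and unit3 are ready; unit4 still waits
```

With unit1 done, the sketch reports unit2 and unit3 as ready in the same pass, which is exactly the parallelism described above.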
Inside each unit, the AI cycles through hats. Each hat has one job. Quality gates stand between them — every time the Builder finishes a session, it must pass the gates before stopping.
The gates were detected from your repo tooling during /ai-dlc:elaborate and saved to intent.md frontmatter. The harness reads them and runs each command synchronously — the agent literally cannot stop until all pass. Builders can add unit-specific gates but cannot remove existing ones.
The unit is marked complete. Its branch is merged. Claude picks the next ready unit from the dependency graph.
The Reviewer writes specific, actionable feedback. “Test for expired tokens is missing” not “needs more tests.” The Builder reads this feedback and iterates.
When a design pass is active, the same rotation applies -- but the Builder becomes a Designer. Instead of production code, it produces wireframes, design tokens, and component specs. The design blueprint from elaboration feeds into every artifact.
This is the clever part. AI agents have limited memory -- called a “context window.” When the memory fills up, a normal AI would forget everything. AI-DLC solves this by saving progress to files on disk. When a new session starts, the AI reads those files and picks up exactly where it left off. This is called a bolt -- one focused work session.
enforce-iteration.sh fires when a session ends. It checks what work remains. If units are still in progress, it tells the next session to call /ai-dlc:execute to continue. The AI never "forgets" mid-task.

The assembly line picks the next unit. The hat rotation builds that unit through plan, build, review cycles. And within each hat's work session, the bolt ensures that even if the AI's memory runs out, progress is never lost.
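A session-end hook in that spirit can be sketched as follows. The state-file location and the exact JSON shape are assumptions for illustration, not enforce-iteration.sh's real internals:

```shell
# Hypothetical sketch of a session-end hook: if any unit is still in
# progress, emit a "block" decision pointing the next session at
# /ai-dlc:execute. The .md state-file layout here is assumed.
check_remaining() {
  state_dir="$1"   # assumed location of per-unit state files
  n=$(cat "$state_dir"/*.md 2>/dev/null | grep -c 'status: in_progress')
  if [ "$n" -gt 0 ]; then
    echo '{"decision": "block", "reason": "Units remain. Run /ai-dlc:execute to continue."}'
  fi
}
```

When nothing is in progress, the hook stays silent and the session is allowed to end normally.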
A single feature might involve 4 units, each going through 2-3 hat rotations, each rotation spanning 1-3 bolts. That’s potentially hours of autonomous building -- all from one planning conversation.
Two systems keep everything on track: quality gates that enforce standards, and hooks that run silently in the background.
Every time the AI tries to stop, it must pass through a gate. Quality gates are harness-enforced — defined in frontmatter, run by hooks, no exceptions.
Enforced by quality-gate.sh on every Stop — the agent cannot bypass these.
Quality gates aren't hardcoded. They're discovered from your repo, written into version-controlled frontmatter, and enforced mechanically by the harness — all without any manual configuration.
During /ai-dlc:elaborate, the elaborate-discover skill scans your repository for tooling and proposes the right quality gate commands. You confirm or customize them.
package.json → npm test, npm run typecheck, npm run lint
bun.lockb / bun.lock → bun test (overrides npm)
go.mod → go test ./..., go vet ./...
pyproject.toml → pytest, mypy .
Cargo.toml → cargo test, cargo clippy
Makefile → make test (if target exists)

Confirmed gates are saved to intent.md. Builders can add unit-specific gates during construction -- but never remove existing ones (the ratchet rule).
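The detection step is mostly a chain of file checks. This sketch mirrors part of the mapping above; the real elaborate-discover skill also handles bun overrides, Makefile targets, and user confirmation:

```shell
# Illustrative tooling detection: propose gate commands from marker files.
# Only a subset of the documented mapping is shown.
detect_gates() {
  dir="$1"
  if [ -f "$dir/package.json" ]; then
    echo "npm test"; echo "npm run typecheck"; echo "npm run lint"
  elif [ -f "$dir/go.mod" ]; then
    echo "go test ./..."; echo "go vet ./..."
  elif [ -f "$dir/pyproject.toml" ]; then
    echo "pytest"; echo "mypy ."
  elif [ -f "$dir/Cargo.toml" ]; then
    echo "cargo test"; echo "cargo clippy"
  fi
}
```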
---
title: Add auth middleware
quality_gates:
  - name: tests
    command: npm test
  - name: typecheck
    command: npm run typecheck
  - name: lint
    command: npm run lint
---

---
title: Implement JWT validation
quality_gates:
  - name: auth-integration
    command: npm test -- --grep auth
---

Intent gates + unit gates are merged additively. All run on every Stop during this unit.
Whenever a Builder, Implementer, or Refactorer tries to stop, quality-gate.sh fires synchronously. It reads quality_gates: from intent and unit frontmatter, runs each command, and blocks the stop if any fail.
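A minimal sketch of that enforcement loop, assuming gates appear as command: entries in frontmatter. The real quality-gate.sh is more careful about YAML parsing and hook output; this only shows the run-everything, fail-fast shape:

```shell
# Run every "command:" entry found in a frontmatter file; stop at the
# first gate that fails. A Stop hook wrapping this would block the stop
# whenever this function returns nonzero.
run_gates() {
  file="$1"
  grep 'command:' "$file" | sed 's/.*command:[[:space:]]*//' |
  while IFS= read -r cmd; do
    echo "gate: $cmd"
    sh -c "$cmd" || { echo "gate FAILED: $cmd"; exit 1; }
  done
}
```

Because the commands run synchronously and the hook inspects the exit status, the agent has no path around a failing gate: fix the code or keep iterating.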
Planner, Reviewer, and Designer hats skip gate enforcement — they're not writing code. Only building hats (Builder, Implementer, Refactorer) are enforced.
Three hard gates stand between hats. The harness enforces them mechanically — the agent cannot bypass them:
Planner must save a complete implementation plan before the Builder can start.
“No building without a blueprint.”
Harness-enforced: quality_gates: in frontmatter are executed by quality-gate.sh on every Stop. The agent cannot advance until all gates pass.
“No review of broken code.”
Every success criterion verified with concrete evidence before marking the unit done.
“No shipping without proof.”
Hooks are automated scripts that fire at specific moments during a session. They run silently. You never see them. But they keep everything honest.
AI is powerful but imperfect. Without guardrails, an AI might skip tests, write code outside its current task, or lose track of progress when a session ends. These hooks and gates create a safety net -- not to slow things down, but to keep the AI honest and productive. Think of them as the rules of the road that make autonomous driving possible.
All units are done. The code is written. The tests pass. But the story isn’t over yet. Four stages remain.
All the individual pieces are done. Now they need to work together.
The merged code is validated as a whole. Cross-unit tests run. Does the callback handler actually connect to the session manager? Does the login UI correctly call the callback endpoint? If issues are found, specific units get sent back for rework -- not the entire feature.
Creating a pull request now...
The code is deployed. Now it needs to be operated.
Operations aren’t an afterthought -- they’re file-based specs created during execution, living alongside the code they support.
This is what separates AI-DLC from “just running an AI.” The system learns from every cycle.
Example improvement: adding eslint-plugin-security to the quality gates.

Why reflection matters: each cycle makes the next one better. The AI remembers what worked, what didn't, and adapts. The first feature takes the longest. Every subsequent one is faster and smoother.
Every command you need, organized by when you’ll reach for it
Creates .ai-dlc/settings.yml with your preferences.
Subcommands: add, list, review, promote.
The main entry point: define intent, explore domain, decompose into units, set success criteria.
Picks up units from the DAG, spawns hatted agents, iterates until done.
Preserves frontmatter and state, re-queues affected units.
Typos, config tweaks, import fixes. No state files, no subagents.
Produces reflection.md with what worked and settings-recommendations.md with improvements.
List, execute, deploy, monitor, and teardown operational tasks.
Creates a new intent that builds on a previous one, carrying forward all context.
Analyzes git history and intent state, writes structured learning files.
These fire behind the scenes -- you never call them directly:
Every completed intent makes the next one better. Reflections feed into seeds. Seeds surface during elaboration. Compound learnings inform future builders. The methodology improves itself -- and your codebase -- with every cycle.
/plugin marketplace add thebushidocollective/ai-dlc
/plugin install ai-dlc@thebushidocollective-ai-dlc --scope project
AI-DLC gains first-class integration with six design tools and a persistent knowledge system that gives agents institutional memory across features.
AI-DLC now supports typed iteration through disciplinary lenses — design, product, dev — where each pass shapes how hats behave and which workflows are available.