By The Bushido Collective

AI-DLC: How It Works

The AI-Driven Development Lifecycle — from idea to production.

Blue = Human actions
Gold = AI actions
Gray = System / automated
Purple = Deep Dive (reference)

Scroll down. The story starts with the characters. Purple sections expand for deeper reference material.

Prologue: Meet the Cast

Before the story begins, let’s meet the players. Every character has a role. Nobody works alone.

🧑

You (Human)

You provide the vision and make key decisions.

  • During planning: you answer questions and approve specs
  • During building: you watch, step away, or unblock
  • During reflection: you validate insights and choose next steps
🤖

Claude (Session Agent)

The AI you're talking to right now. One agent, many roles.

  • Elaborator during planning -- asks questions, explores your codebase, writes specs
  • Executor during building -- manages the unit queue, spawns hat agents, tracks progress
  • Analyst during reflection -- analyzes what happened, recommends improvements
  • Spawns fresh specialist agents for each unit of work
🎩

The Hatted Agents

When it’s time to build, Claude spawns fresh specialist agents — each wearing a different “hat” that defines their role. A hat is a set of injected instructions that tells the agent how to behave, what gates to pass, and when to hand off.

Planner · Builder · Reviewer · Designer · Red Team · Blue Team · Test Writer · Implementer · Refactorer · Observer · Hypothesizer · Experimenter · Analyst
See all workflows and hat details
How It Works

What is a “Hat”?

A hat is a set of injected instructions that tells a fresh AI agent how to behave. Here’s what actually happens:

1. Claude spawns a fresh agent -- clean context, no prior baggage
2. The system injects hat instructions -- a markdown file that defines the role's behavior, rules, and quality gates
3. The agent works according to its hat -- it only knows how to be a Builder, or a Reviewer, etc.

Think of it like hiring a contractor: you bring someone in (fresh agent), hand them a job description (hat instructions), and they do exactly that job. When the job’s done, they leave. The next job gets a new person with a new job description.

What’s in a hat file?

Each hat is a markdown file (plugin/hats/{hat-name}.md) that defines:

  • What the agent MUST do (required steps)
  • What the agent MUST NOT do (boundaries)
  • Quality gates it must pass before finishing
  • When to call /ai-dlc:advance or /ai-dlc:fail
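For a concrete picture, here is a hedged sketch of what a Builder hat file might contain. The headings and rules are illustrative assumptions, not the actual contents of plugin/hats/builder.md:

```markdown
<!-- Hypothetical sketch of plugin/hats/builder.md -- illustrative only -->
# Builder Hat

## MUST
- Follow the saved implementation plan for the current unit
- Run the quality gates after every change and fix failures immediately

## MUST NOT
- Touch files outside the current unit's scope
- Remove or weaken existing quality gates

## Quality gates
- All frontmatter-defined gates must pass before stopping

## Handoff
- Call /ai-dlc:advance when every planned step is done
- Call /ai-dlc:fail if blocked after repeated attempts
```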
Why fresh agents?

Each hatted agent starts with a clean context window. This means:

  • No confusion from previous units or hats
  • Full context budget for the current task
  • Failures in one unit don’t bleed into another
Default Workflow (most common)
📋
Planner

Reads the criteria, checks for blockers, creates a tactical plan for this iteration.

🔨
Builder

Implements code incrementally, runs quality gates after every change, fixes what breaks.

🔍
Reviewer

Verifies every success criterion with evidence, checks code quality, approves or sends back.

Adversarial Workflow (security-focused)
📋
Planner

Reads the criteria, checks for blockers, creates a tactical plan.

🔨
Builder

Implements code incrementally, runs quality gates.

⚔️
Red Team

Attacks the code: tests for injection, auth bypass, data exposure.

🛡️
Blue Team

Fixes what Red Team found: patches root causes, adds security tests.

🔍
Reviewer

Verifies every success criterion with evidence.

TDD Workflow (test-driven)
✍️
Test Writer

Writes ONE failing test for ONE behavior. The test MUST fail.

⚙️
Implementer

Writes the minimum code to make the test pass. Nothing more.

🧹
Refactorer

Cleans up the code without changing behavior. Runs tests after every change.

🔍
Reviewer

Verifies every success criterion with evidence.

Design Workflow (UI/UX)
📋
Planner

Reads the criteria, checks for blockers, creates a tactical plan.

🎨
Designer

Guided by the project's design direction and blueprint. Produces wireframes, tokens, and component specs -- not production code.

🔍
Reviewer

Verifies every success criterion with evidence.

Hypothesis Workflow (debugging)
👁️
Observer

Reproduces the bug, captures errors, logs, timeline. Reports facts only.

💡
Hypothesizer

Generates 3+ theories about the cause.

🧪
Experimenter

Tests hypotheses one at a time. Isolates variables.

📊
Analyst

Confirms root cause, designs minimal fix, adds regression test.

🔬

The Helpers

One-shot subagents during elaboration only.

  • Discovery Agent -- Explores codebase structure, APIs, schemas
  • Wireframe Agent -- Generates HTML mockups for UI units
  • Ticket Sync Agent -- Creates epics and tickets in your project tracker
  • Spec Reviewer -- Validates completeness and consistency of the spec

The Integrator

Spawned once after ALL units are done.

  • Validates everything works together on the merged branch
  • Runs the 10-step integration check
  • Reports ACCEPT or REJECT
⚙️

The System

Automated hooks (shell scripts) that run silently.

  • Saves progress so nothing is lost between sessions
  • Enforces quality gates, warns about context limits
  • Makes the whole thing resilient to context window resets

The hatted agents are all Claude -- fresh instances with clean context, each focused on one job for one unit.

Act 1: The Big Picture

Every feature follows the same rhythm. Four steps. Then repeat.

1. Plan Together: You + AI collaborate on what to build
2. Build & Verify: AI works autonomously -- you watch or step away
3. Deliver: AI creates a pull request -- you approve
4. Learn & Improve: AI analyzes what happened -- you validate insights

Repeat for the next feature.
💬
Plan: You describe what you want. The AI asks questions until it truly understands.
🔨
Build: The AI writes code, runs tests, and reviews its own work -- all autonomously.
📦
Deliver: The AI packages everything into a pull request. You review and approve.
💡
Learn: The AI reflects on what went well and what to improve for next time.

For cross-functional teams, passes let design, product, and dev each run this loop independently -- the output of one becomes the input to the next.

Most features follow this full cycle. But there are shortcuts:

/ai-dlc:quick -- Skip everything for tiny fixes: typos, config changes, one-liners
/ai-dlc:autopilot -- AI handles the whole cycle autonomously for well-understood features

Why Specs Matter

The difference between hoping for the best and knowing what done looks like.

🎲

Vibe Coding

the bad way
  • Human: “Build me a login page”
  • AI builds... something
  • Human: “No, not like that”
  • AI rebuilds... differently
  • Human: “Closer, but the styling is wrong”
  • Repeat forever
Problem: No definition of “done.” The AI guesses. You react. Progress is random.
🔀Tangled loop -- going in circles
📋

Spec-Driven Dev

better, but still a hand-off
  • Someone writes a detailed spec or PRD
  • Spec is handed to an AI or developer to implement
  • Implementer interprets the spec (fills in gaps with assumptions)
  • Reviewer finds mismatches between intent and implementation
  • Back-and-forth to close the gap between what was meant and what was built
Problem: The spec writer and the implementer have different mental models. The hand-off creates gaps that only surface during review.
↕️Spec → hand-off → interpretation gap
🎯

AI-DLC Elaboration

no hand-off — co-created
  • Human and AI co-create the spec through conversation — not a document tossed over a wall
  • AI explores your codebase, discovers the domain model, and asks clarifying questions — no interpretation gap
  • Success criteria are machine-verifiable — tests, types, performance thresholds the AI can check itself
  • The same system that helped define the spec also builds it — shared understanding, zero hand-off
  • Quality gates enforce the criteria automatically — work continues until they’re met
Result: No interpretation gap. The spec is a shared understanding, not a hand-off artifact. Clear criteria = autonomous AI.
Conversation → shared understanding → verified code
The AI-DLC insight: Autonomy is a function of criteria clarity.
Autonomy = f(Criteria Clarity)
  • Vague criteria (“make it look good”) → AI keeps asking you questions, can’t work alone
  • Clear criteria (“all tests pass, p95 < 200ms, WCAG AA contrast”) → AI iterates autonomously until done

This is why AI-DLC spends so much time on elaboration. The planning phase isn’t overhead -- it’s the thing that makes everything else work.

See the difference

Good criteria are the ones an AI can check without asking you.

❌ Bad criterion
"The login page should be fast"
What does 'fast' mean? The AI can't verify this.
✅ Good criterion
"Login API responds in <200ms at p95 under 100 concurrent users"
Measurable. The AI runs a load test and checks.
❌ Bad criterion
"The UI should look nice"
The AI has no idea what "nice" means to you.
✅ Good criterion
"All pages pass WCAG AA contrast (4.5:1 body, 3:1 large text), use design system tokens only, and render correctly at 375px, 768px, and 1280px"
Verifiable. The AI checks contrast ratios, validates token usage, and tests breakpoints.
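Because good criteria are machine-checkable, a script can verify them without asking anyone. A minimal sketch in shell, using invented sample latencies and the nearest-rank method for p95 (real verification would run an actual load test):

```shell
#!/usr/bin/env bash
# Verify a criterion like "p95 latency < 200ms" against sample data.
# The latencies below are made up for illustration.
latencies=(120 95 180 150 190 130 110 170 160 140)

# Sort ascending, then take the nearest-rank p95: ceil(0.95 * n) - 1 (0-based).
sorted=($(printf '%s\n' "${latencies[@]}" | sort -n))
n=${#sorted[@]}
idx=$(( (95 * n + 99) / 100 - 1 ))
p95=${sorted[$idx]}

if (( p95 < 200 )); then verdict=pass; else verdict=fail; fi
echo "p95=${p95}ms -> $verdict"   # p95=190ms -> pass
```

A "fail" here would block the gate, exactly like a failing test.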

Act 2: Planning Together

This is the most important part. Good planning means the AI can build autonomously. Bad planning means it keeps asking you questions.

💬

What follows is the actual conversation flow. Blue bubbles on the left are you. Gold bubbles on the right are the AI. Gray bubbles in the center are system events.

Exchange 1 -- “What do you want to build?”
🤖Claude
What do you want to build or accomplish?
🧑You
I want to add OAuth login to our app.
Exchange 2 -- “Let me understand better”
🤖Claude
A few questions to narrow things down:
Who uses this?
End users / Admin users / Both
Which OAuth providers?
Google / GitHub / Both / Custom
🧑You
End users / Both (Google + GitHub) / Login page
Exchange 3 -- AI goes exploring
⚙️System
🔍
Claude delegates exploration to a Discovery Agent...

This is automatic. You wait. The AI reads your codebase -- file structure, database schemas, API endpoints, existing patterns -- and writes its findings to a discovery document.

Reading file structure
Scanning database schemas
Mapping API endpoints
Finding existing patterns

New projects only: A visual design direction picker guides you through choosing an aesthetic -- Brutalist, Editorial, Dense, or Playful -- with tunable parameters. The selection produces a design blueprint that shapes every wireframe and UI component downstream.

Exchange 4 -- “Here’s what I found”
🤖Claude
Domain model
Entities: User, OAuth Token, Session
Relationships: User has many OAuth Tokens
Gap found: No existing session management -- we’ll need to add this
🧑You
Looks accurate
Exchange 5 -- “Which workflow should we use?”
🤖Claude
I recommend the default workflow:
Default: Planner → Builder → Reviewer (recommended)
Adversarial: Planner → Builder → Red Team → Blue Team → Reviewer
TDD: Test Writer → Implementer → Refactorer → Reviewer
Design: Planner → Designer → Reviewer
Hypothesis: Observer → Hypothesizer → Experimenter → Analyst
🧑You
Default
Exchange 6 -- “What does success look like?”
🤖Claude
Let’s define 3-7 measurable success criteria:
✅ All tests in tests/auth/ pass
✅ OAuth flow completes in <2s p95
✅ OWASP top 10 -- no critical findings
🧑You
Looks good. Add one more: “Users can link multiple OAuth providers to one account.”
Exchange 7 -- “Here’s how I’ll break this down”
🤖Claude
I’ve decomposed this into four units of work:
Unit 1: OAuth Provider Setup (backend) -- no dependencies
Unit 2: Callback Handler (backend) -- depends on Unit 1
Unit 3: Session Management (backend) -- depends on Unit 1
Unit 4: Login UI (frontend) -- depends on Units 2 & 3

Units 2 and 3 can run in parallel once Unit 1 is done. Unit 4 waits for both.

🧑You
You review each unit individually:
Unit 1: Approved / Unit 2: Approved / Unit 3: Approved / Unit 4: Approved
Exchange 8 -- “How should we deliver?”
🤖Claude
How would you like to review the work?
Review each unit as a separate PR / Build everything, one PR at the end / Build on main branch
🧑You
Review each unit as a separate PR
Exchange 9 -- “Ready to build!”
🤖Claude
Intent: Add OAuth login
Workflow: Default (Planner → Builder → Reviewer)
Criteria: 4 success criteria defined
Units: 4 units with dependency ordering
Delivery: Separate PRs per unit
Shall I start building, or open a PR for your team to review the spec first?
🧑You
Start building / Open spec PR for review
⚙️System
All artifacts committed to ai-dlc/{slug}/main branch. The plan is saved. Time to build.

That’s it. Planning is done.

Nine exchanges. Maybe ten minutes of your time. The AI now has everything it needs to work autonomously. You can step away. You can watch. Either way, the building starts now.

Act 3: Building

Now the AI works. You typed /ai-dlc:execute. Three loops nest inside each other, from big to small.

OUTER: Assembly Line
MIDDLE: Hat Rotation
INNER: Bolt
One focused work session

Outer Loop: The Assembly Line

Units flow through a pipeline. Independent units can build in parallel. Each one unlocks the next.

Unit 1: OAuth Setup
✅ Done
Unit 2: Callback
🔨 Building
Unit 3: Sessions
⏳ Waiting (needs Unit 1)
Unit 4: Login UI
⏳ Waiting (needs 2 & 3)

The dependency graph (DAG) determines the order. Unit 1 has no dependencies, so it starts first. Once Unit 1 completes, Units 2 and 3 can build in parallel. Unit 4 waits for both 2 and 3.
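The ready-unit selection described above can be sketched in a few lines of shell. The "unit:dep1,dep2" format below is an assumption for illustration -- the real implementation reads dependencies from unit frontmatter:

```shell
#!/usr/bin/env bash
# Given "unit:dep1,dep2" lines, print the units whose dependencies are all done.
deps='unit-1:
unit-2:unit-1
unit-3:unit-1
unit-4:unit-2,unit-3'
done_units=" unit-1 "   # Unit 1 already complete

ready_units=""
while IFS=: read -r unit needs; do
  [[ $done_units == *" $unit "* ]] && continue     # skip finished units
  ready=true
  for d in ${needs//,/ }; do
    [[ $done_units == *" $d "* ]] || ready=false   # an unmet dependency blocks it
  done
  if $ready; then
    ready_units+="$unit "
    echo "ready: $unit"
  fi
done <<< "$deps"
# With unit-1 done, units 2 and 3 are ready; unit 4 still waits on both.
```

Each time a unit finishes, it joins the done set and the scan repeats, which is how Units 2 and 3 come to run in parallel.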

You
Watching. Or away doing something else. Available if needed.
🔨
The Builder
Writing code, running tests, committing changes. Fully autonomous.

Middle Loop: The Hat Rotation

Inside each unit, the AI cycles through hats. Each hat has one job. Quality gates stand between them — every time the Builder finishes a session, it must pass the gates before stopping.

📋 Planner
Creates the implementation plan
🔨 Builder
Writes code, runs commands
🚧 Quality Gates
Tests pass? Types OK? Lint clean?
🔍 Reviewer
Verifies every criterion with evidence
✅ Pass
Unit complete!
→ Next unit
❌ Fail
Feedback given
→ Back to Builder
🚧Quality gates fire on every Builder stop

The gates were detected from your repo tooling during /ai-dlc:elaborate and saved to intent.md frontmatter. The harness reads them and runs each command synchronously — the agent literally cannot stop until all pass. Builders can add unit-specific gates but cannot remove existing ones.

When the Reviewer passes:

The unit is marked complete. Its branch is merged. Claude picks the next ready unit from the dependency graph.

When the Reviewer fails:

The Reviewer writes specific, actionable feedback. “Test for expired tokens is missing” not “needs more tests.” The Builder reads this feedback and iterates.

When a design pass is active, the same rotation applies -- but the Builder becomes a Designer. Instead of production code, it produces wireframes, design tokens, and component specs. The design blueprint from elaboration feeds into every artifact.

Inner Loop: The Bolt (Context Recovery)

This is the clever part. AI agents have limited memory -- called a “context window.” When the memory fills up, a normal AI would forget everything. AI-DLC solves this by saving progress to files on disk. When a new session starts, the AI reads those files and picks up exactly where it left off. This is called a bolt -- one focused work session.

🟢AI starts working
100%
🟢Writing code, running tests
75%
🟡Context getting low
50%
🟠Warning! Save your work
35% ⚠️
🔴Context critical!
25% 🚨
💾AI saves state to files -- session ends
8%
🔄New session starts -- fresh context!
100%
📂AI loads state -- continues exactly where it left off
100%
⚙️System
enforce-iteration.sh fires when a session ends. It checks what work remains. If units are still in progress, it tells the next session to call /ai-dlc:execute to continue. The AI never “forgets” mid-task.
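The save/load mechanics of a bolt amount to plain file I/O. A hedged sketch -- the file names and state layout here are assumptions, not the actual AI-DLC state format:

```shell
#!/usr/bin/env bash
# Bolt pattern sketch: persist progress before a session ends, reload it after.
state_dir=$(mktemp -d)   # stand-in for the real on-disk state directory

# End of session: write down where we are.
echo "unit-2"  > "$state_dir/current_unit"
echo "builder" > "$state_dir/current_hat"
echo "3"       > "$state_dir/iteration"

# Start of the next session: read it back and continue as if nothing happened.
unit=$(cat "$state_dir/current_unit")
hat=$(cat "$state_dir/current_hat")
iteration=$(cat "$state_dir/iteration")
echo "resuming $unit, $hat hat, iteration $iteration"

rm -rf "$state_dir"
```

Because the state lives on disk rather than in the model's context, a fresh session can always reconstruct where the last one stopped.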

How do these three loops work together?

The assembly line picks the next unit. The hat rotation builds that unit through plan, build, review cycles. And within each hat’s work session, the bolt ensures that even if the AI’s memory runs out, progress is never lost.

A single feature might involve 4 units, each going through 2-3 hat rotations, each rotation spanning 1-3 bolts. That’s potentially hours of autonomous building -- all from one planning conversation.

Act 4: Quality and Safety

Two systems keep everything on track: quality gates that enforce standards, and hooks that run silently in the background.

Quality Gates -- The Tollbooths

Every time the AI tries to stop, it must pass through a gate. Quality gates are harness-enforced — defined in frontmatter, run by hooks, no exceptions.

🚧 Quality Checkpoint 🚧

Enforced by quality-gate.sh on every Stop — the agent cannot bypass these.

tests: All pass / build: No errors / lint: No violations
→ Gate opens -- proceed to the next hat

tests: Failures! / build: Errors! / lint: Issues!
← Go back and fix -- cannot advance

How Quality Gates Work: Detection → Definition → Enforcement

Quality gates aren't hardcoded. They're discovered from your repo, written into version-controlled frontmatter, and enforced mechanically by the harness — all without any manual configuration.

Phase 1: Auto-Detected During /ai-dlc:elaborate

The elaborate-discover skill scans your repository for tooling and proposes the right quality gate commands. You confirm or customize them.

package.json → npm test, npm run typecheck, npm run lint
bun.lockb / bun.lock → bun test (overrides npm)
go.mod → go test ./..., go vet ./...
pyproject.toml → pytest, mypy .
Cargo.toml → cargo test, cargo clippy
Makefile → make test (if target exists)
Phase 2: Written to Intent & Unit Frontmatter

Confirmed gates are saved to intent.md. Builders can add unit-specific gates during construction — but never remove existing ones (the ratchet rule).

intent.md (intent-level defaults)
---
title: Add auth middleware
quality_gates:
  - name: tests
    command: npm test
  - name: typecheck
    command: npm run typecheck
  - name: lint
    command: npm run lint
---
unit-01-auth-middleware.md (unit additions)
---
title: Implement JWT validation
quality_gates:
  - name: auth-integration
    command: npm test -- --grep auth
---

Intent gates + unit gates are merged additively. All run on every Stop during this unit.
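The additive merge can be pictured as simple list concatenation (a sketch only -- the real hook parses YAML frontmatter):

```shell
#!/usr/bin/env bash
# Ratchet rule sketch: unit gates are appended to intent gates, never subtracted.
intent_gates='npm test
npm run typecheck
npm run lint'
unit_gates='npm test -- --grep auth'

merged="$intent_gates
$unit_gates"

gate_count=$(printf '%s\n' "$merged" | wc -l | tr -d ' ')
echo "$gate_count gates run on every Stop for this unit"   # 4
```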

Phase 3: Enforced on Every Stop During Construction

Whenever a Builder, Implementer, or Refactorer tries to stop, quality-gate.sh fires synchronously. It reads quality_gates: from intent and unit frontmatter, runs each command, and blocks the stop if any fail.

# Agent tries to stop after a coding session
quality-gate.sh fires (synchronous Stop hook)
  # Reads intent.md + unit frontmatter gates
  Running: npm test ... ✓ PASS
  Running: npm run typecheck ... ✗ FAIL — 3 type errors
BLOCKED — agent cannot stop. Must fix type errors first.
# After fixing type errors and retrying:
  Running: npm test ... ✓ PASS
  Running: npm run typecheck ... ✓ PASS
ALLOWED — all gates pass. Agent stops cleanly.

Planner, Reviewer, and Designer hats skip gate enforcement — they're not writing code. Only building hats (Builder, Implementer, Refactorer) are enforced.
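In spirit, the Stop hook boils down to: run every gate command, block the stop if any fail. A hedged sketch -- the gate commands are hardcoded stand-ins here, whereas the real quality-gate.sh reads them from frontmatter:

```shell
#!/usr/bin/env bash
# Run each gate command; a single failure blocks the agent from stopping.
gates='true
false'   # stand-ins for commands like "npm test" and "npm run typecheck"

all_pass=true
while IFS= read -r cmd; do
  [ -z "$cmd" ] && continue
  if bash -c "$cmd" >/dev/null 2>&1; then
    echo "PASS: $cmd"
  else
    echo "FAIL: $cmd"
    all_pass=false
  fi
done <<< "$gates"

if $all_pass; then status=allowed; else status=blocked; fi
echo "stop: $status"   # stop: blocked (the failing gate must be fixed first)
```

Because the hook runs synchronously and its verdict is mechanical, there is no way for the agent to talk its way past a red gate.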

Three hard gates stand between hats. The harness enforces them mechanically — the agent cannot bypass them:

1. Plan Gate

Planner must save a complete implementation plan before the Builder can start.

No building without a blueprint.

2. Quality Gate

Harness-enforced: quality_gates: in frontmatter are executed by quality-gate.sh on every Stop. The agent cannot advance until all gates pass.

No review of broken code.

3. Criteria Gate

Every success criterion verified with concrete evidence before marking the unit done.

No shipping without proof.

The Safety Net -- Hooks

Hooks are automated scripts that fire at specific moments during a session. They run silently. You never see them. But they keep everything honest.

SESSION START
A new AI session begins
⚙️ inject-context
Loads all saved state: current hat, plan, blockers, scratchpad, iteration number
AI reads files
Understanding the codebase
AI edits files
Writing implementation code
⚙️ workflow-guard
"Are you wearing a hat? Stay in your lane."
⚙️ prompt-guard
"Is this safe content? No injections, no jailbreaks."
AI runs a tool
Executing commands, running tests
⚙️ context-monitor
"How much memory is left? Getting low? Save your work."
AI spawns a subagent
Delegating to a specialist
⚙️ subagent-hook
"Inject AI-DLC context into the new agent so it knows the plan."
User tries /plan mode
Attempting to use a built-in command
⚙️ redirect-plan-mode
"Use /ai-dlc:elaborate instead -- that's the AI-DLC way."
SESSION END
Context window full, or work complete
⚙️ enforce-iteration
"Work remains! Call /ai-dlc:execute to continue." -- Ensures nothing falls through the cracks.

Why all this machinery?

AI is powerful but imperfect. Without guardrails, an AI might skip tests, write code outside its current task, or lose track of progress when a session ends. These hooks and gates create a safety net -- not to slow things down, but to keep the AI honest and productive. Think of them as the rules of the road that make autonomous driving possible.

Act 5: Finishing and Learning

All units are done. The code is written. The tests pass. But the story isn’t over yet. Four stages remain.

1. Integration Check

All the individual pieces are done. Now they need to work together.

The merged code is validated as a whole. Cross-unit tests run. Does the callback handler actually connect to the session manager? Does the login UI correctly call the callback endpoint? If issues are found, specific units get sent back for rework -- not the entire feature.

2. Delivery

🤖Claude
All units complete! Here’s what was built:
4 units implemented and reviewed
4/4 criteria verified with evidence
47 files changed across backend and frontend
23 tests added, all passing

Creating a pull request now...

🧑You
Review the PR. Check the diffs. Read the summary. The PR description includes everything -- what changed, why, how it was tested, and links to each unit’s criteria verification. Then approve and merge.
3. Operations

The code is deployed. Now it needs to be operated.

🤖Claude
I found 3 operational tasks defined during execution:
Scheduled: Token cleanup job -- runs daily, removes expired sessions
Reactive: Alert handler for auth failure spike -- triggers investigation playbook
Runbook: OAuth provider rotation -- step-by-step credential rotation procedure
🧑You
Review operational specs. Deploy scheduled jobs. Verify monitoring and alerts are wired up. The AI manages the operational lifecycle alongside the code it built.

Operations aren’t an afterthought -- they’re file-based specs created during execution, living alongside the code they support.

4. Reflection

This is what separates AI-DLC from “just running an AI.” The system learns from every cycle.

🤖Claude
Here’s what I learned from this cycle:
Metrics: 4 units, 12 iterations, 3 reviewer round-trips
Friction point: The builder had trouble with OAuth callback -- test fixtures for mock OAuth servers were missing
What worked: TDD-style approach for session management produced clean code on the first reviewer pass
Recommendations:
  • Add eslint-plugin-security to quality gates
  • Create a shared OAuth mock server fixture
  • Consider the adversarial workflow for security-sensitive features
🧑You
You validate the AI’s analysis. Then choose what’s next:
Apply recommendations / Start a follow-up / Archive and close

Why reflection matters: Each cycle makes the next one better. The AI remembers what worked, what didn’t, and adapts. The first feature takes the longest. Every subsequent one is faster and smoother.

Your Toolkit

Every command you need, organized by when you’ll reach for it

Before You Build

/ai-dlc:setup
Configure AI-DLC for your project
  • Auto-detects your VCS, hosting, CI/CD, and connected tools
  • Creates .ai-dlc/settings.yml with your preferences
  • You only run this once
/ai-dlc:ideate
Surface improvement ideas from your codebase
  • AI analyzes your code across 5 dimensions
  • Each idea survives adversarial filtering
  • Great for “what should we work on next?”
/ai-dlc:backlog
Parking lot for ideas not ready yet
  • add, list, review, promote
  • Your project’s idea shelf
/ai-dlc:seed
Plant ideas that surface at the right time
  • Save ideas with trigger conditions
  • Seeds auto-surface during relevant elaboration
/ai-dlc:autopilot
Full autonomous lifecycle in one command
  • Elaborate -> execute -> deliver end-to-end
  • Pauses on ambiguity, on plans with more than 5 units, and before creating a PR

While You Build

/ai-dlc:elaborate
Plan your work collaboratively

The main entry point: define intent, explore domain, decompose into units, set success criteria.

/ai-dlc:execute
Run the autonomous build loop

Picks up units from the DAG, spawns hatted agents, iterates until done.

/ai-dlc:refine
Change specs mid-construction

Preserves frontmatter and state, re-queues affected units.

/ai-dlc:quick
Skip everything for tiny fixes

Typos, config tweaks, import fixes. No state files, no subagents.

When Things Go Sideways

/ai-dlc:resume
Pick up where you left off
  • Lost your session? Context compacted?
  • Finds previous work from filesystem or git branches
  • Creates worktrees and restores state
/ai-dlc:reset
Start fresh
  • Clears all AI-DLC state
  • Preserves your git commits and branches
/ai-dlc:cleanup
Remove stale worktrees
  • Scans for orphaned and merged worktrees
  • Asks before deleting anything

After You Build

/ai-dlc:reflect
Analyze what happened

Produces reflection.md with what worked and settings-recommendations.md with improvements.

/ai-dlc:operate
Manage post-deployment operations

List, execute, deploy, monitor, and teardown operational tasks.

/ai-dlc:followup
Iterate on a completed intent

Creates a new intent that builds on a previous one, carrying forward all context.

/ai-dlc:compound
Capture learnings from this session

Analyzes git history and intent state, writes structured learning files.

Internal Skills (run automatically)

These fire behind the scenes -- you never call them directly:

/ai-dlc:advance, /ai-dlc:fail, /ai-dlc:integrate, /ai-dlc:elaborate-discover, /ai-dlc:elaborate-wireframes, /ai-dlc:elaborate-ticket-sync, /ai-dlc:fundamentals, /ai-dlc:completion-criteria, /ai-dlc:backpressure, /ai-dlc:blockers

The Cycle Continues

Every completed intent makes the next one better. Reflections feed into seeds. Seeds surface during elaboration. Compound learnings inform future builders. The methodology improves itself -- and your codebase -- with every cycle.

/plugin marketplace add thebushidocollective/ai-dlc
/plugin install ai-dlc@thebushidocollective-ai-dlc --scope project

From the Blog

Design as a First-Class Concern

AI-DLC now includes a design direction system and persistent project knowledge layer, making visual design and domain understanding part of the methodology — not an afterthought.

Design Providers and Knowledge Synthesis

AI-DLC gains first-class integration with six design tools and a persistent knowledge system that gives agents institutional memory across features.

First-Class Passes

AI-DLC now supports typed iteration through disciplinary lenses — design, product, dev — where each pass shapes how hats behave and which workflows are available.