
How to Run Test Design

Use TEA’s *test-design workflow to create comprehensive test plans with risk assessment and coverage strategies.


When to Run

System-level (Phase 3):

  • After architecture is complete
  • Before implementation-readiness gate
  • To validate architecture testability

Epic-level (Phase 4):

  • At the start of each epic
  • Before implementing stories in the epic
  • To identify epic-specific testing needs

Prerequisites

  • BMad Method installed
  • TEA agent available
  • For system-level: Architecture document complete
  • For epic-level: Epic defined with stories

Start a fresh chat and load the TEA (Test Architect) agent.

*test-design

TEA will ask if you want:

  • System-level - For architecture testability review (Phase 3)
  • Epic-level - For epic-specific test planning (Phase 4)

For system-level:

  • Point to your architecture document
  • Reference any ADRs (Architecture Decision Records)

For epic-level:

  • Specify which epic you’re planning
  • Reference the epic file with stories

TEA generates a comprehensive test design document.


System-Level Output (test-design-system.md)

  • Testability review of architecture
  • ADR → test mapping
  • Architecturally Significant Requirements (ASRs)
  • Environment needs
  • Test infrastructure recommendations
Epic-Level Output

  • Risk assessment for the epic
  • Test priorities
  • Coverage plan
  • Regression hotspots (for brownfield)
  • Integration risks
  • Mitigation strategies

Stage      Test Design Focus
Phase 3    System-level testability review
Phase 4    Per-epic risk assessment and test plan

Stage      Test Design Focus
Phase 3    System-level + existing test baseline
Phase 4    Regression hotspots, integration risks

Stage      Test Design Focus
Phase 3    Compliance-aware testability
Phase 4    Security/performance/compliance focus

Best Practices

  • Run system-level test-design right after architecture
  • Run epic-level test-design at the start of each epic
  • Update test design if ADRs change
  • Use the output to guide *atdd and *automate workflows
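
The practices above chain together across phases. A minimal sketch of the command sequence (exact prompts and file paths will vary by project):

```
# Phase 3 — after architecture is complete
*test-design    # choose "System-level"; point TEA at the architecture doc and ADRs

# Phase 4 — at the start of each epic
*test-design    # choose "Epic-level"; reference the epic file with stories
*atdd           # use the test design's priorities to drive acceptance tests
*automate       # expand automated coverage per the coverage plan
```

Rerunning *test-design after ADR changes keeps the downstream *atdd and *automate workflows aligned with the current architecture.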