# How to Run Trace with TEA

Use TEA's trace workflow for requirements traceability and quality gate decisions. This is a two-phase workflow: Phase 1 analyzes coverage, Phase 2 makes the go/no-go decision.
## When to Use This

### Phase 1: Requirements Traceability

- Map acceptance criteria to implemented tests
- Identify coverage gaps
- Prioritize missing tests
- Refresh coverage after each story/epic
### Phase 2: Quality Gate Decision

- Make go/no-go decision for release
- Validate coverage meets thresholds
- Document gate decision with evidence
- Support business-approved waivers
## Prerequisites

- BMad Method installed
- TEA agent available
- Requirements defined (stories, acceptance criteria, test design)
- Tests implemented
- For brownfield: Existing codebase with tests
### 1. Run the Trace Workflow

```
trace
```

### 2. Specify Phase

TEA will ask which phase you're running.
Phase 1: Requirements Traceability
- Analyze coverage
- Identify gaps
- Generate recommendations
Phase 2: Quality Gate Decision
- Make PASS/CONCERNS/FAIL/WAIVED decision
- Requires Phase 1 complete
Typical flow: Run Phase 1 first, review gaps, then run Phase 2 for gate decision.
## Phase 1: Requirements Traceability

### 3. Provide Requirements Source

TEA will ask where requirements are defined.
Options:
| Source | Example | Best For |
|---|---|---|
| Story file | story-profile-management.md | Single story coverage |
| Test design | test-design-epic-1.md | Epic coverage |
| PRD | PRD.md | System-level coverage |
| Multiple | All of the above | Comprehensive analysis |
Example Response:

```
Requirements:
- story-profile-management.md (acceptance criteria)
- test-design-epic-1.md (test priorities)
```

### 4. Specify Test Location

TEA will ask where tests are located.
Example:
```
Test location: tests/

Include:
- tests/api/
- tests/e2e/
```

### 5. Specify Focus Areas (Optional)

Example:
```
Focus on:
- Profile CRUD operations
- Validation scenarios
- Authorization checks
```

### 6. Review Coverage Matrix

TEA generates a comprehensive traceability matrix.
**Traceability Matrix (`traceability-matrix.md`):**

```markdown
# Requirements Traceability Matrix

**Date:** 2026-01-13
**Scope:** Epic 1 - User Profile Management
**Phase:** Phase 1 (Traceability Analysis)
## Coverage Summary
| Metric                 | Count | Percentage |
| ---------------------- | ----- | ---------- |
| **Total Requirements** | 15    | 100%       |
| **Full Coverage**      | 11    | 73%        |
| **Partial Coverage**   | 3     | 20%        |
| **No Coverage**        | 1     | 7%         |
### By Priority
| Priority | Total | Covered | Percentage         |
| -------- | ----- | ------- | ------------------ |
| **P0**   | 5     | 5       | 100% ✅            |
| **P1**   | 6     | 5       | 83% ⚠️             |
| **P2**   | 3     | 1       | 33% ⚠️             |
| **P3**   | 1     | 0       | 0% ✅ (acceptable) |
---
## Detailed Traceability
### ✅ Requirement 1: User can view their profile (P0)

**Acceptance Criteria:**

- User navigates to /profile
- Profile displays name, email, avatar
- Data is current (not cached)
**Test Coverage:** FULL ✅

**Tests:**

- `tests/e2e/profile-view.spec.ts:15` - "should display profile page with current data"
  - ✅ Navigates to /profile
  - ✅ Verifies name, email visible
  - ✅ Verifies avatar displayed
  - ✅ Validates data freshness via API assertion
- `tests/api/profile.spec.ts:8` - "should fetch user profile via API"
  - ✅ Calls GET /api/profile
  - ✅ Validates response schema
  - ✅ Confirms all fields present
---
### ⚠️ Requirement 2: User can edit profile (P0)

**Acceptance Criteria:**

- User clicks "Edit Profile"
- Can modify name, email, bio
- Can upload avatar
- Changes are persisted
- Success message shown

**Test Coverage:** PARTIAL ⚠️

**Tests:**

- `tests/e2e/profile-edit.spec.ts:22` - "should edit and save profile"
  - ✅ Clicks edit button
  - ✅ Modifies name and email
  - ⚠️ **Does NOT test bio field**
  - ❌ **Does NOT test avatar upload**
  - ✅ Verifies persistence
  - ✅ Verifies success message
- `tests/api/profile.spec.ts:25` - "should update profile via PATCH"
  - ✅ Calls PATCH /api/profile
  - ✅ Validates update response
  - ⚠️ **Only tests name/email, not bio/avatar**

**Missing Coverage:**

- Bio field not tested in E2E or API
- Avatar upload not tested

**Gap Severity:** HIGH (P0 requirement, critical path)
---
### ✅ Requirement 3: Invalid email shows validation error (P1)

**Acceptance Criteria:**

- Enter invalid email format
- See error message
- Cannot save changes

**Test Coverage:** FULL ✅

**Tests:**

- `tests/e2e/profile-edit.spec.ts:45` - "should show validation error for invalid email"
- `tests/api/profile.spec.ts:50` - "should return 400 for invalid email"
---
### ❌ Requirement 15: Profile export as PDF (P2)

**Acceptance Criteria:**

- User clicks "Export Profile"
- PDF downloads with profile data

**Test Coverage:** NONE ❌

**Gap Analysis:**

- **Priority:** P2 (medium)
- **Risk:** Low (non-critical feature)
- **Recommendation:** Add in next iteration (not blocking for release)
---
## Gap Prioritization
### Critical Gaps (Must Fix Before Release)
| Gap | Requirement              | Priority | Risk | Recommendation      |
| --- | ------------------------ | -------- | ---- | ------------------- |
| 1   | Bio field not tested     | P0       | High | Add E2E + API tests |
| 2   | Avatar upload not tested | P0       | High | Add E2E + API tests |

**Estimated Effort:** 3 hours
**Owner:** QA team
**Deadline:** Before release
### Non-Critical Gaps (Can Defer)
| Gap | Requirement               | Priority | Risk | Recommendation      |
| --- | ------------------------- | -------- | ---- | ------------------- |
| 3   | Profile export not tested | P2       | Low  | Add in v1.3 release |

**Estimated Effort:** 2 hours
**Owner:** QA team
**Deadline:** Next release (February)
```

### Recommendations

#### 1. Add Bio Field Tests

**Tests Needed (Vanilla Playwright):**

```typescript
// tests/e2e/profile-edit.spec.ts
import { test, expect } from '@playwright/test';

test('should edit bio field', async ({ page }) => {
  await page.goto('/profile');
  await page.getByRole('button', { name: 'Edit' }).click();
  await page.getByLabel('Bio').fill('New bio text');
  await page.getByRole('button', { name: 'Save' }).click();
  await expect(page.getByText('New bio text')).toBeVisible();
});

// tests/api/profile.spec.ts
test('should update bio via API', async ({ request }) => {
  const response = await request.patch('/api/profile', {
    data: { bio: 'Updated bio' }
  });
  expect(response.ok()).toBeTruthy();
  const { bio } = await response.json();
  expect(bio).toBe('Updated bio');
});
```

**With Playwright Utils:**
```typescript
// tests/e2e/profile-edit.spec.ts
import { test } from '../support/fixtures'; // Composed with authToken
import { expect } from '@playwright/test';

test('should edit bio field', async ({ page, authToken }) => {
  await page.goto('/profile');
  await page.getByRole('button', { name: 'Edit' }).click();
  await page.getByLabel('Bio').fill('New bio text');
  await page.getByRole('button', { name: 'Save' }).click();
  await expect(page.getByText('New bio text')).toBeVisible();
});
```

```typescript
// tests/api/profile.spec.ts
import { test as base, expect, mergeTests } from '@playwright/test';
import { test as apiRequestFixture } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { createAuthFixtures } from '@seontechnologies/playwright-utils/auth-session';

// Merge API request + auth fixtures
const authFixtureTest = base.extend(createAuthFixtures());
const test = mergeTests(apiRequestFixture, authFixtureTest);

test('should update bio via API', async ({ apiRequest, authToken }) => {
  const { status, body } = await apiRequest({
    method: 'PATCH',
    path: '/api/profile',
    body: { bio: 'Updated bio' },
    headers: { Authorization: `Bearer ${authToken}` }
  });

  expect(status).toBe(200);
  expect(body.bio).toBe('Updated bio');
});
```

**Note:** `authToken` requires auth-session fixture setup. See Integrate Playwright Utils.
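The `../support/fixtures` import above assumes a small composition file in your test suite. A minimal sketch of what such a file might look like; the path and exact composition are assumptions rather than something TEA prescribes (see Integrate Playwright Utils for the real setup):

```typescript
// tests/support/fixtures.ts (hypothetical composition file)
import { test as base, mergeTests } from '@playwright/test';
import { createAuthFixtures } from '@seontechnologies/playwright-utils/auth-session';

// Extend the base test with auth-session fixtures so specs receive authToken
const authFixtureTest = base.extend(createAuthFixtures());

// Export one merged test object for specs to import
export const test = mergeTests(base, authFixtureTest);
export { expect } from '@playwright/test';
```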
#### 2. Add Avatar Upload Tests

**Tests Needed:**
```typescript
import { test, expect } from '@playwright/test';

test('should upload avatar image', async ({ page }) => {
  await page.goto('/profile');
  await page.getByRole('button', { name: 'Edit' }).click();

  // Upload file
  await page.setInputFiles('[type="file"]', 'fixtures/avatar.png');
  await page.getByRole('button', { name: 'Save' }).click();

  // Verify uploaded image displays
  await expect(page.locator('img[alt="Profile avatar"]')).toBeVisible();
});
```

```typescript
// tests/api/profile.spec.ts
import { test, expect } from '@playwright/test';
import fs from 'fs/promises';

test('should accept valid image upload', async ({ request }) => {
  const response = await request.post('/api/profile/avatar', {
    multipart: {
      file: {
        name: 'avatar.png',
        mimeType: 'image/png',
        buffer: await fs.readFile('fixtures/avatar.png')
      }
    }
  });
  expect(response.ok()).toBeTruthy();
});
```

### Next Steps
After reviewing traceability:

- **Fix critical gaps** - Add tests for P0/P1 requirements
- **Run `test-review`** - Ensure new tests meet quality standards
- **Run Phase 2** - Make gate decision after gaps addressed
---
## Phase 2: Quality Gate Decision
After Phase 1 coverage analysis is complete, run Phase 2 for the gate decision.
**Prerequisites:**

- Phase 1 traceability matrix complete
- Test execution results available

**Note:** Phase 2 is skipped if test execution results aren't provided; the workflow requires actual test run results to make a gate decision.

### 7. Run Phase 2

```
trace
```
Select "Phase 2: Quality Gate Decision"
### 8. Provide Additional Context
TEA will ask for:
**Gate Type:**

- Story gate (small release)
- Epic gate (larger release)
- Release gate (production deployment)
- Hotfix gate (emergency fix)

**Decision Mode:**

- **Deterministic** - Rule-based (coverage %, quality scores)
- **Manual** - Team decision with TEA guidance

**Example:**

```
Gate type: Epic gate
Decision mode: Deterministic
```
### 9. Provide Supporting Evidence
TEA will request:
**Phase 1 Results:** `traceability-matrix.md` (from Phase 1)

**Test Quality (Optional):** `test-review.md` (from `test-review`)

**NFR Assessment (Optional):** `nfr-assessment.md` (from `nfr-assess`)
### 10. Review Gate Decision
TEA makes an evidence-based gate decision and writes it to a separate file.
#### Gate Decision (`gate-decision-{gate_type}-{story_id}.md`):
```markdown
# Phase 2: Quality Gate Decision

**Gate Type:** Epic Gate
**Decision:** PASS ✅
**Date:** 2026-01-13
**Approvers:** Product Manager, Tech Lead, QA Lead
## Decision Summary
**Verdict:** Ready to release
**Evidence:**

- P0 coverage: 100% (5/5 requirements)
- P1 coverage: 100% (6/6 requirements)
- P2 coverage: 33% (1/3 requirements) - acceptable
- Test quality score: 84/100
- NFR assessment: PASS
## Coverage Analysis
| Priority | Required Coverage | Actual Coverage | Status                 |
| -------- | ----------------- | --------------- | ---------------------- |
| **P0**   | 100%              | 100%            | ✅ PASS                |
| **P1**   | 90%               | 100%            | ✅ PASS                |
| **P2**   | 50%               | 33%             | ⚠️ Below (acceptable)  |
| **P3**   | 20%               | 0%              | ✅ PASS (low priority) |

**Rationale:**

- All critical path (P0) requirements fully tested
- All high-value (P1) requirements fully tested
- P2 gap (profile export) is low risk and deferred to next release
## Quality Metrics
| Metric             | Threshold | Actual | Status |
| ------------------ | --------- | ------ | ------ |
| P0/P1 Coverage     | >95%      | 100%   | ✅     |
| Test Quality Score | >80       | 84     | ✅     |
| NFR Status         | PASS      | PASS   | ✅     |
## Risks and Mitigations
### Accepted Risks
**Risk 1: Profile export not tested (P2)**

- **Impact:** Medium (users can't export profile)
- **Mitigation:** Feature flag disabled by default
- **Plan:** Add tests in v1.3 release (February)
- **Monitoring:** Track feature flag usage
## Approvals
- [x] **Product Manager** - Business requirements met (Approved: 2026-01-13)
- [x] **Tech Lead** - Technical quality acceptable (Approved: 2026-01-13)
- [x] **QA Lead** - Test coverage sufficient (Approved: 2026-01-13)
## Next Steps
### Deployment

1. Merge to main branch
2. Deploy to staging
3. Run smoke tests in staging
4. Deploy to production
5. Monitor for 24 hours

### Monitoring

- Set alerts for profile endpoint (P99 > 200ms)
- Track error rates (target: <0.1%)
- Monitor profile export feature flag usage

### Future Work

- Add profile export tests (v1.3)
- Expand P2 coverage to 50%
```

## Gate Decision Rules
TEA uses deterministic rules when `decision_mode = "deterministic"`:
| P0 Coverage | P1 Coverage | Overall Coverage | Decision |
|---|---|---|---|
| 100% | ≥90% | ≥80% | PASS ✅ |
| 100% | 80-89% | ≥80% | CONCERNS ⚠️ |
| <100% | Any | Any | FAIL ❌ |
| Any | <80% | Any | FAIL ❌ |
| Any | Any | <80% | FAIL ❌ |
| Any | Any | Any | WAIVED ⚖️ (with approval) |
Detailed Rules:
- PASS: P0=100%, P1≥90%, Overall≥80%
- CONCERNS: P0=100%, P1 80-89%, Overall≥80% (below threshold but not critical)
- FAIL: P0<100% OR P1<80% OR Overall<80% (critical gaps)
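These rules are simple enough to mirror outside TEA, for example in a CI helper. A minimal TypeScript sketch, assuming you already have the coverage percentages as numbers (WAIVED is a business override, so it is not computed here):

```typescript
type GateDecision = 'PASS' | 'CONCERNS' | 'FAIL';

interface Coverage {
  p0: number; // % of P0 requirements covered
  p1: number; // % of P1 requirements covered
  overall: number; // % of all requirements covered
}

// Mirrors the deterministic rules above; WAIVED requires human approval
function decideGate({ p0, p1, overall }: Coverage): GateDecision {
  if (p0 < 100 || p1 < 80 || overall < 80) return 'FAIL';
  if (p1 >= 90) return 'PASS';
  return 'CONCERNS'; // P0 = 100%, P1 in 80-89%, overall >= 80%
}

console.log(decideGate({ p0: 100, p1: 85, overall: 88 })); // "CONCERNS"
```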
PASS ✅: All criteria met, ready to release

CONCERNS ⚠️: Some criteria not met, but:
- Mitigation plan exists
- Risk is acceptable
- Team approves proceeding
- Monitoring in place
FAIL ❌: Critical criteria not met:
- P0 requirements not tested
- Critical security vulnerabilities
- System is broken
- Cannot deploy
WAIVED ⚖️: Business approves proceeding despite concerns:
- Documented business justification
- Accepted risks quantified
- Approver signatures
- Future plans documented
### Example CONCERNS Decision

```markdown
## Decision Summary
**Verdict:** CONCERNS ⚠️ - Proceed with monitoring

**Evidence:**

- P0 coverage: 100%
- P1 coverage: 85% (below 90% target)
- Test quality: 78/100 (below 80 target)

**Gaps:**

- 1 P1 requirement not tested (avatar upload)
- Test quality score slightly below threshold

**Mitigation:**

- Avatar upload not critical for v1.2 launch
- Test quality issues are minor (no flakiness)
- Monitoring alerts configured

**Approvals:**

- Product Manager: APPROVED (business priority to launch)
- Tech Lead: APPROVED (technical risk acceptable)
```

### Example FAIL Decision
```markdown
## Decision Summary

**Verdict:** FAIL ❌ - Cannot release

**Evidence:**

- P0 coverage: 60% (below 95% threshold)
- Critical security vulnerability (CVE-2024-12345)
- Test quality: 55/100

**Blockers:**

1. **Login flow not tested** (P0 requirement)
   - Critical path completely untested
   - Must add E2E and API tests
2. **SQL injection vulnerability**
   - Critical security issue
   - Must fix before deployment

**Actions Required:**

1. Add login tests (QA team, 2 days)
2. Fix SQL injection (backend team, 1 day)
3. Re-run security scan (DevOps, 1 hour)
4. Re-run trace after fixes

**Cannot proceed until all blockers resolved.**
```

## What You Get
### Phase 1: Traceability Matrix
Section titled âPhase 1: Traceability Matrixâ- Requirement-to-test mapping
- Coverage classification (FULL/PARTIAL/NONE)
- Gap identification with priorities
- Actionable recommendations
### Phase 2: Gate Decision

- Go/no-go verdict (PASS/CONCERNS/FAIL/WAIVED)
- Evidence summary
- Approval signatures
- Next steps and monitoring plan
## Usage Patterns

### Greenfield Projects

**Phase 3:**

After architecture complete:

1. Run test-design (system-level)
2. Run trace Phase 1 (baseline)
3. Use for implementation-readiness gate

**Phase 4:**
After each epic/story:

1. Run trace Phase 1 (refresh coverage)
2. Identify gaps
3. Add missing tests

**Release Gate:**

Before deployment:

1. Run trace Phase 1 (final coverage check)
2. Run trace Phase 2 (make gate decision)
3. Get approvals
4. Deploy (if PASS or WAIVED)

### Brownfield Projects
**Phase 2:**

Before planning new work:

1. Run trace Phase 1 (establish baseline)
2. Understand existing coverage
3. Plan testing strategy

**Phase 4:**

After each epic/story:

1. Run trace Phase 1 (refresh)
2. Compare to baseline
3. Track coverage improvement

**Release Gate:**

Before deployment:

1. Run trace Phase 1 (final check)
2. Run trace Phase 2 (gate decision)
3. Compare to baseline
4. Deploy if coverage maintained or improved

### Run Phase 1 Frequently
Don't wait until the release gate:

```
After Story 1: trace Phase 1 (identify gaps early)
After Story 2: trace Phase 1 (refresh)
After Story 3: trace Phase 1 (refresh)
Before Release: trace Phase 1 + Phase 2 (final gate)
```

**Benefit:** Catch gaps early when they're cheap to fix.
### Use Coverage Trends

Track improvement over time:

```markdown
## Coverage Trend

| Date       | Epic     | P0/P1 Coverage | Quality Score | Status         |
| ---------- | -------- | -------------- | ------------- | -------------- |
| 2026-01-01 | Baseline | 45%            | -             | Starting point |
| 2026-01-08 | Epic 1   | 78%            | 72            | Improving      |
| 2026-01-15 | Epic 2   | 92%            | 84            | Near target    |
| 2026-01-20 | Epic 3   | 100%           | 88            | Ready!         |
```

### Set Coverage Targets by Priority
Don't aim for 100% across all priorities:
Recommended Targets:
- P0: 100% (critical path must be tested)
- P1: 90% (high-value scenarios)
- P2: 50% (nice-to-have features)
- P3: 20% (low-value edge cases)
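If you want these targets in code, for a dashboard or a CI check, a small map is enough. A minimal sketch using the percentages from the list above; the names are illustrative:

```typescript
type Priority = 'P0' | 'P1' | 'P2' | 'P3';

// Recommended coverage targets (percentages), taken from the list above
const COVERAGE_TARGETS: Record<Priority, number> = { P0: 100, P1: 90, P2: 50, P3: 20 };

// Returns the priorities whose actual coverage falls below target
function belowTarget(actual: Record<Priority, number>): Priority[] {
  return (Object.keys(COVERAGE_TARGETS) as Priority[]).filter(
    (priority) => actual[priority] < COVERAGE_TARGETS[priority],
  );
}

console.log(belowTarget({ P0: 100, P1: 83, P2: 33, P3: 0 })); // [ "P1", "P2", "P3" ]
```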
### Use Classification Strategically

FULL ✅: Requirement completely tested
- E2E test covers full user workflow
- API test validates backend behavior
- All acceptance criteria covered
PARTIAL ⚠️: Some aspects tested
- E2E test exists but missing scenarios
- API test exists but incomplete
- Some acceptance criteria not covered
NONE ❌: No tests exist
- Requirement identified but not tested
- May be intentional (low priority) or oversight
Classification helps prioritize:
- Fix NONE coverage for P0/P1 requirements first
- Enhance PARTIAL coverage for P0 requirements
- Accept PARTIAL or NONE for P2/P3 if time-constrained
### Automate Gate Decisions

Use traceability in CI:

```yaml
- name: Check coverage
  run: |
    # Run trace Phase 1
    # Parse coverage percentages
    if [ $P0_COVERAGE -lt 95 ]; then
      echo "P0 coverage below 95%"
      exit 1
    fi
```
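The step above leaves the parsing to you. One way to fill it in is a small Node/TypeScript helper that pulls the percentage out of the matrix's "By Priority" table. This is a sketch only, assuming the matrix keeps the table format shown earlier; the script path and threshold are illustrative:

```typescript
// scripts/check-coverage.ts (hypothetical helper invoked by the CI step)
import { readFileSync } from 'node:fs';

const matrix = readFileSync('traceability-matrix.md', 'utf8');

// Pull the covered percentage from a "By Priority" row, e.g. "| **P0** | 5 | 5 | 100% ... |"
function coverageFor(priority: string): number {
  const row = new RegExp(`\\*\\*${priority}\\*\\*\\s*\\|[^|]*\\|[^|]*\\|\\s*(\\d+)%`);
  const match = matrix.match(row);
  if (!match) throw new Error(`No coverage row found for ${priority}`);
  return Number(match[1]);
}

const p0 = coverageFor('P0');
if (p0 < 95) {
  console.error(`P0 coverage below 95% (actual: ${p0}%)`);
  process.exit(1);
}
console.log(`P0 coverage OK: ${p0}%`);
```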
### Document Waivers Clearly

If proceeding with WAIVED:
Required:
```markdown
## Waiver Documentation

**Waived By:** VP Engineering, Product Lead
**Date:** 2026-01-15
**Gate Type:** Release Gate v1.2

**Justification:**
Business critical to launch by Q1 for investor demo.
Performance concerns acceptable for initial user base.

**Conditions:**

- Set monitoring alerts for P99 > 300ms
- Plan optimization for v1.3 (due February 28)
- Monitor user feedback closely

**Accepted Risks:**

- 1% of users may experience 350ms latency
- Avatar upload feature incomplete
- Profile export deferred to next release

**Quantified Impact:**

- Affects <100 users at current scale
- Workaround exists (manual export)
- Monitoring will catch issues early

**Approvals:**

- VP Engineering: [Signature] Date: 2026-01-15
- Product Lead: [Signature] Date: 2026-01-15
- QA Lead: [Signature] Date: 2026-01-15
```

## Common Issues
### Too Many Gaps to Fix

**Problem:** Phase 1 shows 50 uncovered requirements.

**Solution:** Prioritize ruthlessly:
- Fix all P0 gaps (critical path)
- Fix high-risk P1 gaps
- Accept low-risk P1 gaps with mitigation
- Defer all P2/P3 gaps
Don't try to fix everything - focus on what matters for release.
### Can't Find Test Coverage

**Problem:** Tests exist but TEA can't map them to requirements.

**Cause:** Tests don't reference requirements.

**Solution:** Add traceability comments:
```typescript
import { test, expect } from '@playwright/test';

test('should display profile', async ({ page }) => {
  // Covers: Requirement 1 - User can view profile
  // Acceptance criteria: Navigate to /profile, see name/email
  await page.goto('/profile');
  await expect(page.getByText('Test User')).toBeVisible();
});
```

Or use test IDs:
```typescript
test('[REQ-1] should display profile', async ({ page }) => {
  // Test code...
});
```
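With tags like this in place, you can build a rough inventory of which requirement IDs your specs reference. This is not a TEA feature, just a hypothetical helper, assuming tags follow the `[REQ-n]` pattern:

```typescript
// scripts/list-req-tags.ts (hypothetical helper, not part of TEA)
import { readFileSync, readdirSync } from 'node:fs';
import { join } from 'node:path';

const covered = new Set<string>();

// Walk tests/ recursively (the recursive option needs Node 18.17+)
for (const entry of readdirSync('tests', { recursive: true })) {
  const relativePath = String(entry);
  if (!relativePath.endsWith('.spec.ts')) continue;

  const source = readFileSync(join('tests', relativePath), 'utf8');
  for (const match of source.matchAll(/\[REQ-(\d+)\]/g)) {
    covered.add(`REQ-${match[1]}`);
  }
}

console.log('Requirements referenced by tests:', [...covered].sort());
```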
### Unclear What "FULL" vs "PARTIAL" Means

FULL ✅: All acceptance criteria tested
```
Requirement: User can edit profile
Acceptance criteria:
  - Can modify name ✅ Tested
  - Can modify email ✅ Tested
  - Can upload avatar ✅ Tested
  - Changes persist ✅ Tested

Result: FULL coverage
```

PARTIAL ⚠️: Some criteria tested, some not

```
Requirement: User can edit profile
Acceptance criteria:
  - Can modify name ✅ Tested
  - Can modify email ✅ Tested
  - Can upload avatar ❌ Not tested
  - Changes persist ✅ Tested

Result: PARTIAL coverage (3/4 criteria)
```

### Gate Decision Unclear
**Problem:** Not sure if PASS or CONCERNS is appropriate.

**Guideline:**

Use PASS ✅ if:
- All P0 requirements 100% covered
- P1 requirements >90% covered
- No critical issues
- NFRs met
Use CONCERNS ⚠️ if:
- P1 coverage 85-90% (close to threshold)
- Minor quality issues (score 70-79)
- NFRs have mitigation plans
- Team agrees risk is acceptable
Use FAIL ❌ if:
- P0 coverage <100% (critical path gaps)
- P1 coverage <85%
- Critical security/performance issues
- No mitigation possible
When in doubt, use CONCERNS and document the risk.
## Related Guides

- How to Run Test Design - Provides requirements for traceability
- How to Run Test Review - Quality scores feed the gate decision
- How to Run NFR Assessment - NFR status feeds the gate decision
## Understanding the Concepts

- Risk-Based Testing - Why P0 vs P3 matters
- TEA Overview - Gate decisions in context
## Reference

- Command: `*trace` - Full command reference
- TEA Configuration - Config options
Generated with BMad Method - TEA (Test Architect)