# Using TEA with Existing Tests (Brownfield)

Use TEA on brownfield projects (existing codebases with legacy tests) to establish coverage baselines, identify gaps, and improve test quality without starting from scratch.
## When to Use This

- Existing codebase with some tests already written
- Legacy test suite needs quality improvement
- Adding features to existing application
- Need to understand current test coverage
- Want to prevent regression as you add features
## Prerequisites

- BMad Method installed
- TEA agent available
- Existing codebase with tests (even if incomplete or low quality)
- Tests run successfully (or at least can be executed)
Note: If your codebase is completely undocumented, run `document-project` first to create baseline documentation.
## Brownfield Strategy

### Phase 1: Establish Baseline

Understand what you have before changing anything.

#### Step 1: Baseline Coverage with `trace`

Run `trace` Phase 1 to map existing tests to requirements:

```
trace
```

Select: Phase 1 (Requirements Traceability)

Provide:

- Existing requirements docs (PRD, user stories, feature specs)
- Test location (`tests/` or wherever tests live)
- Focus areas (specific features if large codebase)
Output: `traceability-matrix.md` showing:
- Which requirements have tests
- Which requirements lack coverage
- Coverage classification (FULL/PARTIAL/NONE)
- Gap prioritization
Example Baseline:

```md
# Baseline Coverage (Before Improvements)

**Total Requirements:** 50
**Full Coverage:** 15 (30%)
**Partial Coverage:** 20 (40%)
**No Coverage:** 15 (30%)

**By Priority:**

- P0: 50% coverage (5/10) ❌ Critical gap
- P1: 40% coverage (8/20) ⚠️ Needs improvement
- P2: 20% coverage (2/10) ✅ Acceptable
```

This baseline becomes your improvement target.
#### Step 2: Quality Audit with `test-review`

Run `test-review` on existing tests:

```
test-review tests/
```

Output: `test-review.md` with quality score and issues.
Common Brownfield Issues:

- Hard waits everywhere (`page.waitForTimeout(5000)`)
- Fragile CSS selectors (`.class > div:nth-child(3)`)
- No test isolation (tests depend on execution order)
- Try-catch for flow control
- Tests don't clean up (leave test data in DB)
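Two of these issues — order-dependent tests and leftover test data — often share one root cause: tests reusing the same hard-coded records. A minimal sketch of one common fix, generating unique per-test data (the helper name and ID format here are illustrative, not a TEA or Playwright Utils API):

```typescript
// Illustrative helper: give every test its own data so no test depends on
// execution order or on rows left behind by a previous run.
let seq = 0;

function uniqueTestId(prefix: string): string {
  seq += 1;
  // Timestamp + counter keeps IDs unique across runs and within a run.
  return `${prefix}-${Date.now().toString(36)}-${seq}`;
}

// Each test creates its own user instead of sharing a hard-coded one:
const email = `${uniqueTestId('checkout-user')}@example.test`;
console.log(email.startsWith('checkout-user-')); // true
```

Combined with per-test cleanup (delete what you create), this removes most order-dependence without rewriting the suite.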
Example Baseline Quality:

```md
# Quality Score: 55/100

**Critical Issues:** 12

- 8 hard waits
- 4 conditional flow control

**Recommendations:** 25

- Extract fixtures
- Improve selectors
- Add network assertions
```

This shows where to focus improvement efforts.
### Phase 2: Prioritize Improvements

Don't try to fix everything at once.

#### Focus on Critical Path First

**Priority 1: P0 Requirements**

Goal: Get P0 coverage to 100%

Actions:

1. Identify P0 requirements with no tests (from `trace`)
2. Run `automate` to generate tests for missing P0 scenarios
3. Fix critical quality issues in P0 tests (from `test-review`)

**Priority 2: Fix Flaky Tests**

Goal: Eliminate flakiness

Actions:

1. Identify tests with hard waits (from `test-review`)
2. Replace with network-first patterns
3. Run burn-in loops to verify stability

Example Modernization:
Before (Flaky - Hard Waits):
test('checkout completes', async ({ page }) => { await page.click('button[name="checkout"]'); await page.waitForTimeout(5000); // â Flaky await expect(page.locator('.confirmation')).toBeVisible();});After (Network-First - Vanilla):
```ts
test('checkout completes', async ({ page }) => {
  const checkoutPromise = page.waitForResponse(
    (resp) => resp.url().includes('/api/checkout') && resp.ok()
  );
  await page.click('button[name="checkout"]');
  await checkoutPromise; // ✅ Deterministic
  await expect(page.locator('.confirmation')).toBeVisible();
});
```

After (With Playwright Utils - Cleaner API):
```ts
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';

test('checkout completes', async ({ page, interceptNetworkCall }) => {
  // Use interceptNetworkCall for cleaner network interception
  const checkoutCall = interceptNetworkCall({
    method: 'POST',
    url: '**/api/checkout',
  });

  await page.click('button[name="checkout"]');

  // Wait for response (automatic JSON parsing)
  const { status, responseJson: order } = await checkoutCall;

  // Validate API response
  expect(status).toBe(200);
  expect(order.status).toBe('confirmed');

  // Validate UI
  await expect(page.locator('.confirmation')).toBeVisible();
});
```

Playwright Utils Benefits:
- `interceptNetworkCall` for cleaner network interception
- Automatic JSON parsing (`responseJson` ready to use)
- No manual `await response.json()`
- Glob pattern matching (`**/api/checkout`)
- Cleaner, more maintainable code
For automatic error detection, use the `network-error-monitor` fixture separately. See Integrate Playwright Utils.
**Priority 3: P1 Requirements**

Goal: Get P1 coverage to 80%+

Actions:

1. Generate tests for highest-risk P1 gaps
2. Improve test quality incrementally

#### Create Improvement Roadmap
```md
# Test Improvement Roadmap

## Week 1: Critical Path (P0)

- [ ] Add 5 missing P0 tests (Epic 1: Auth)
- [ ] Fix 8 hard waits in auth tests
- [ ] Verify P0 coverage = 100%

## Week 2: Flakiness

- [ ] Replace all hard waits with network-first
- [ ] Fix conditional flow control
- [ ] Run burn-in loops (target: 0 failures in 10 runs)

## Week 3: High-Value Coverage (P1)

- [ ] Add 10 missing P1 tests
- [ ] Improve selector resilience
- [ ] P1 coverage target: 80%

## Week 4: Quality Polish

- [ ] Extract fixtures for common patterns
- [ ] Add network assertions
- [ ] Quality score target: 75+
```

### Phase 3: Incremental Improvement
Apply TEA workflows to new work while improving legacy tests.

#### For New Features (Greenfield Within Brownfield)

Use full TEA workflow:

1. `test-design` (epic-level) - Plan tests for new feature
2. `atdd` - Generate failing tests first (TDD)
3. Implement feature
4. `automate` - Expand coverage
5. `test-review` - Ensure quality

Benefits:
- New code has high-quality tests from day one
- Gradually raises overall quality
- Team learns good patterns
#### For Bug Fixes (Regression Prevention)

Add regression tests:

1. Reproduce bug with failing test
2. Fix bug
3. Verify test passes
4. Run `test-review` on regression test
5. Add to regression test suite

#### For Refactoring (Regression Safety)

Before refactoring:

1. Run `trace` - Baseline coverage
2. Note current coverage %
3. Refactor code
4. Run `trace` - Verify coverage maintained
5. No coverage should decrease

### Phase 4: Continuous Improvement
Track improvement over time.

#### Quarterly Quality Audits

```
Q1 Baseline:
Coverage: 30%
Quality Score: 55/100
Flakiness: 15% fail rate

Q2 Target:
Coverage: 50% (focus on P0)
Quality Score: 65/100
Flakiness: 5%

Q3 Target:
Coverage: 70%
Quality Score: 75/100
Flakiness: 1%

Q4 Target:
Coverage: 85%
Quality Score: 85/100
Flakiness: <0.5%
```

## Brownfield-Specific Tips
### Don't Rewrite Everything

Common mistake:

> "Our tests are bad, let's delete them all and start over!"

Better approach:

> "Our tests are bad, let's:
> 1. Keep tests that work (even if not perfect)
> 2. Fix critical quality issues incrementally
> 3. Add tests for gaps
> 4. Gradually improve over time"

Why:
- Rewriting is risky (might lose coverage)
- Incremental improvement is safer
- Team learns gradually
- Business value delivered continuously
### Use Regression Hotspots

Identify regression-prone areas:

```md
## Regression Hotspots

**Based on:**

- Bug reports (last 6 months)
- Customer complaints
- Code complexity (cyclomatic complexity >10)
- Frequent changes (git log analysis)

**High-Risk Areas:**

1. Authentication flow (12 bugs in 6 months)
2. Checkout process (8 bugs)
3. Payment integration (6 bugs)

**Test Priority:**

- Add regression tests for these areas FIRST
- Ensure P0 coverage before touching code
```

### Quarantine Flaky Tests
Don't let flaky tests block improvement:

```ts
// Mark flaky tests with .skip temporarily
test.skip('flaky test - needs fixing', async ({ page }) => {
  // TODO: Fix hard wait on line 45
  // TODO: Add network-first pattern
});
```

Track quarantined tests:

```md
# Quarantined Tests

| Test                | Reason                     | Owner    | Target Fix Date |
| ------------------- | -------------------------- | -------- | --------------- |
| checkout.spec.ts:45 | Hard wait causes flakiness | QA Team  | 2026-01-20      |
| profile.spec.ts:28  | Conditional flow control   | Dev Team | 2026-01-25      |
```

Fix systematically:

- Don't accumulate quarantined tests
- Set deadlines for fixes
- Review quarantine list weekly
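One lightweight way to keep the quarantine list from silently growing is a CI guard that counts `test.skip` occurrences against a budget. A sketch of the idea (the in-memory file map below stands in for reading real spec files from disk, and `BUDGET` is an arbitrary example threshold):

```typescript
// Fail CI when the number of quarantined (test.skip) specs exceeds a budget.
// Demo data: in CI you would read real spec file contents instead.
const specs: Record<string, string> = {
  'checkout.spec.ts': "test.skip('flaky', async () => {});",
  'profile.spec.ts': "test('stable', async () => {});",
};

const BUDGET = 5;

// Count spec files containing at least one quarantined test.
const quarantined = Object.entries(specs).filter(([, src]) =>
  src.includes('test.skip(')
).length;

if (quarantined > BUDGET) {
  throw new Error(`Quarantine over budget: ${quarantined} > ${BUDGET}`);
}
console.log(`quarantined=${quarantined} budget=${BUDGET}`);
```

Wire this into the weekly quarantine review so the budget, like the fix deadlines, is enforced rather than aspirational.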
### Migrate One Directory at a Time

Large test suite? Improve incrementally:

**Week 1: `tests/auth/`**

1. Run `test-review` on auth tests
2. Fix critical issues
3. Re-review
4. Mark directory as "modernized"

**Week 2: `tests/api/`**

Same process

**Week 3: `tests/e2e/`**

Same process

Benefits:
- Focused improvement
- Visible progress
- Team learns patterns
- Lower risk
### Document Migration Status

Track which tests are modernized:

```md
# Test Suite Status

| Directory          | Tests | Quality Score | Status         | Notes          |
| ------------------ | ----- | ------------- | -------------- | -------------- |
| tests/auth/        | 15    | 85/100        | ✅ Modernized   | Week 1 cleanup |
| tests/api/         | 32    | 78/100        | ⚠️ In Progress | Week 2         |
| tests/e2e/         | 28    | 62/100        | ❌ Legacy      | Week 3 planned |
| tests/integration/ | 12    | 45/100        | ❌ Legacy      | Week 4 planned |
```

**Legend:**

- ✅ Modernized: Quality >80, no critical issues
- ⚠️ In Progress: Active improvement
- ❌ Legacy: Not yet touched

## Common Brownfield Challenges
### "We Don't Know What Tests Cover"

Problem: No documentation, unclear what tests do.

Solution:

1. Run `trace` - TEA analyzes tests and maps to requirements
2. Review traceability matrix
3. Document findings
4. Use as baseline for improvement

TEA reverse-engineers test coverage even without documentation.
### "Tests Are Too Brittle to Touch"

Problem: Afraid to modify tests (might break them).

Solution:

1. Run tests, capture current behavior (baseline)
2. Make small improvement (fix one hard wait)
3. Run tests again
4. If they still pass, continue
5. If they fail, investigate why

Incremental changes = lower risk

### "No One Knows How to Run Tests"
Problem: Test documentation is outdated or missing.

Solution:

1. Document manually or ask TEA to help analyze test structure
2. Create `tests/README.md` with:
   - How to install dependencies
   - How to run tests (`npx playwright test`, `npm test`, etc.)
   - What each test directory contains
   - Common issues and troubleshooting
3. Commit documentation for team

Note: `framework` is for new test setup, not existing tests. For brownfield, document what you have.
### "Tests Take Hours to Run"

Problem: Full test suite takes 4+ hours.

Solution:

1. Configure parallel execution (shard tests across workers)
2. Add selective testing (run only affected tests on PR)
3. Run full suite nightly only
4. Optimize slow tests (remove hard waits, improve selectors)

Before: 4 hours sequential
After: 15 minutes with sharding + selective testing

How `ci` helps:
- Scaffolds CI configuration with parallel sharding examples
- Provides selective testing script templates
- Documents burn-in and optimization strategies
- But YOU configure workers, test selection, and optimization
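As one concrete starting point for the sharding you configure yourself, Playwright's built-in `--shard` flag splits the suite across machines. A hedged GitHub Actions sketch (the job layout and shard count of 4 are illustrative, not the `ci` workflow's output; adapt to your pipeline):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        shard: [1, 2, 3, 4]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      # Each job runs one quarter of the suite in parallel.
      - run: npx playwright test --shard=${{ matrix.shard }}/4
```

Four shards alone often turns a 4-hour sequential run into roughly an hour; selective testing closes the rest of the gap.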
With Playwright Utils burn-in:
- Smart selective testing based on git diff
- Volume control (run percentage of affected tests)
- See Integrate Playwright Utils
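The burn-in idea itself is simple enough to sketch without any tooling: run the same test body repeatedly and treat any failure as flakiness. A minimal stand-alone sketch (the `burnIn` helper below is illustrative, not the Playwright Utils API):

```typescript
// Run an async test body `runs` times; any failure marks the test flaky.
async function burnIn(
  run: () => Promise<void>,
  runs = 10
): Promise<{ passed: number; failed: number; stable: boolean }> {
  let passed = 0;
  let failed = 0;
  for (let i = 0; i < runs; i++) {
    try {
      await run();
      passed++;
    } catch {
      failed++;
    }
  }
  // Stability bar: 0 failures across all runs.
  return { passed, failed, stable: failed === 0 };
}

// A deterministic body passes every run:
burnIn(async () => {}, 10).then((r) =>
  console.log(`${r.passed}/10 stable=${r.stable}`) // prints "10/10 stable=true"
);
```

The library version adds what this sketch lacks: picking which tests to burn in from the git diff, and capping the volume of affected tests it reruns.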
### "We Have Tests But They Always Fail"

Problem: Tests are so flaky they're ignored.

Solution:

1. Run `test-review` to identify flakiness patterns
2. Fix top 5 flaky tests (biggest impact)
3. Quarantine remaining flaky tests
4. Re-enable as you fix them

Don't let perfect be the enemy of good.

## Brownfield TEA Workflow
### Recommended Sequence

1. Documentation (if needed):
   - `document-project`
2. Baseline (Phase 2):
   - `trace` Phase 1 - Establish coverage baseline
   - `test-review` - Establish quality baseline
3. Planning (Phase 2-3):
   - `prd` - Document requirements (if missing)
   - `architecture` - Document architecture (if missing)
   - `test-design` (system-level) - Testability review
4. Infrastructure (Phase 3):
   - `framework` - Modernize test framework (if needed)
   - `ci` - Setup or improve CI/CD
5. Per Epic (Phase 4):
   - `test-design` (epic-level) - Focus on regression hotspots
   - `automate` - Add missing tests
   - `test-review` - Ensure quality
   - `trace` Phase 1 - Refresh coverage
6. Release Gate:
   - `nfr-assess` - Validate NFRs (if enterprise)
   - `trace` Phase 2 - Gate decision

## Related Guides
Workflow Guides:
- How to Run Trace - Baseline coverage analysis
- How to Run Test Review - Quality audit
- How to Run Automate - Fill coverage gaps
- How to Run Test Design - Risk assessment
Customization:
- Integrate Playwright Utils - Modernize tests with utilities
## Understanding the Concepts

- Engagement Models - Brownfield model explained
- Test Quality Standards - What makes tests good
- Network-First Patterns - Fix flakiness
- Risk-Based Testing - Prioritize improvements
## Reference

- TEA Command Reference - All 8 workflows
- TEA Configuration - Config options
- Knowledge Base Index - Testing patterns
- Glossary - TEA terminology
Generated with BMad Method - TEA (Test Architect)