Test-Driven Development (TDD)
Mandatory Testing Policy
Every new feature or bug fix MUST be accompanied by unit tests. No exceptions. This policy ensures:
- Code quality and reliability
- Prevention of regressions
- Documentation through tests
- Easier refactoring and maintenance
TDD Workflow
1. Write Test First: write a failing test that describes the desired behavior
2. Implement Feature: write the minimal code to make the test pass
3. Refactor: improve the code while keeping the tests passing
4. Repeat: continue the cycle for each new feature or fix
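The cycle above can be sketched in miniature. This is an illustrative example, not project code: `slugify()` is a hypothetical feature, and the test is written first so it fails until step 2 is done.

```python
# 1. Red: write a failing test that describes the desired behavior.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# 2. Green: write the minimal code that makes the test pass.
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")

# 3. Refactor: improve the implementation while the test stays green,
#    then repeat the cycle for the next behavior.
```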
Running Tests
Backend Tests
From the `backend/` directory:
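Typical invocations, assuming pytest is the backend test runner (as the `tests/test_*.py` files and fixtures in this guide suggest):

```shell
pytest                                         # run the full suite
pytest tests/ -v                               # verbose output
pytest tests/test_provisioner_kubeconfig.py    # a single test file
```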
Frontend Tests
From the `frontend/` directory:
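A sketch assuming an npm-based toolchain; the exact script names depend on the project's `package.json`:

```shell
npm test                  # run the frontend test suite
npm test -- --watch       # re-run tests on file changes
```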
CI/CD Tests
Every pull request automatically runs:
- All backend unit tests
- Backend regression tests:
  - `tests/test_provisioner_kubeconfig.py`
  - `tests/test_docker_sandbox_mode_detection.py`
- Linting checks
- Type checking
See `.github/workflows/backend-unit-tests.yml` for the full CI configuration.
Test Structure
Directory Organization
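A representative layout, assuming the test files named elsewhere in this guide live under `backend/tests/`; the frontend paths are illustrative:

```
backend/
├── tests/
│   ├── conftest.py
│   ├── test_provisioner_kubeconfig.py
│   ├── test_docker_sandbox_mode_detection.py
│   └── test_client_live.py
└── ...
frontend/
└── src/
    └── __tests__/
```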
Test Naming Convention
- Test files: `test_<feature>.py`
- Test classes: `TestFeatureName` or `Test<Component><Aspect>`
- Test methods: `test_<behavior_being_tested>`
Test File Structure
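A sketch of a test file following the naming convention above, assuming pytest; `Widget` and its tests are hypothetical stand-ins for real project code:

```python
"""Tests for a hypothetical widget feature (test_widget.py)."""
import pytest


class Widget:
    """Illustrative unit under test; in practice this is imported from the app."""
    def __init__(self, name: str):
        if not name:
            raise ValueError("name required")
        self.name = name


@pytest.fixture
def widget():
    """Shared setup: a ready-made instance for tests that need one."""
    return Widget("gear")


class TestWidgetCreation:
    def test_create_sets_name(self, widget):
        assert widget.name == "gear"

    def test_empty_name_raises(self):
        with pytest.raises(ValueError):
            Widget("")
```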
Writing Effective Tests
Unit Test Best Practices
1. Test One Thing at a Time
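For example: each test below asserts a single behavior of a hypothetical `parse_port()` helper, and its name states exactly which behavior (which also illustrates point 2):

```python
def parse_port(value: str) -> int:
    """Illustrative unit under test: parse and validate a TCP port."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port


def test_parse_port_parses_valid_string():
    assert parse_port("8080") == 8080


def test_parse_port_rejects_out_of_range():
    try:
        parse_port("99999")
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("expected ValueError")
```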
2. Use Descriptive Test Names
3. Use Fixtures for Shared Setup
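A minimal sketch, assuming pytest; the config dict here is illustrative, not the project's actual configuration:

```python
import pytest


@pytest.fixture
def app_config():
    """Shared setup used by every test that needs a config."""
    return {"debug": True, "timeout": 5}


def test_debug_enabled(app_config):
    assert app_config["debug"] is True


def test_timeout_positive(app_config):
    assert app_config["timeout"] > 0
```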
4. Mock External Dependencies
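For instance, a network call can be replaced with `unittest.mock.patch` so the test never touches the real service; `fetch_status()` is a hypothetical helper:

```python
import urllib.request
from unittest.mock import MagicMock, patch


def fetch_status(url: str) -> int:
    """Illustrative unit under test: returns the HTTP status of a URL."""
    with urllib.request.urlopen(url) as resp:  # external dependency
        return resp.status


def test_fetch_status_without_network():
    fake = MagicMock()
    fake.__enter__.return_value.status = 200  # what the context manager yields
    with patch("urllib.request.urlopen", return_value=fake):
        assert fetch_status("https://example.com") == 200
```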
5. Test Edge Cases and Error Conditions
6. Test Happy Path and Failure Cases
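Points 5 and 6 together can look like this sketch for a hypothetical `clamp()` helper: one happy-path test, one boundary test, one failure test:

```python
def clamp(value: int, low: int, high: int) -> int:
    """Illustrative unit under test: clamp value into [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))


def test_clamp_happy_path():
    assert clamp(5, 0, 10) == 5


def test_clamp_boundaries():
    assert clamp(-1, 0, 10) == 0   # below the range
    assert clamp(11, 0, 10) == 10  # above the range


def test_clamp_invalid_range_raises():
    try:
        clamp(1, 10, 0)
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("expected ValueError")
```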
Integration Test Guidelines
Live Tests
Live tests (like `test_client_live.py`) require actual configuration:
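For example (the environment variable names here are hypothetical; the real ones depend on the client's configuration):

```shell
export GATEWAY_URL="https://gateway.example.com"
export GATEWAY_API_KEY="..."
pytest tests/test_client_live.py
```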
Gateway Conformance Tests
Ensure client responses match Gateway API schemas.
Testing Patterns
Pattern 1: Configuration Testing
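A sketch of this pattern, assuming pytest's built-in `monkeypatch` fixture; `get_database_url()` is a hypothetical config reader:

```python
import os


def get_database_url() -> str:
    """Illustrative config reader with an env-var override."""
    return os.environ.get("DATABASE_URL", "sqlite:///local.db")


def test_database_url_reads_env(monkeypatch):
    monkeypatch.setenv("DATABASE_URL", "postgresql://test")
    assert get_database_url() == "postgresql://test"


def test_database_url_default(monkeypatch):
    monkeypatch.delenv("DATABASE_URL", raising=False)
    assert get_database_url() == "sqlite:///local.db"
```

`monkeypatch` undoes every change after the test, so tests stay independent of each other's environment.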
Pattern 2: Subprocess Testing
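A sketch of this pattern: patch `subprocess.run` so the test never spawns a real process. `run_kubectl()` is a hypothetical wrapper, not project code:

```python
import subprocess
from unittest.mock import MagicMock, patch


def run_kubectl(*args: str) -> str:
    """Illustrative wrapper around a CLI tool."""
    result = subprocess.run(
        ["kubectl", *args], capture_output=True, text=True, check=True
    )
    return result.stdout


def test_run_kubectl_returns_stdout():
    fake = MagicMock(stdout="pod-a\npod-b\n")
    with patch("subprocess.run", return_value=fake) as mock_run:
        assert run_kubectl("get", "pods") == "pod-a\npod-b\n"
        mock_run.assert_called_once()
```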
Pattern 3: File System Testing
Pattern 4: Mocking Module Imports
For circular import issues, use `conftest.py`:
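A sketch of a `conftest.py` that stubs a module before any test imports it; `heavy_sdk` is a hypothetical name for the module causing the circular import:

```python
import sys
from unittest.mock import MagicMock

# Register a stub so `import heavy_sdk` in the code under test succeeds
# without pulling in the real (circular) dependency graph.
sys.modules.setdefault("heavy_sdk", MagicMock())
```

Because pytest loads `conftest.py` before collecting tests, the stub is in place before any test module runs its imports.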
Test Coverage
Measuring Coverage
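For example, assuming the `pytest-cov` plugin is installed (`backend` is an illustrative package name):

```shell
pytest --cov=backend --cov-report=term-missing   # show uncovered line numbers
pytest --cov=backend --cov-report=html           # write an htmlcov/ report
```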
Coverage Goals
- Critical paths: 100% coverage (auth, data integrity)
- Core functionality: 80%+ coverage
- Utility functions: 70%+ coverage
- UI components: 60%+ coverage
Coverage Best Practices
- Focus on code paths, not just line coverage
- Test error handling paths
- Test boundary conditions
- Don’t chase 100% for trivial code
- Use coverage to find gaps, not as the only metric
Continuous Integration
GitHub Actions Workflow
The CI workflow runs on every pull request.
Pre-commit Hooks
Set up pre-commit hooks to run tests locally.
Debugging Failed Tests
Verbose Output
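Assuming pytest:

```shell
pytest -v    # show each test name and its result
pytest -vv   # even more detail on assertion failures
```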
Run Specific Tests
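For example (the `::test_name` selector is illustrative):

```shell
pytest tests/test_provisioner_kubeconfig.py             # one file
pytest tests/test_provisioner_kubeconfig.py::test_name  # one test
pytest -k "kubeconfig"                                  # by keyword expression
```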
Use pytest Debugger
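```shell
pytest --pdb      # drop into the pdb debugger on the first failure
pytest -x --pdb   # stop after the first failure and debug it
```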
Add Debug Output
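```shell
pytest -s                       # don't capture stdout, so print() output is visible
pytest --log-cli-level=DEBUG    # stream log messages during the run
```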
Test Maintenance
Keep Tests Fast
- Mock expensive operations (network, disk)
- Use in-memory databases for data tests
- Avoid `sleep()` calls when possible
- Run slow tests separately with markers
Keep Tests Independent
- Each test should run in isolation
- Don’t rely on test execution order
- Clean up resources in fixtures
- Use `tmp_path` for file operations
Update Tests with Code Changes
- Modify tests when refactoring code
- Add tests for new features
- Remove tests for deleted features
- Keep test documentation current