setup-agents

Testing method

Error-Driven Development

Test errors first. Let the happy path emerge naturally.

Unit Tests: 105 passing
Error Scenarios: 100% catalogued before coding
Coverage Target: 90% per module
Test Layers: 3 (Unit · Integration · E2E)

What is EDD?

Error-Driven Development is a proactive testing philosophy that inverts the conventional order: instead of writing the happy path first and adding error handling later, you catalog all anticipated error scenarios before writing any implementation code.

This isn't just error handling — it's a design technique. When you think about what can go wrong first, the correct structure for the happy path becomes obvious.

Traditional Approach

  1. Write happy-path implementation
  2. Manually test it works
  3. Add error handling reactively
  4. Write tests for what was built
  5. Discover edge cases in production

EDD Approach

  1. Catalog all anticipated errors
  2. Write tests for each error scenario
  3. Write tests for the happy path
  4. Implement until all tests pass
  5. Edge cases were designed in, not discovered
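The ordering above can be sketched in miniature: the error catalog is written first as plain data, and the implementation's guard clauses mirror it entry for entry. All names here are illustrative, not the plugin's actual code.

```typescript
// Hypothetical sketch of the EDD ordering. The catalog comes first;
// the implementation is written until every cataloged case is handled.
type Outcome = { ok: true; value: string } | { ok: false; warning: string };

// Step 1: catalog the failure modes before any implementation exists.
const errorCatalog = ['empty input', 'invalid characters'] as const;

// Step 4: implement with one guard per catalog entry, then the happy path.
function normalizeProfileId(raw: string): Outcome {
  const trimmed = raw.trim();
  if (trimmed === '') return { ok: false, warning: errorCatalog[0] };
  if (!/^[a-z-]+$/.test(trimmed)) return { ok: false, warning: errorCatalog[1] };
  return { ok: true, value: trimmed };
}
```

Because the catalog is data, a reviewer can diff it against the guard clauses and spot any failure mode that was cataloged but never handled.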

EDD in setup-agents

Every module in this plugin was built error-first. Below are the error catalogs that drove each command's test suite before a single line of implementation was written.

sf setup-agents local — Error Catalog

File Already Exists

When a rule file exists and --force is not set, the write is skipped and a warning is emitted. The file is never silently overwritten.
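A minimal sketch of that guard, kept pure so the decision is testable without touching disk. The function name and shape are assumptions for illustration; the real implementation differs.

```typescript
// Sketch of the overwrite guard: decide the action, never write blindly.
type WriteDecision =
  | { action: 'write' }
  | { action: 'skip'; warning: string };

function decideWrite(fileExists: boolean, force: boolean, path: string): WriteDecision {
  // Error case first: an existing file is never silently overwritten.
  if (fileExists && !force) {
    return { action: 'skip', warning: `${path} exists; use --force to overwrite` };
  }
  return { action: 'write' };
}
```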

No Tools Detected

When no tool indicators (.cursor/, .vscode/, etc.) are found, all 5 tools are configured as a safe default. Never returns empty.

Non-interactive Mode

When process.stdin.isTTY is false (CI, scripts), the profile defaults to developer with a warning instead of hanging on an interactive prompt.
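The TTY fallback can be sketched as a pure function so the CI path is testable without faking stdin. `resolveProfile` and the prompt callback are illustrative names.

```typescript
// Sketch of the non-interactive fallback: never block on a prompt.
function resolveProfile(
  isTTY: boolean,
  promptFn: () => string,
): { profile: string; warning?: string } {
  if (!isTTY) {
    // CI / scripted run: default with a warning instead of hanging.
    return { profile: 'developer', warning: 'stdin is not a TTY; defaulting to developer' };
  }
  return { profile: promptFn() };
}
```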

Invalid Profile ID

Unknown profile IDs in --profile are silently filtered. Valid profiles proceed; the command never throws on partial input.
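The filtering behaviour is a one-liner once the valid-ID set is data. The set contents below are assumptions; only `developer` is confirmed by this page.

```typescript
// Sketch of silent filtering: unknown IDs are dropped, never thrown on.
const KNOWN_PROFILES = new Set(['developer', 'admin', 'architect']); // illustrative set

function filterProfiles(requested: string[]): string[] {
  return requested.filter((id) => KNOWN_PROFILES.has(id));
}
```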

Non-Salesforce Project

Without sfdx-project.json, salesforce-standards.mdc and Agentforce workflows are not generated — no partial/incorrect output.

Cursor Scope User

When user-level scope is selected, no files are written to .cursor/rules/. Content is printed to console for manual paste. Base files (agent-guidelines.mdc, sub-agent-protocol.mdc) are still written.

sf setup-agents mcp — Error Catalog

No Org in Non-interactive Mode

When --target-org is omitted and stdin is not a TTY, a warning is emitted and an empty result is returned. Never hangs.

Corrupt mcp.json

If the existing .cursor/mcp.json cannot be parsed, a warning is emitted and the file is recreated from scratch rather than throwing.
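The parse-or-recreate pattern looks roughly like this. The file content is passed in as a string so the sketch stays pure; the real code reads `.cursor/mcp.json` from disk.

```typescript
// Sketch of the corrupt-file recovery: warn and recreate, never throw.
type McpConfig = { mcpServers: Record<string, unknown> };

function parseMcpConfig(raw: string): { config: McpConfig; warning?: string } {
  try {
    const parsed = JSON.parse(raw) as McpConfig;
    return { config: { mcpServers: parsed.mcpServers ?? {} } };
  } catch {
    // Corrupt JSON: start from scratch with a warning.
    return { config: { mcpServers: {} }, warning: 'mcp.json could not be parsed; recreating' };
  }
}
```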

sf org list Failure

If sf org list fails (no CLI, no auths), the error is caught, a warning is emitted, and the command falls back to "no orgs" state cleanly.

Merging Existing Servers

Adding a new org never overwrites existing mcpServers entries. The file is always read, merged, then written atomically.
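The non-destructive merge reduces to spread ordering: existing entries always win. A hedged sketch, with the function name assumed:

```typescript
// Sketch of the merge: incoming entries are added, existing ones preserved.
function mergeServers<T>(
  existing: Record<string, T>,
  incoming: Record<string, T>,
): Record<string, T> {
  // Spread order matters: `existing` last means it takes precedence.
  return { ...incoming, ...existing };
}
```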

sf setup-agents update — Error Catalog

Nothing to Update

When all files have the current version token, the command exits early with an informative message rather than running unnecessarily.

Missing Version Token

Files with no version token (manually created, or very old) are treated as stale and included in the update set.
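Stale detection can be sketched as below. The token format and current version are assumptions for illustration; the real files embed their own marker.

```typescript
// Sketch of stale detection: no token at all counts as stale.
const CURRENT_VERSION = '1.4.0'; // illustrative

function isStale(fileContent: string): boolean {
  const match = fileContent.match(/setup-agents-version:\s*(\S+)/);
  // Hand-made or very old files have no token and join the update set.
  if (!match) return true;
  return match[1] !== CURRENT_VERSION;
}
```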

Stale Workflows

.a4drules/workflows/*.md are scanned separately from .a4drules/*.md. Forgetting the subdirectory would leave workflow files perpetually outdated.

No Profiles Inferable

When no profile .mdc files exist in .cursor/rules/, the update defaults to developer profile rather than failing.

The EDD Process

  1. Catalog anticipated errors

    Before writing any implementation, list every edge case, invalid input, and failure mode for the module. Group by category (file I/O, user input, environment, integration).

  2. Write error scenario tests

    For each cataloged error, write a test that asserts the correct behaviour: warning emitted, empty result returned, file not modified, etc.

  3. Write happy-path tests

    With error cases covered, the happy path tests are minimal and focused. They assert positive outcomes against an already-hardened design.

  4. Implement until tests pass

    Write the implementation guided by the test suite. The error catalog ensures guard clauses and safe defaults are built in from the start.

  5. One assertion per test

    Each test asserts exactly one observable outcome using Chai's expect. This makes failures immediately diagnosable without reading through multiple assertions.

Comparison with Other Methodologies

| Aspect | Traditional TDD | BDD | EDD (this project) |
|---|---|---|---|
| Test order | Happy path → edge cases | Behaviour scenarios | Error catalog → happy path |
| Error timing | After implementation | In scenario definition | Before any code |
| Coverage | Encountered errors only | Described behaviours | All anticipated errors |
| Design benefit | Forces small functions | Forces shared vocabulary | Forces safe defaults from the start |
| CI speed | Full suite always | Full suite always | Error tests first (fast feedback loop) |

Test Structure

test/
├── commands/
│   └── setup-agents/
│       ├── local.test.ts    ← 89 tests (detection, profiles, agentforce workflows)
│       ├── mcp.test.ts      ← 12 tests (global flag, merge, non-interactive)
│       └── update.test.ts   ← 11 tests (stale detection, workflows subdirectory)
└── unit/
    ├── generators/
    │   ├── mdc-generator.test.ts
    │   ├── claude-generator.test.ts
    │   ├── workflow-generator.test.ts
    │   └── shared.test.ts
    └── setup/
        └── cursor-setup.test.ts  ← 9 tests (user scope path)

Run the full test suite:

node --loader ts-node/esm --no-warnings=ExperimentalWarning \
  ./node_modules/mocha/bin/mocha.js "test/**/*.test.ts" --timeout 30000

See the architecture that makes EDD tractable

Pure generators and dependency injection make every error scenario testable without complex mocking.
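A sketch of that dependency-injection shape, with all names invented for illustration: the command takes a narrow filesystem interface, and tests supply an in-memory one, so every error path runs without mocking frameworks.

```typescript
// Sketch: a narrow injected interface makes error paths trivially testable.
type FileSystem = {
  exists(path: string): boolean;
  write(path: string, content: string): void;
};

function writeRules(fs: FileSystem, files: Record<string, string>, force: boolean): string[] {
  const warnings: string[] = [];
  for (const [path, content] of Object.entries(files)) {
    // Error case first: never silently overwrite without --force.
    if (fs.exists(path) && !force) {
      warnings.push(`${path} exists; skipped`);
      continue;
    }
    fs.write(path, content);
  }
  return warnings;
}

// In tests, an in-memory FileSystem stands in for the real disk.
const mem = new Map<string, string>();
const fakeFs: FileSystem = {
  exists: (p) => mem.has(p),
  write: (p, c) => { mem.set(p, c); },
};
```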