Platform Testing v0.1.0

Test Trait Tagging

Analyzes test suites and tags each test with a standardized set of traits (e.g., positive, negative, critical-path, boundary, smoke, regression). Use when the user wants to categorize, audit, or label tests with traits. Do not use for writing new tests, running tests, or migrating test frameworks.

Workflow

Step 1: Detect the test framework

Examine project files and source code to determine the framework — see the exp-dotnet-test-frameworks skill for the complete detection table (package references, test markers, assertion APIs, and skip annotations).
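As a minimal illustration of this step (a Python sketch, not part of the skill itself; the namespace markers are the real ones, but the function name and the decision to scan only `using` directives are assumptions — the skill also checks package references):

```python
# Heuristic framework detection from C# source text.
# Namespaces below are the genuine ones for each framework; everything
# else (function name, first-match-wins policy) is illustrative.
FRAMEWORK_NAMESPACES = {
    "using Xunit;": "xUnit",
    "using NUnit.Framework;": "NUnit",
    "using Microsoft.VisualStudio.TestTools.UnitTesting;": "MSTest",
}

def detect_framework(source: str):
    """Return the framework name suggested by using directives, or None."""
    for marker, framework in FRAMEWORK_NAMESPACES.items():
        if marker in source:
            return framework
    return None

print(detect_framework("using Xunit;\npublic class OrderTests { }"))  # xUnit
```

In practice the csproj package references (e.g. which test adapter is installed) are the more reliable signal; source scanning is a fallback.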

Step 2: Scan existing traits

Check which tests already have trait attributes:

| Framework | Existing Attribute | Example |
|-----------|--------------------|---------|
| MSTest | [TestCategory("...")] | [TestCategory("positive")] |
| xUnit | [Trait("Category", "...")] | [Trait("Category", "positive")] |
| NUnit | [Category("...")] | [Category("positive")] |

Record which tests already have tags to avoid duplication.
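The scan can be automated; here is one hedged sketch in Python (the regexes mirror the attribute forms in the table above, but the function shape and return format are assumptions):

```python
import re

# One pattern per framework, matching the attribute forms from the table.
TRAIT_PATTERNS = {
    "MSTest": re.compile(r'\[TestCategory\("([^"]+)"\)\]'),
    "xUnit": re.compile(r'\[Trait\("Category",\s*"([^"]+)"\)\]'),
    "NUnit": re.compile(r'\[Category\("([^"]+)"\)\]'),
}

def existing_traits(source: str) -> dict:
    """Return {framework: [trait values]} found in a C# source string."""
    found = {}
    for framework, pattern in TRAIT_PATTERNS.items():
        values = pattern.findall(source)
        if values:
            found[framework] = values
    return found

sample = '[TestMethod]\n[TestCategory("positive")]\npublic void T() { }'
print(existing_traits(sample))  # {'MSTest': ['positive']}
```

Note that the NUnit pattern anchors on the opening bracket, so it does not false-match inside [TestCategory(...)].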

Step 3: Classify each test method

For each test method without traits, analyze:

  1. Method name -- names containing Invalid, Fail, Error, Throw, Reject, BadInput, Null, Negative suggest negative
  2. Assertion type -- Assert.ThrowsException, Assert.Throws, Should().Throw() suggest negative
  3. Input values -- null, "", 0, -1, int.MaxValue, int.MinValue, empty collections suggest boundary
  4. Setup complexity -- minimal setup with basic assertions suggests smoke; external dependencies suggest integration
  5. Comments and names -- references to issue numbers or "regression" / "bug" / "fix for #..." suggest regression
  6. Timing assertions -- Stopwatch, BenchmarkDotNet, elapsed-time checks suggest performance
  7. Feature centrality -- tests on primary public API entry points or critical user workflows suggest critical-path
  8. Security patterns -- tests that validate auth, check permissions, sanitize input, probe for injection, or handle tokens/secrets suggest security
  9. Parallel/async constructs -- Task.WhenAll, Parallel.ForEach, locks, SemaphoreSlim, ConcurrentDictionary, or race-condition names suggest concurrency
  10. Fault injection -- tests that simulate failures or exercise retries, timeouts, or circuit breakers suggest resilience
  11. State mutation -- tests that delete external records, drop resources, or modify shared/global state suggest destructive
  12. Full-stack flow -- a test that spans the entry point through the data layer to the final response, covering a complete user scenario, suggests end-to-end
  13. Config/settings -- tests that load configuration, probe missing keys, validate options, or check environment variables suggest configuration
  14. Known instability -- a test with [Ignore]/[Skip] annotations citing flakiness, or a name containing "flaky"/"intermittent", suggests flaky
  15. Default -- if the test verifies a normal success path, tag positive

When in doubt between positive and negative, read the assertion: if it asserts success -> positive; if it asserts failure -> negative.
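A few of the heuristics above (name hints, throwing assertions, boundary inputs, positive default) can be sketched as a small classifier. This is an illustrative Python sketch, not a definitive implementation — the keyword lists are abbreviated from the rules above and the function name is an assumption:

```python
# Abbreviated hint lists drawn from the classification rules above.
NEGATIVE_NAME_HINTS = ("Invalid", "Fail", "Error", "Throw", "Reject",
                       "BadInput", "Null", "Negative")
NEGATIVE_ASSERT_HINTS = ("Assert.ThrowsException", "Assert.Throws",
                         "Should().Throw()")
BOUNDARY_INPUT_HINTS = ("null", '""', "-1", "int.MaxValue", "int.MinValue")

def classify(method_name: str, body: str) -> list:
    """Return suggested traits for one test method (heuristic, not definitive)."""
    traits = []
    if any(h in method_name for h in NEGATIVE_NAME_HINTS) or \
       any(h in body for h in NEGATIVE_ASSERT_HINTS):
        traits.append("negative")
    if any(h in body for h in BOUNDARY_INPUT_HINTS):
        traits.append("boundary")
    if not traits:
        traits.append("positive")  # default: normal success path
    return traits

print(classify("Parse_NullInput_ThrowsArgumentNullException",
               "Assert.ThrowsException<ArgumentNullException>(() => Parse(null));"))
# ['negative', 'boundary']
```

A real pass would also weigh setup complexity, comments, and the other signals listed above before committing to a tag.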

Step 4: Apply trait attributes

Add the appropriate attribute to each test method. Place trait attributes on the line directly above or below the existing test attribute.

MSTest:

[TestMethod]
[TestCategory("negative")]
[TestCategory("boundary")]
public void Parse_NullInput_ThrowsArgumentNullException() { ... }

xUnit:

[Fact]
[Trait("Category", "positive")]
[Trait("Category", "critical-path")]
public void CreateOrder_ValidItems_ReturnsConfirmation() { ... }

NUnit:

[Test]
[Category("regression")]
[Category("negative")]
public void Calculate_OverflowInput_ReturnsError() // Fix for #1234
{ ... }
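Mechanically, applying a trait means inserting an attribute line next to the test marker, preserving indentation. A hedged Python sketch of that transformation (attribute formats come from the Step 2 table; the marker list and simplified indentation handling are assumptions):

```python
# Attribute formats per framework, as listed in Step 2.
ATTRIBUTE_FORMATS = {
    "MSTest": '[TestCategory("{0}")]',
    "xUnit": '[Trait("Category", "{0}")]',
    "NUnit": '[Category("{0}")]',
}
TEST_MARKERS = ("[TestMethod]", "[Fact]", "[Theory]", "[Test]")

def apply_traits(source: str, framework: str, traits) -> str:
    """Insert trait attribute lines directly below each test marker line."""
    out = []
    for line in source.splitlines():
        out.append(line)
        if line.strip() in TEST_MARKERS:
            indent = line[: len(line) - len(line.lstrip())]
            for t in traits:
                out.append(indent + ATTRIBUTE_FORMATS[framework].format(t))
    return "\n".join(out)

before = "[Fact]\npublic void CreateOrder_ValidItems_ReturnsConfirmation() { }"
print(apply_traits(before, "xUnit", ["positive", "critical-path"]))
```

A production version would need to skip methods that already carry the trait (per Step 2) and handle attributes sharing a line with the marker.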

Step 5: Generate trait summary

After tagging, produce a summary table that lists each trait and the number of tests carrying it.
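The summary can be generated directly from the tagging results; a minimal sketch (the two-column markdown layout is one possible format, not mandated by this skill):

```python
from collections import Counter

def trait_summary(tagged: dict) -> str:
    """tagged maps test name -> list of traits; returns a markdown count table."""
    counts = Counter(t for traits in tagged.values() for t in traits)
    rows = ["| Trait | Count |", "|-------|-------|"]
    for trait, count in sorted(counts.items()):
        rows.append(f"| {trait} | {count} |")
    return "\n".join(rows)

tagged = {
    "Parse_NullInput_ThrowsArgumentNullException": ["negative", "boundary"],
    "CreateOrder_ValidItems_ReturnsConfirmation": ["positive", "critical-path"],
}
print(trait_summary(tagged))
```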

Related skills

exp-dotnet-test-frameworks: Reference data for .NET test framework detection patterns, assertion APIs, skip annotations, setup/teardown methods, and common test smell indicators across MSTest, xUnit, NUnit…

Audits .NET test mock usage by tracing each mock setup through the production code's execution path to find dead, unreachable, redundant, or replaceable mocks.