Test Trait Tagging
Analyzes test suites and tags each test with a standardized set of traits (e.g., positive, negative, critical-path, boundary, smoke, regression). Use when the user wants to categorize, audit, or label tests with traits. Do not use for writing new tests, running tests, or migrating test frameworks.
Workflow
Step 1: Detect the test framework
Examine project files and source code to determine the framework — see the exp-dotnet-test-frameworks skill for the complete detection table (package references, test markers, assertion APIs, and skip annotations).
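As a minimal illustration of the package-reference signal (a hypothetical Python sketch — it covers only the three frameworks named in this skill; real detection should follow the full table in exp-dotnet-test-frameworks):

```python
# Hypothetical sketch: infer the test framework from a .csproj's
# <PackageReference> entries alone. Package IDs are the real NuGet names.
FRAMEWORK_PACKAGES = {
    "MSTest.TestFramework": "MSTest",
    "xunit": "xUnit",
    "NUnit": "NUnit",
}

def detect_framework(csproj_text):
    """Return the framework name, or None if no known package is referenced."""
    for package, framework in FRAMEWORK_PACKAGES.items():
        if f'Include="{package}"' in csproj_text:
            return framework
    return None
```

For example, `detect_framework('<PackageReference Include="NUnit" Version="3.14.0" />')` returns `"NUnit"`.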
Step 2: Scan existing traits
Check which tests already have trait attributes:
| Framework | Existing Attribute | Example |
|-----------|--------------------|---------|
| MSTest | `[TestCategory("...")]` | `[TestCategory("positive")]` |
| xUnit | `[Trait("Category", "...")]` | `[Trait("Category", "positive")]` |
| NUnit | `[Category("...")]` | `[Category("positive")]` |
Record which tests already have tags to avoid duplication.
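Such a scan could be sketched as follows (a hypothetical helper; it assumes tests are `.cs` files under a given root and only detects file-level presence, not which individual methods are tagged):

```python
import re
from pathlib import Path

# One regex covering the three attribute shapes from the table above:
# [TestCategory(...)], [Category(...)], and [Trait(...)].
TRAIT_RE = re.compile(r'\[(?:TestCategory|Category|Trait)\s*\(')

def files_with_existing_traits(root):
    """Return the .cs files under root that already carry a trait attribute."""
    return sorted(
        p for p in Path(root).rglob("*.cs")
        if TRAIT_RE.search(p.read_text(encoding="utf-8"))
    )
```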
Step 3: Classify each test method
For each test method without traits, analyze:
- Method name -- names containing `Invalid`, `Fail`, `Error`, `Throw`, `Reject`, `BadInput`, `Null`, `Negative` suggest `negative`
- Assertion type -- `Assert.ThrowsException`, `Assert.Throws`, `Should().Throw()` suggest `negative`
- Input values -- `null`, `""`, `0`, `-1`, `int.MaxValue`, `int.MinValue`, empty collections suggest `boundary`
- Setup complexity -- minimal setup with basic assertions suggests `smoke`; external dependencies suggest `integration`
- Comments and names -- references to issue numbers or "regression" / "bug" / "fix for #..." suggest `regression`
- Timing assertions -- `Stopwatch`, `BenchmarkDotNet`, elapsed-time checks suggest `performance`
- Feature centrality -- tests on primary public API entry points or critical user workflows suggest `critical-path`
- Security patterns -- validating auth, checking permissions, sanitizing input, testing for injection, handling tokens/secrets suggest `security`
- Parallel/async constructs -- `Task.WhenAll`, `Parallel.ForEach`, locks, `SemaphoreSlim`, `ConcurrentDictionary`, race-condition names suggest `concurrency`
- Fault injection -- simulating failures, testing retries, timeouts, or circuit breakers suggests `resilience`
- State mutation -- deleting external records, dropping resources, or modifying shared/global state suggests `destructive`
- Full-stack flow -- a test spanning entry point through data layer to final response, covering a complete user scenario, suggests `end-to-end`
- Config/settings -- loading configuration, testing missing keys, validating options, checking environment variables suggest `configuration`
- Known instability -- `[Ignore]`/`[Skip]`, comments about flakiness, or names containing "flaky"/"intermittent" suggest `flaky`
- Default -- if the test verifies a normal success path, tag `positive`
When in doubt between positive and negative, read the assertion: if it asserts success -> positive; if it asserts failure -> negative.
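The name-based signals above could be sketched like this (hypothetical; a real classifier would also inspect assertions, inputs, setup, and the other bullets, not just names and comments):

```python
# Keyword lists taken from the name- and comment-based signals above;
# every other signal (assertions, inputs, timing, ...) is out of scope here.
NEGATIVE_NAMES = ("Invalid", "Fail", "Error", "Throw", "Reject",
                  "BadInput", "Null", "Negative")
REGRESSION_HINTS = ("regression", "bug", "fix for #")

def traits_from_name(method_name, comment=""):
    traits = set()
    if any(k in method_name for k in NEGATIVE_NAMES):
        traits.add("negative")
    text = f"{method_name} {comment}".lower()
    if any(h in text for h in REGRESSION_HINTS):
        traits.add("regression")
    # Default per the last bullet: a test with no other signal is positive.
    return traits or {"positive"}
```

On the examples from Step 4, `traits_from_name("Parse_NullInput_ThrowsArgumentNullException")` yields `{"negative"}` and `traits_from_name("CreateOrder_ValidItems_ReturnsConfirmation")` falls through to `{"positive"}`.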
Step 4: Apply trait attributes
Add the appropriate attribute to each test method. Place trait attributes on the line directly above or below the existing test attribute.
MSTest:

```csharp
[TestMethod]
[TestCategory("negative")]
[TestCategory("boundary")]
public void Parse_NullInput_ThrowsArgumentNullException() { ... }
```

xUnit:

```csharp
[Fact]
[Trait("Category", "positive")]
[Trait("Category", "critical-path")]
public void CreateOrder_ValidItems_ReturnsConfirmation() { ... }
```

NUnit:

```csharp
[Test]
[Category("regression")]
[Category("negative")]
public void Calculate_OverflowInput_ReturnsError() // Fix for #1234
{ ... }
```
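Emitting the right attribute text per framework reduces to a lookup (a hypothetical sketch; it assumes the framework was identified in Step 1 and mirrors the three examples above):

```python
# Attribute formats mirror the MSTest/xUnit/NUnit examples above.
ATTRIBUTE_FORMATS = {
    "MSTest": '[TestCategory("{trait}")]',
    "xUnit":  '[Trait("Category", "{trait}")]',
    "NUnit":  '[Category("{trait}")]',
}

def trait_attribute_lines(framework, traits):
    """One attribute line per trait, ready to place next to the test attribute."""
    fmt = ATTRIBUTE_FORMATS[framework]
    return [fmt.format(trait=t) for t in sorted(traits)]
```

A side benefit of standardized traits is selective runs, e.g. `dotnet test --filter "TestCategory=negative"` for MSTest-style categories.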
Step 5: Generate trait summary
After tagging, produce a summary table of the applied traits -- for example, each trait alongside the number of tests carrying it.
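One possible shape for that summary (a hypothetical sketch that counts traits and renders a Markdown table):

```python
from collections import Counter

def trait_summary(tagged):
    """tagged maps test name -> set of traits; returns a Markdown count table."""
    counts = Counter(t for traits in tagged.values() for t in traits)
    rows = ["| Trait | Tests |", "|-------|-------|"]
    rows += [f"| {trait} | {n} |" for trait, n in counts.most_common()]
    return "\n".join(rows)
```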
Related skills
- exp-dotnet-test-frameworks: Reference data for .NET test framework detection patterns, assertion APIs, skip annotations, setup/teardown methods, and common test smell indicators across MSTest, xUnit, NUnit…
- Audits .NET test mock usage by tracing each mock setup through the production code's execution path to find dead, unreachable, redundant, or replaceable mocks.