# Assertion Diversity Analysis
Analyzes the variety and depth of assertions across .NET test suites. Use when the user asks to evaluate assertion quality, find shallow testing, identify tests with only trivial assertions, measure assertion coverage diversity, or audit whether tests verify different facets of correctness. Produces metrics and actionable recommendations. Works with MSTest, xUnit, NUnit, and TUnit. DO NOT USE FOR: writing new tests (use writing-mstest-tests), detecting anti-patterns (use test-anti-patterns), or fixing existing assertions.
## Workflow
### Step 1: Gather the test code
Read all test files the user provides. If the user points to a directory or project, scan for all test files — see the exp-dotnet-test-frameworks skill for framework-specific markers.
### Step 2: Classify every assertion
For each test method, identify all assertions and classify them into these categories:
| Category | Examples | What it verifies |
|----------|----------|------------------|
| Equality | `Assert.AreEqual`, `Assert.Equal`, `Is.EqualTo` | Return value matches expected |
| Boolean | `Assert.IsTrue`, `Assert.IsFalse`, `Assert.True` | Condition holds |
| Null checks | `Assert.IsNull`, `Assert.IsNotNull`, `Assert.NotNull` | Presence/absence of value |
| Exception | `Assert.ThrowsException`, `Assert.Throws`, `Assert.ThrowsAsync` | Error handling behavior |
| Type checks | `Assert.IsInstanceOfType`, `Assert.IsAssignableFrom` | Runtime type correctness |
| String | `StringAssert.Contains`, `StringAssert.StartsWith`, `Assert.Matches` | Text content and format |
| Collection | `CollectionAssert.Contains`, `Assert.Contains`, `Assert.All`, `Has.Member` | Collection contents and structure |
| Comparison | `Assert.IsTrue(x > y)`, `Assert.InRange`, `Is.GreaterThan` | Ordering and magnitude |
| Approximate | `Assert.AreEqual(expected, actual, delta)`, `Is.EqualTo().Within()` | Floating-point or tolerance-based |
| Negative | `Assert.AreNotEqual`, `Assert.DoesNotContain`, `Assert.DoesNotThrow` | What should NOT happen |
| State/Side-effect | Assertions on object properties after mutation, verifying mock calls | State transitions and side effects |
| Structural/Deep | Assertions on nested properties, serialized forms, complex objects | Deep object correctness |
A single assertion can belong to multiple categories (e.g., `Assert.AreNotEqual` is both Equality and Negative).
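The classification above could be approximated mechanically. A minimal Python sketch, illustrative only (the patterns below are assumptions covering a handful of categories; a real pass should parse the C# syntax tree rather than pattern-match source lines):

```python
import re

# Hypothetical category patterns for a few of the 12 categories.
# AreNotEqual/NotEqual appear under both Equality and Negative,
# since one assertion can belong to multiple categories.
CATEGORY_PATTERNS = {
    "Equality": r"Assert\.(AreEqual|AreNotEqual|Equal|NotEqual)\b|Is\.EqualTo",
    "Boolean": r"Assert\.(IsTrue|IsFalse|True|False)\b",
    "Null checks": r"Assert\.(IsNull|IsNotNull|Null|NotNull)\b",
    "Exception": r"Assert\.(ThrowsException|Throws|ThrowsAsync)\b",
    "Negative": r"Assert\.(AreNotEqual|NotEqual|DoesNotContain|DoesNotThrow)\b",
}

def classify(assertion: str) -> set[str]:
    """Return every category a single assertion line matches."""
    return {name for name, pattern in CATEGORY_PATTERNS.items()
            if re.search(pattern, assertion)}

print(classify("Assert.AreNotEqual(a, b);"))   # both Equality and Negative
print(classify("Assert.IsTrue(x > 0);"))       # Boolean only
```

Regexes will misfire on edge cases (comments, string literals, custom assertion helpers), which is why this is a sketch of the classification idea, not a recommended implementation.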
### Step 3: Compute metrics
Calculate these metrics for the test suite:
#### Per-test metrics
- Assertion count: Number of assertions in each test method
- Assertion categories: Which categories each test uses
#### Suite-wide metrics
- Average assertions per test: Total assertions / total test methods
- Assertion type spread: Number of distinct assertion categories used across the suite (out of 12)
- Tests with zero assertions: Count and percentage of test methods with no assertions at all
- Tests with only trivial assertions: Count and percentage of tests where every assertion is only a null check or `Assert.IsTrue(true)`; trivial means no meaningful value verification
- Tests with negative assertions: Count and percentage (target: at least 10% of tests should verify what should NOT happen)
- Tests with exception assertions: Count and percentage
- Tests with state/side-effect assertions: Count and percentage
- Tests with structural/deep assertions: Count and percentage
- Single-category tests: Count and percentage of tests that use only one assertion category
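The suite-wide arithmetic is simple once each test's assertions are classified. A sketch over hypothetical per-test data (the test names and category sets below are invented for illustration):

```python
# Hypothetical per-test data: method name -> one category set per assertion.
# An empty list means the test contains no assertions at all.
tests = {
    "Adds_item":    [{"Equality"}, {"Collection"}],
    "Rejects_null": [{"Exception"}],
    "Smoke_start":  [],
    "Checks_flag":  [{"Boolean"}],
}

total = len(tests)

# Average assertions per test: total assertions / total test methods.
avg_per_test = sum(len(a) for a in tests.values()) / total

# Assertion type spread: distinct categories used anywhere in the suite.
spread = len(set().union(*(c for a in tests.values() for c in a)))

# Tests with zero assertions.
zero_assertion = sum(1 for a in tests.values() if not a)

# Single-category tests: at least one assertion, but only one category overall.
single_category = sum(1 for a in tests.values()
                      if a and len(set().union(*a)) == 1)

print(avg_per_test, spread, zero_assertion, single_category)  # 1.0 4 1 2
```

The percentages reported in Step 5 follow directly, e.g. `zero_assertion / total * 100`.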
### Step 4: Apply calibration rules
Before reporting, calibrate findings:
- Trivial means truly trivial. `Assert.IsNotNull(result)` alone is trivial. But `Assert.IsNotNull(result)` followed by `Assert.AreEqual(expected, result.Value)` is not; the null check is a guard before the real assertion. Only flag a test as "trivial" if it has no meaningful value assertions.
- Boolean assertions checking meaningful conditions are not trivial. `Assert.IsTrue(result.IsValid)` checks a specific property; it's a Boolean assertion, not a trivial one. `Assert.IsTrue(true)` is trivial.
- Consider the test's intent. A test for a void method that verifies state change on a dependency is legitimate even if it only uses `Assert.IsTrue`.
- Exception tests are inherently low-assertion-count. `Assert.ThrowsException<T>(() => ...)` may be the only assertion; that's fine for exception-focused tests. Don't penalize them for low assertion count.
- Don't conflate diversity with volume. A test with 20 `Assert.AreEqual` calls has high volume but low diversity. A test with one equality, one null check, and one exception assertion has low volume but good diversity.
- If assertions are well-diversified, say so. A report concluding the suite has good diversity is perfectly valid.
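The first calibration rule can be stated as a predicate: a test is flagged trivial only if it has assertions and every one of them is a trivial form. A sketch, assuming each assertion has already been labelled (the label names here are a hypothetical scheme, not part of any framework):

```python
# Hypothetical per-assertion labels. A null check that guards a later
# value assertion still carries the "null_check_alone" label; the test
# escapes the trivial flag because the value assertion is also present.
TRIVIAL_FORMS = {"null_check_alone", "assert_true_literal"}

def is_trivial_test(assertion_labels: list[str]) -> bool:
    """Flag a test as trivial only when it asserts something,
    yet none of its assertions meaningfully verify a value."""
    return bool(assertion_labels) and all(
        label in TRIVIAL_FORMS for label in assertion_labels)

print(is_trivial_test(["null_check_alone", "value_equality"]))  # False: guarded
print(is_trivial_test(["null_check_alone"]))                    # True
print(is_trivial_test([]))                                      # False: zero-assertion, reported separately
```

Note that zero-assertion tests are deliberately excluded here; they belong in the separate "tests with zero assertions" metric from Step 3.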
### Step 5: Report findings
Present the analysis in this structure:
- Summary Dashboard — A quick-reference table of key metrics:
  | Metric | Value | Assessment |
  |--------|-------|------------|
  | Total tests | 25 | — |
  | Average assertions per test | 2.4 | Moderate |
  | Assertion type spread | 5/12 | Low |
  | Tests with zero assertions | 3 (12%) | Concerning |
  | Tests with only trivial asserts | 4 (16%) | Acceptable |
  | Tests with negative assertions | 2 (8%) | Below target |
  | Single-category tests | 15 (60%) | High |
- Category Breakdown — For each assertion category, show:
  - How many tests use it
  - Representative examples from the code
  - Whether it's overused or underused relative to the code under test
- Gap Analysis — Based on the production code (if available), identify:
  - Behaviors that are tested but only with equality checks
  - Error paths with no exception assertions
  - State-changing methods with no state verification
  - Collections returned but never checked for contents
- Recommendations — Prioritized list of improvements:
  - Which tests would benefit most from additional assertion types
  - Which assertion categories are missing and why they matter
  - Concrete examples of assertions that could be added
- Assertion-free tests — If any exist, list each one with its method name and what it appears to be testing, so the user can decide whether to add assertions or mark them as intentional smoke tests.
## Related skills
- Audits .NET test mock usage by tracing each mock setup through the production code's execution path to find dead, unreachable, redundant, or replaceable mocks.
- Performs pseudo-mutation analysis on .NET production code to find gaps in existing test suites.