MCAF: ML/AI Delivery
Apply MCAF ML/AI delivery guidance for data exploration, feasibility, experimentation, testing, responsible AI, and operating ML systems. Use when the repo includes model training, inference, data science workflows, or ML-specific delivery planning.
Trigger On
- the repo contains model training, inference, experimentation, or data-science workflows
- ML work needs explicit process, testing, or responsible-AI guidance
- delivery discussion is mixing product, data, and model concerns
Workflow
- Separate product assumptions, data assumptions, and model assumptions.
- Keep experimentation traceable and testable.
- Treat responsible AI, data quality, and ML-specific verification as first-class requirements.
- Load only the references that match the current ML stage.
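"Keep experimentation traceable" can be made concrete with a minimal sketch: record each run's parameters, metrics, and a fingerprint of the input data so results can be tied back to exactly what produced them. The `log_run` helper below is a hypothetical illustration, not part of MCAF; real projects would typically use a tracking tool such as MLflow instead.

```python
import datetime
import hashlib
import json
import pathlib


def log_run(params: dict, metrics: dict, data_path: str, out_dir: str = "runs") -> str:
    """Append one traceable experiment record: params, metrics, and a data fingerprint."""
    # Fingerprint the training data so the run is reproducible and comparable.
    data_hash = hashlib.sha256(pathlib.Path(data_path).read_bytes()).hexdigest()[:12]
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "params": params,
        "metrics": metrics,
        "data_sha256": data_hash,
    }
    # Derive a stable run id from the record contents.
    run_id = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()[:8]
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    (out / f"{run_id}.json").write_text(json.dumps(record, indent=2))
    return run_id
```

A run logged this way can be diffed against any other run: same data hash plus different metrics points at the parameter change; different data hash flags a data-drift comparison.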
Deliver
- clearer ML/AI delivery guidance
- better links between data, experimentation, verification, and responsible AI
- docs that match how the ML system is built and validated
Validate
- the active ML stage is explicit
- experimentation and evaluation are traceable
- responsible-AI and data-quality requirements are not bolted on at the end
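Treating data quality as a first-class requirement, as the checks above ask, usually means executable tests rather than review comments. The helpers below are a minimal sketch of that idea (the function names and check set are illustrative, not prescribed by MCAF): assert schema completeness, value ranges, and the absence of target leakage before training runs.

```python
def check_no_target_leakage(feature_cols: list[str], target_col: str) -> None:
    """The target must never appear among the model's input features."""
    assert target_col not in feature_cols, "target column leaked into features"


def check_required_columns(rows: list[dict], required: set[str]) -> None:
    """Fail fast if any record is missing a required field."""
    for i, row in enumerate(rows):
        missing = required - row.keys()
        assert not missing, f"row {i} missing {missing}"


def check_value_ranges(rows: list[dict], column: str, lo: float, hi: float) -> None:
    """Catch out-of-range values before they silently skew training."""
    for i, row in enumerate(rows):
        assert lo <= row[column] <= hi, f"row {i}: {column}={row[column]} out of [{lo}, {hi}]"
```

Wired into the test suite, these run on every change to the data pipeline, so quality requirements are enforced continuously instead of being bolted on at the end.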
Load References
- read references/ml-ai-projects.md first
- open references/data-exploration.md, references/feasibility-studies.md, references/ml-fundamentals-checklist.md, references/model-experimentation.md, references/testing-data-science-and-mlops-code.md, references/responsible-ai.md, or references/ml-model-checklist.md only when that stage is active
Related skills
Adopt MCAF governance in a .NET repository with the right AGENTS.md layout, repo-native docs, skill installation, verification rules, and non-trivial task workflow.
Apply MCAF agile-delivery guidance for backlog quality, roles, ceremonies, and engineering feedback.