Platform Governance & Delivery v1.0.0

MCAF: ML/AI Delivery

Apply MCAF ML/AI delivery guidance for data exploration, feasibility, experimentation, testing, responsible AI, and operating ML systems. Use when the repo includes model training, inference, data science workflows, or ML-specific delivery planning.

Trigger On

  • the repo contains model training, inference, experimentation, or data-science workflows
  • ML work needs explicit process, testing, or responsible-AI guidance
  • a delivery discussion mixes product, data, and model concerns

Workflow

  1. Separate product assumptions, data assumptions, and model assumptions.
  2. Keep experimentation traceable and testable.
  3. Treat responsible AI, data quality, and ML-specific verification as first-class requirements.
  4. Load only the references that match the current ML stage.
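As one way to make steps 1 and 2 concrete, an experiment record can carry its product, data, and model assumptions alongside its results, with a fingerprint that ties an outcome back to the exact assumptions it was produced under. This is a minimal sketch; the class and field names are illustrative, not part of MCAF.

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field

# Hypothetical record: one place per experiment that separates the three
# assumption types (step 1) and keeps the result traceable (step 2).
@dataclass
class ExperimentRecord:
    experiment_id: str
    product_assumptions: list  # e.g. "users tolerate 200 ms latency"
    data_assumptions: list     # e.g. "labels are at most 2% noisy"
    model_assumptions: list    # e.g. "a linear baseline is sufficient"
    metrics: dict = field(default_factory=dict)

    def fingerprint(self) -> str:
        # Stable hash of the full record, so any reported metric can be
        # traced back to the assumptions it was measured under.
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()[:12]

record = ExperimentRecord(
    experiment_id="exp-001",
    product_assumptions=["users tolerate 200 ms inference latency"],
    data_assumptions=["training labels are at most 2% noisy"],
    model_assumptions=["a linear baseline is a fair starting point"],
    metrics={"auc": 0.87},
)
print(record.fingerprint())
```

Because the fingerprint is computed over the serialized record, editing any assumption after the fact yields a different fingerprint, which makes silent drift between what was assumed and what was reported visible.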

Deliver

  • clearer ML/AI delivery guidance
  • better links between data, experimentation, verification, and responsible AI
  • docs that match how the ML system is built and validated

Validate

  • the active ML stage is explicit
  • experimentation and evaluation are traceable
  • responsible-AI and data-quality requirements are not bolted on at the end
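The checks above can be enforced mechanically with a small pre-review gate that refuses to treat an experiment as complete until its stage, evaluation, and responsible-AI and data-quality evidence are all present. A minimal sketch, assuming a dict-shaped experiment record; the field names are illustrative, not an MCAF schema.

```python
# Hypothetical gate: required evidence for an experiment to be reviewable.
REQUIRED_FIELDS = {
    "ml_stage",               # the active ML stage is explicit
    "evaluation_metrics",     # experimentation and evaluation are traceable
    "responsible_ai_review",  # present up front, not bolted on at the end
    "data_quality_checks",
}

def missing_evidence(experiment: dict) -> set:
    """Return the required fields the experiment record does not yet cover."""
    return {f for f in REQUIRED_FIELDS if not experiment.get(f)}

draft = {"ml_stage": "experimentation", "evaluation_metrics": {"f1": 0.91}}
print(missing_evidence(draft))  # the two governance fields are still missing
```

Running the gate on every experiment record, rather than only at sign-off, is what keeps responsible-AI and data-quality requirements from becoming end-of-project additions.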

Load References

Related skills

Review .NET changes for bugs, regressions, architectural drift, missing tests, incorrect async or disposal behavior, and platform-specific pitfalls before you approve or merge…

Adopt MCAF governance in a .NET repository with the right AGENTS.md layout, repo-native docs, skill installation, verification rules, and non-trivial task workflow.