Run Prism as a repeatable, policy-aware pipeline.

This guide focuses on two operational goals:

- repeatable, auditable scan runs in CI
- policy enforcement that fails builds on violations instead of allowing silent drift

Provenance matters here because policy should be built on findings that are auditable, confidence-aware, and explicit about uncertainty.

Lane Split

Quick lane (single repository): run Steps 1–3 for a baseline scan, enforcement, and machine-readable outputs.

Platform lane (multi-team standard): run all steps, adding feedback integration and a standardized operational policy.

Step 1: Run a Baseline Scan Job

pip install -e .
prism role path/to/role --detailed-catalog -o README.generated.md

You should see: deterministic README output in CI artifacts.
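Because the output is deterministic, a common way to make Step 1 enforceable is a drift check: regenerate the README in CI and fail the job if it no longer matches the committed copy. A minimal sketch (the comparison logic is generic CI scripting, not a Prism feature; file names follow the commands above):

```python
import sys
from pathlib import Path


def readme_drift(committed: str, generated: str) -> bool:
    """Return True if the committed README no longer matches the generated one."""
    return Path(committed).read_text() != Path(generated).read_text()


# In CI, after `prism role path/to/role --detailed-catalog -o README.generated.md`:
#
#   if readme_drift("README.md", "README.generated.md"):
#       print("README drift detected; regenerate and commit README.md")
#       sys.exit(1)
```

Run this after the generation step so the job fails as soon as documentation and code diverge.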

Step 2: Enable Policy Enforcement

Enable strict checks once teams have remediated their existing findings:

prism role path/to/role \
  --fail-on-unconstrained-dynamic-includes \
  --fail-on-yaml-like-task-annotations \
  -o README.generated.md

You should see: pipeline failure on policy violations instead of silent drift.

Equivalent config in .prism.yml:

scan:
  fail_on_unconstrained_dynamic_includes: true
  fail_on_yaml_like_task_annotations: true

Step 3: Emit Machine-Readable Outputs

prism role path/to/role -f json -o role_scan.json
prism role path/to/role --runbook-csv-output RUNBOOK.csv -o README.generated.md

Use JSON for analytics and CSV for runbook automation.
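The JSON output can feed simple analytics directly. A sketch of aggregating findings by severity, assuming a hypothetical schema (the `findings`, `rule`, `severity`, and `task` keys are placeholders; match them to whatever `prism role ... -f json` actually emits):

```python
import json

# Hypothetical shape of role_scan.json; adjust field names to the real schema.
sample = json.loads("""
{
  "role": "my_role",
  "findings": [
    {"rule": "unconstrained-dynamic-include", "severity": "high", "task": "tasks/main.yml:12"},
    {"rule": "yaml-like-task-annotation", "severity": "low", "task": "tasks/setup.yml:3"}
  ]
}
""")


def count_by_severity(scan: dict) -> dict:
    """Aggregate finding counts per severity for trend analytics."""
    counts: dict = {}
    for finding in scan.get("findings", []):
        counts[finding["severity"]] = counts.get(finding["severity"], 0) + 1
    return counts


print(count_by_severity(sample))  # prints {'high': 1, 'low': 1}
```

The same loop scales to many repositories: collect each repo's `role_scan.json` and sum the per-severity counts to get a fleet-wide view.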

Why this matters: JSON feeds cross-repository analytics, CSV feeds runbook automation, and both keep findings auditable without manual transcription.

Step 4: Integrate Feedback Inputs

Feedback-driven settings can be supplied from a local file or an API endpoint.

prism role path/to/role \
  --feedback-from-learn /path/to/feedback.json \
  -o README.generated.md
prism role path/to/role \
  --feedback-from-learn "https://learn.example.com/api/feedback?role=my_role" \
  -o README.generated.md

You should see: selected settings adjusted according to validated recommendations.
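Before passing a feedback file to `--feedback-from-learn`, it can help to filter it down to validated recommendations only, mirroring the behavior described above. A sketch with a hypothetical payload shape (the `recommendations`, `setting`, `value`, and `validated` keys are assumptions, not a documented Prism schema):

```python
import json

# Hypothetical feedback payload; the real schema is defined by Prism and
# your Learn endpoint, so treat these keys as placeholders.
payload = json.loads("""
{
  "role": "my_role",
  "recommendations": [
    {"setting": "fail_on_unconstrained_dynamic_includes", "value": true, "validated": true},
    {"setting": "fail_on_yaml_like_task_annotations", "value": true, "validated": false}
  ]
}
""")


def validated_recommendations(feedback: dict) -> list:
    """Keep only recommendations marked as validated."""
    return [r for r in feedback.get("recommendations", []) if r.get("validated")]


print([r["setting"] for r in validated_recommendations(payload)])
# prints ['fail_on_unconstrained_dynamic_includes']
```

Writing the filtered list back to a file before the scan keeps unvalidated suggestions from influencing settings.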

Step 5: Standardize Operational Policy

Policy-As-Code Loop

Use Prism outputs as enforcement inputs, not only reporting artifacts.

Example checks teams commonly automate:

- no unconstrained dynamic includes in any role
- no YAML-like task annotations
- generated README matches the committed README (no documentation drift)

These checks are implemented in CI policy scripts using scanner flags and JSON output fields.
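A minimal policy-gate sketch over the JSON output, assuming the same hypothetical `findings`/`severity`/`rule` fields as above (swap in the real field names from your scan output):

```python
import json
import sys

# Policy threshold a team might encode; tune per repository.
MAX_HIGH_SEVERITY = 0


def gate(scan: dict) -> int:
    """Return the CI exit code: 0 if policy passes, 1 on violation."""
    high = [f for f in scan.get("findings", []) if f.get("severity") == "high"]
    if len(high) > MAX_HIGH_SEVERITY:
        for finding in high:
            print(f"policy violation: {finding.get('rule')} at {finding.get('task')}")
        return 1
    return 0


# In CI you would load the scan output and exit with the gate's code:
#
#   sys.exit(gate(json.load(open("role_scan.json"))))
demo = {"findings": [{"rule": "unconstrained-dynamic-include",
                      "severity": "high", "task": "tasks/main.yml:12"}]}
print("exit code:", gate(demo))
```

Keeping the threshold in one place makes it easy to ratchet policy tighter over time without rewriting the script.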

Leadership checks to operationalize:

- share of repositories running in strict (fail-on-violation) mode
- per-team violation counts trending down over time
- time from first report of a finding to remediation

Reference