Lesson 9: Interpretation Modes
Your email triage is live, tested, and learning. Your manager asks: “What exactly does this thing do? How much does it cost? Is it compliant with our data handling policy?”
You could explain it yourself. Or you could ask mashin to analyze the machine and answer those questions directly.
Six Ways to Analyze a Machine
Type any of these in Koda:
```
/explain email_triage
```
You see a structured description: what the machine does step by step, what inputs it needs, what it outputs, what external systems it touches, what models it uses.
No need to read the code. The explanation is generated from the machine’s structure.
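As a rough illustration only (the exact layout will vary), the description might resemble something like this, pulling together the steps, systems, and models that appear throughout this lesson:

```
email_triage
  Purpose:  classifies incoming email and routes it to a channel or task
  Steps:    classify (AI) → route (decide) → notify / task (effects)
  Systems:  Microsoft Teams, Microsoft Planner
  Models:   claude-haiku-4 (classify step)
```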
```
/cost email_triage
```
You see:
```
Estimated cost per run: $0.0003
  classify:     claude-haiku-4, ~200 tokens, $0.0003
  route:        decide (no cost)
  notify/task:  effect machine (no AI cost)

At 150 runs/day: $0.045/day, $1.35/month
```
The cost estimate walks through every step, identifies which ones call AI models, looks up current pricing, and projects daily and monthly costs.
```
/simulate email_triage
```
Runs the machine with mocked effects. The AI steps return synthetic responses from your test cases. The Teams and Planner calls are simulated. You see the full execution flow without actually sending messages or creating tasks.
```
/evaluate email_triage
```
Runs the machine against all its `verifies` test cases and reports:
```
4/4 tests passed
Accuracy: 100%
Average confidence: 0.85
```
```
/verify email_triage
```
Runs a governance compliance check:
```
[pass] Valid structure
[pass] Capabilities declared
[pass] Models on allowlist
[pass] Step count within limits
[pass] Governance sections present
[pass] Derivation trust verified
```
This is the same check that runs before any machine is promoted to production. You can run it yourself anytime.
```
/improve email_triage
```
Proposes improvements: better prompts, missing edge cases, cost optimizations. Returns a diff showing what would change, without changing anything.
Using Modes in Code
Interpretation modes are not just slash commands. You can use them in machines:
```
machine pre_deploy_check

accepts
  machine_name as text, is required

responds with
  ready as boolean
  blockers as list

implements
  ask governance, from: email_triage, to: verify
  ask costs, from: email_triage, to: cost
  ask quality, from: email_triage, to: evaluate

  compute assess {
    let blockers = []
    let blockers = if (governance.is_compliant is not true) {
      [...blockers, "governance check failed"]
    } else { blockers }
    let blockers = if (costs.total_estimated_cost > 0.50) {
      [...blockers, "cost exceeds $0.50/run"]
    } else { blockers }
    let blockers = if (quality.accuracy < 0.9) {
      [...blockers, "accuracy below 90%"]
    } else { blockers }
    {ready: blockers.length == 0, blockers: blockers}
  }
```
The line `ask governance, from: email_triage, to: verify` interprets the machine in verify mode. The `to:` modifier tells the runtime to analyze instead of execute.
This machine checks governance, cost, and quality automatically before any deployment.
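If the runtime lets one machine invoke another directly, a deploy gate could consume pre_deploy_check's answer. This is a hypothetical sketch: the invocation and parameter-passing syntax below are assumed, extrapolated from the `ask` lines shown in this lesson, and may not match the real runtime.

```
ask check, from: pre_deploy_check
  machine_name: "email_triage"

compute gate {
  if (check.ready is not true) {
    {status: "blocked", reasons: check.blockers}
  } else {
    {status: "deploy", reasons: []}
  }
}
```

The point is the shape, not the syntax: the boolean and list that pre_deploy_check responds with are ordinary values a downstream step can branch on.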
Custom Modes
The six built-in modes cover common needs. For specialized analysis, write your own:
```
machine compliance_auditor

accepts
  target_form as text, is required
  target_name as text
  standard as text, default: "internal"

responds with
  compliant as boolean
  findings as list

implements
  ask audit, using: "anthropic:claude-sonnet-4"
    with role "You are a compliance auditor for AI systems."
    with task "Check this machine against ${input.standard} compliance standards.\n\nMachine: ${input.target_form}"
    returns
      compliant as boolean
      findings as list
```
Use it as a custom interpretation mode:
```
ask audit_result, from: email_triage, to: compliance_auditor
  standard: "EU AI Act"
```
The runtime passes your triage machine's form and name to the auditor. The auditor analyzes it against the standard you specified.
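The result can then feed an ordinary compute step, using the same record and `if` syntax as pre_deploy_check. A minimal sketch; the field names come from compliance_auditor's `responds with` clause:

```
compute gate {
  if (audit_result.compliant is not true) {
    {deploy: false, reasons: audit_result.findings}
  } else {
    {deploy: true, reasons: []}
  }
}
```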
The Bigger Picture
Interpretation modes exist because machines are structured data, not opaque code. The system can walk through a machine’s steps, read its models, check its governance, estimate its costs. All without running it.
This is what it means to have a governed platform: you can understand, audit, and verify any machine at any time.
What Comes Next
You have built, deployed, tested, and analyzed an email triage system. The final lesson goes deeper: how machines can inspect and improve themselves. Metaprogramming: the machine as a value that can be transformed.