Introduction to Agents

Build AI agents from first principles. Learn when to use them, how to build them, and how to run them in production.

What is Mashin?

Mashin is a governed intelligence substrate: a deterministic foundation for building, running, and governing intelligent software. It transforms AI from an unpredictable black box into a controlled, auditable reasoning system where cognitive processes run safely, predictably, and at enterprise scale.

At its core is a deterministic language, also called Mashin, that compiles to BEAM bytecode (the compiled format that runs on the Erlang virtual machine, known for fault tolerance and concurrency; the same runtime that powers WhatsApp and Discord).

If You’ve Used ChatGPT or Claude

You already understand the building blocks. Here’s how concepts you know map to Mashin:

| What you know | Mashin equivalent |
| --- | --- |
| Giving AI a persona ("You are a helpful assistant…") | `with role` (sets the AI's role) |
| Writing a prompt ("Classify this text…") | `with task` (the instruction) |
| Getting structured JSON output | `returns` (declares the shape of the response) |
| Using plugins or function calling | Tools in `ask` steps (governed actions the AI can take) |
| A Custom GPT or Claude Project | A machine (a self-contained, reusable unit of work) |
| Chaining multiple prompts together | A flow of `ask` steps |
| An AI that can search the web and use tools | An agent (AI that decides what actions to take) |
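
To ground the mapping, here is a hedged sketch of a persona plus prompt expressed as a single `ask` step. The machine name, model id, and field names are invented for illustration; the `with role` and `with task` forms are the ones named in the table above:

```
// A persona + prompt as one governed ask step.
// Machine name, model id, and field names are illustrative.
machine assistant "Helpful Assistant"
  accepts
    question as string, is required
  responds with
    answer as string
  implements
    ask respond, using: "anthropic:claude-haiku-4"
      with role "You are a helpful assistant"              // the persona
      with task "Answer the question: ${input.question}"   // the prompt
      returns
        answer as string, is required                      // structured output
```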

This course teaches you to build agents: AI systems that can reason, use tools, and make decisions. You don’t need to write all the code from scratch. Koda (Mashin’s intelligent development environment) can generate machines from plain language descriptions. This course teaches you to read, understand, and customize what Koda builds.

The Language

Mashin files (.mashin) define machines — portable cognitive computers with typed inputs, outputs, and governed execution. Each machine carries its code, state, and history. Think of a machine as a recipe with superpowers: it declares what ingredients it needs (inputs), what dish it produces (outputs), and every cooking step is tracked, permission-controlled, and auditable.

A machine is made of steps, and each step has a type:

| Step Type | Purpose | Analogy |
| --- | --- | --- |
| `ask` | Reasoning with an LLM, or invoking an effect machine; returns structured output | Asking a chef for their opinion |
| `compute` | Pure computation, no I/O | Doing math or measuring ingredients |
| `decide` | Conditional routing | Choosing which path to take |
| `remember` / `recall` | Store and retrieve knowledge | Writing in and reading from a notebook |
| `wait for` | Suspend and resume | Waiting for a delivery |
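
As a sketch of how steps chain inside `implements`, here are two `ask` steps where the second reads the first step's output through the `steps.name.field` accessor described on this page. The prompts, step names, and model id are invented for illustration:

```
implements
  // Step 1: classify the input text
  ask classify, using: "anthropic:claude-haiku-4"
    with task "Classify this text into a category: ${input.text}"
    returns
      category as string, is required
  // Step 2: read the previous step's output via steps.classify.category
  ask summarize, using: "anthropic:claude-haiku-4"
    with task "Summarize this ${steps.classify.category} text: ${input.text}"
    returns
      summary as string, is required
```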

Steps live inside implements. For multiple execution paths, use named flows.

The core design principle: code computes, machines effect. compute steps are pure (no I/O by construction). They can transform data but can’t touch the network, filesystem, or database. All side effects (actions that affect the outside world) go through effect machines: dedicated machines whose sole job is to perform a specific governed operation (like making a web request or reading a file). This means every action is auditable and permission-controlled.
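
As an illustration of the principle, a step that needs I/O delegates to a stdlib effect machine rather than touching the network itself. This sketch assumes an `ask` step can invoke `@mashin/actions/http/get`; the `with url` parameter name and the `body` field are guesses for illustration only:

```
implements
  // Side effect: delegated to a governed stdlib effect machine.
  // The parameter name (url) and return field are assumed for illustration.
  ask fetch, using: "@mashin/actions/http/get"
    with url "https://example.com/status"
    returns
      body as string
```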

The Platform

Beyond the language, the governed intelligence substrate includes:

  • Koda + Kits — The cognition layer. Koda is the intelligent development environment that helps you design, inspect, and govern machines. Describe what you want in plain language, and Koda generates working Mashin code. Every module in this course includes a Koda exercise. Kits is the governed reasoning framework underneath.
  • Visual Builder (Forge) — A drag-and-drop interface for building machines visually. The visual representation and .mashin code stay in sync.
  • Runtime — The execution engine with governance, audit trails, and crash recovery. Machines run as isolated processes that automatically restart if something goes wrong.
  • Standard Library (stdlib) — Pre-built effect machines for common operations: @mashin/actions/http/* for web requests, @mashin/actions/tools/* for file operations and web search, @mashin/actions/file/* for filesystem access, and more.
  • Registry — Share and discover machines, like an app store for AI workflows. Machines use namespaced names like @mashin/actions/http/get (standard library) or @myorg/effects/slack_notify (your organization’s custom machines).

Key Terms

| Term | Definition |
| --- | --- |
| Machine | A portable cognitive computer defined in a .mashin file — carries its code, state, and history with typed inputs, outputs, and governed steps |
| Step | A single operation within a machine (`ask`, `compute`, `decide`, `remember`, `recall`, `wait for`) |
| Flow | A named sequence of steps under `implements`. Use flows for multiple named paths. |
| Effect machine | A machine whose job is to perform a governed side effect (web request, file read, etc.) — keeps all actions auditable |
| Stdlib | The standard library (@mashin/actions/*) — pre-built effect machines that ship with Mashin |
| LLM | Large language model — an AI model like Claude or GPT that generates text from prompts |
| Agent | A machine where the AI decides what actions to take at runtime, rather than following a fixed flow |
| Governed | Every step is tracked, permission-controlled, and auditable — Mashin records what ran, what it produced, and whether it was allowed to run |
| Koda | Mashin’s intelligent development environment. Describe what you want in plain language, and Koda generates a working machine |

Reading MashinTalk Syntax

MashinTalk uses keyword-hierarchy syntax (indentation-based, no braces or do...end). Here’s a quick guide:

| Syntax | Meaning |
| --- | --- |
| `machine name "Display Name"` | A machine definition |
| `ask name, using: "model"` | An AI reasoning step |
| `compute name` | A pure computation step |
| `accepts` | Declares what data the machine accepts |
| `responds with` | Declares what data the machine returns |
| `implements` | Where steps live |
| `"text"` | A string |
| `42`, `3.14` | Numbers (integer and decimal) |
| `true`, `false` | Boolean values |
| `{key: "value"}` | An object/map |
| `input.field` | Read a value from the machine's inputs |
| `steps.name.field` | Read a value from a previous step's output |
| `state.field` | Read a value from the machine's persistent state |

Don’t worry about memorizing this; it will become natural as you work through the modules.

A Quick Example

Here’s a complete machine that classifies text. Read the annotations (comments starting with //) to follow along:

```
machine classifier "Text Classifier"
  accepts
    text as string, is required        // A text string (required)
  responds with
    category as string                 // A text string
    confidence as decimal              // A decimal number (0.0 to 1.0)
  implements
    ask classify, using: "anthropic:claude-haiku-4"   // An AI reasoning step
      with task "Classify this text into a category: ${input.text}"
      returns
        category as string, is required
        confidence as decimal, is required
```

That’s a working machine: typed inputs, a governed AI call, and structured output. If you’ve used ChatGPT’s JSON mode or function calling, this is the same idea with governance built in. This course teaches you to build progressively more capable machines, from simple classifiers like this to full autonomous agents.

What You’ll Learn

By the end of this course, you’ll be able to:

  • Decide when a task needs an agent vs a simple workflow
  • Write ask steps with structured output (like ChatGPT’s JSON mode, but governed)
  • Give an AI tools and let it drive an action loop
  • Manage state and memory across interactions
  • Build full ReAct agents (Reason-Act-Observe loops) with tool dispatch
  • Compose multi-agent systems from specialist machines
  • Apply governance, error handling, and cost controls
  • Ask Koda to generate agents and understand what it produces

Prerequisites

  • Comfortable with AI tools like ChatGPT, Claude, or similar — you understand prompts, structured output, and basic automation concepts
  • Programming experience helpful but not required — Koda can generate machines for you; this course teaches you to read and customize them
  • Mashin installed and running (Quickstart)
  • Access to an LLM provider (Anthropic or OpenAI API key configured)

How to Use This Course

Each module follows a pattern:

  1. Concepts — What you’re learning and why it matters, explained with analogies and diagrams
  2. Koda generates it: Ask Koda to build something, then study what it produced
  3. Understand the code — Walk through the machine line by line
  4. Key syntax — Quick reference for the patterns introduced
  5. Common mistakes — What goes wrong and how to fix it

Estimated time: 4-6 hours total (30-45 minutes per module)

The Complexity Ladder

Before diving in, internalize this principle. It guides the entire course:

```
Level 1: Linear       A ──► B ──► C                  Steps are known, no branching
Level 2: Conditional  A ──► if X? ──► B or C         One decision point
Level 3: Iterative    A ──► for each ──► B           Process multiple items
Level 4: Resilient    A ──► retry/fallback ──► B     External calls may fail
Level 5: Agentic      A ──► AI decides ──► ???       Task varies by input
```

Always start at Level 1. Most tasks don’t need agents. Modules 01-05 teach you Levels 1-4. Modules 06-08 teach Level 5.

Modules

| # | Module | What You’ll Build |
| --- | --- | --- |
| 01 | What Are Agents? | Mental model for agents vs workflows |
| 02 | Your First Reasoning Step | Sentiment analyzer with structured output |
| 03 | Tools, The Agent Primitive | Research assistant with web search |
| 04 | State and Memory | Stateful machine with persistent memory |
| 05 | Control Flow and Loops | Multi-step pipeline with branching |
| 06 | Building a ReAct Agent | Complete ReAct agent with tool dispatch |
| 07 | Composition and Multi-Agent | Coordinator with specialist machines |
| 08 | Production Patterns | Governance, error handling, cost controls |

Reference Material

These pages are referenced throughout the course: