
ask … using

Send a task to a reasoning provider (LLM, symbolic planner, solver). The primary step type for AI-powered reasoning within a machine. Every ask step is governed: the runtime checks permissions, enforces token budgets, records the decision in the behavioral ledger, and mediates the call through the governance interpreter.

When to use

Use ask when you need:

  • Natural language understanding or generation
  • Classification, extraction, or summarization
  • Decision-making that requires judgment
  • Structured output from unstructured input

Use compute for deterministic calculations. Use call to invoke another machine. Use decide for rule-based branching.
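For instance, a line-item total is deterministic arithmetic, so it belongs in a compute step rather than an ask. A minimal sketch, using the compute syntax shown later on this page (the step and field names are illustrative):

```
compute total
  {
    line_total: input.price * input.quantity
  }
```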

Three variants

| Variant | Syntax | Purpose |
|---|---|---|
| `ask … using` | `ask classify, using: "anthropic:claude-sonnet-4-6"` | Send a task to an LLM provider |
| `ask … from` | `ask data, from: "@mashin/actions/http/get"` | Request data from an effect machine |
| `ask … of` | `ask status, of: "monitor"` | Query a running machine’s state |

This page covers ask ... using. See the linked pages for the other variants.

Syntax

```
ask <name>, using: "<provider>:<model>"
  with task "<instruction>"
  with role "<system prompt>"
  returns
    <field> as <type>
  assuming
    <field>: <mock value>
```

Configuration

ConfigRequiredDescription
usingYesProvider and model: "anthropic:claude-sonnet-4-6", "openai:gpt-4.1", "ollama:llama3"
with taskYesThe instruction sent to the model. Supports input.* and steps.* interpolation.
with roleNoSystem prompt. Sets the model’s persona or constraints.
returnsNoStructured output schema. Fields the model must return.
assumingNoMock values for testing. Used in test/simulate mode instead of calling the real model.
with toolsNoList of tool names the model can call during reasoning.
with temperatureNoSampling temperature (0.0 to 2.0). Lower = more deterministic.
with max_tokensNoMaximum tokens in the response.
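The optional tuning settings follow the same `with` pattern as `with task` and `with role`. A hedged sketch of an extraction step pinned to near-deterministic output (the exact literal forms for numeric options are assumed to mirror the string-valued settings):

```
ask extract, using: "anthropic:claude-sonnet-4-6"
  with task "Extract the invoice number from: ${input.document}"
  with temperature 0.0
  with max_tokens 200
  returns
    invoice_number as text
```

A temperature of 0.0 is a common choice for extraction, where you want the most likely answer rather than varied phrasing.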

Examples

Simple classification

```
machine sentiment
  accepts
    text as text, is required
  responds with
    sentiment as text
    confidence as number
  implements
    ask classify, using: "anthropic:claude-sonnet-4-6"
      with task "Classify the sentiment of this text as positive, negative, or neutral. Return a confidence score between 0 and 1."
      with role "You are a sentiment analysis expert."
      returns
        sentiment as text
        confidence as number
      assuming
        sentiment: "positive"
        confidence: 0.95
```

Using input interpolation

```
ask summarize, using: "anthropic:claude-sonnet-4-6"
  with task "Summarize this document in 3 bullet points: ${input.document}"
  returns
    bullets as list
```

Chaining with compute

```
machine email_triage
  accepts
    subject as text, is required
    body as text, is required
  implements
    ask analyze, using: "anthropic:claude-sonnet-4-6"
      with task "Analyze this email. Determine priority (high/medium/low), category, and whether it needs a response."
      with role "You are an executive assistant triaging emails."
      returns
        priority as text
        category as text
        needs_response as boolean
        suggested_action as text
      assuming
        priority: "medium"
        category: "general"
        needs_response: true
        suggested_action: "Review and respond within 24 hours"
    compute format_response
      {
        priority: steps.analyze.priority,
        category: steps.analyze.category,
        action: steps.analyze.needs_response
          ? "Respond: " + steps.analyze.suggested_action
          : "No response needed",
        triaged_at: now()
      }
```

With tool use

```
ask research, using: "anthropic:claude-sonnet-4-6"
  with task "Research this company and provide a brief summary"
  with tools ["web_search", "read_url"]
  returns
    summary as text
    sources as list
```

Providers

Supported provider prefixes:

| Provider | Prefix | Example Models |
|---|---|---|
| Anthropic | `anthropic:` | claude-sonnet-4-6, claude-haiku-4-5 |
| OpenAI | `openai:` | gpt-4.1, gpt-4.1-mini |
| Google | `google:` | gemini-2.5-pro, gemini-2.5-flash |
| Ollama (local) | `ollama:` | llama3, mistral, codellama |
| Groq | `groq:` | llama-3.3-70b |

The model is resolved at runtime through cell settings. If using is omitted, the cell’s default model is used (configured in Cell.Settings).
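For example, a step that relies on the cell's default model simply drops the `using` clause. A hedged sketch, assuming the rest of the step is unchanged when `using` is omitted:

```
ask classify
  with task "Classify this ticket by product area"
  returns
    area as text
```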

Governance

Every ask step is governed:

  1. Permission check: the machine must have reason capability
  2. Token budget: checked before the call; denied if budget would be exceeded
  3. Consent: in interactive mode, the user is asked to approve the LLM call
  4. Behavioral ledger: the call, its cost, tokens used, and model response are recorded
  5. Cost tracking: input/output tokens and estimated cost are tracked per execution

In test mode, assuming values are returned instead of calling the real model. This makes tests fast, deterministic, and free.

Testing with assuming

The assuming block provides mock return values for test and simulate modes:

```
ask classify, using: "anthropic:claude-sonnet-4-6"
  with task "Classify this input"
  returns
    category as text
    confidence as number
  assuming
    category: "technology"
    confidence: 0.92
```

When the machine runs in test mode, assuming values are returned immediately without calling the model. This lets you:

  • Write deterministic tests
  • Run CI without API keys
  • Verify downstream logic without LLM variability

Translations

| Language | Keyword |
|---|---|
| English | `ask` |
| Spanish | `pregunta` |
| French | `demande` |
| German | `frage` |
| Japanese | `質問` |
| Chinese | |
| Korean | `질문` |

See also