ask ... using
Send a task to a reasoning provider (LLM, symbolic planner, solver). The primary step type for AI-powered reasoning within a machine. Every ask step is governed: the runtime checks permissions, enforces token budgets, records the decision in the behavioral ledger, and mediates the call through the governance interpreter.
When to use
Use ask when you need:
- Natural language understanding or generation
- Classification, extraction, or summarization
- Decision-making that requires judgment
- Structured output from unstructured input
Use compute for deterministic calculations. Use call to invoke another machine. Use decide for rule-based branching.
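For example, a triage flow often pairs the two: an ask step for the judgment call, then a compute step for the deterministic follow-up. A minimal sketch using the syntax from the examples below (the step and field names are illustrative):

```
implements
  ask label, using: "anthropic:claude-sonnet-4-6"
    with task "Pick the best category for this ticket: ${input.ticket}"
    returns category as text
    assuming category: "billing"

  compute route { queue: "support-" + steps.label.category }
```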
Three variants
| Variant | Syntax | Purpose |
|---|---|---|
| ask ... using | ask classify, using: "anthropic:claude-sonnet-4-6" | Send a task to an LLM provider |
| ask ... from | ask data, from: "@mashin/actions/http/get" | Request data from an effect machine |
| ask ... of | ask status, of: "monitor" | Query a running machine's state |
This page covers ask ... using. See the linked pages for the other variants.
Syntax
```
ask <name>, using: "<provider>:<model>"
  with task "<instruction>"
  with role "<system prompt>"
  returns <field> as <type>
  assuming <field>: <mock value>
```

Configuration
| Config | Required | Description |
|---|---|---|
| using | Yes | Provider and model: "anthropic:claude-sonnet-4-6", "openai:gpt-4.1", "ollama:llama3" |
| with task | Yes | The instruction sent to the model. Supports input.* and steps.* interpolation. |
| with role | No | System prompt. Sets the model's persona or constraints. |
| returns | No | Structured output schema. Fields the model must return. |
| assuming | No | Mock values for testing. Used in test/simulate mode instead of calling the real model. |
| with tools | No | List of tool names the model can call during reasoning. |
| with temperature | No | Sampling temperature (0.0 to 2.0). Lower = more deterministic. |
| with max_tokens | No | Maximum tokens in the response. |
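The optional configs compose with the required ones. A hedged sketch combining them (the value syntax after with temperature and with max_tokens, and the clause ordering, are assumptions; only the config names are documented above):

```
ask extract, using: "anthropic:claude-sonnet-4-6"
  with task "Extract the invoice number and total from: ${input.invoice_text}"
  with role "You are a precise data-extraction assistant."
  with temperature 0.0
  with max_tokens 256
  returns
    invoice_number as text
    total as number
  assuming
    invoice_number: "INV-001"
    total: 100.0
```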
Examples
Simple classification
```
machine sentiment
  accepts
    text as text, is required
  responds with
    sentiment as text
    confidence as number
  implements
    ask classify, using: "anthropic:claude-sonnet-4-6"
      with task "Classify the sentiment of this text as positive, negative, or neutral. Return a confidence score between 0 and 1."
      with role "You are a sentiment analysis expert."
      returns
        sentiment as text
        confidence as number
      assuming
        sentiment: "positive"
        confidence: 0.95
```

Using input interpolation
```
ask summarize, using: "anthropic:claude-sonnet-4-6"
  with task "Summarize this document in 3 bullet points: ${input.document}"
  returns bullets as list
```

Chaining with compute
```
machine email_triage
  accepts
    subject as text, is required
    body as text, is required
  implements
    ask analyze, using: "anthropic:claude-sonnet-4-6"
      with task "Analyze this email. Determine priority (high/medium/low), category, and whether it needs a response."
      with role "You are an executive assistant triaging emails."
      returns
        priority as text
        category as text
        needs_response as boolean
        suggested_action as text
      assuming
        priority: "medium"
        category: "general"
        needs_response: true
        suggested_action: "Review and respond within 24 hours"

    compute format_response {
      priority: steps.analyze.priority,
      category: steps.analyze.category,
      action: steps.analyze.needs_response
        ? "Respond: " + steps.analyze.suggested_action
        : "No response needed",
      triaged_at: now()
    }
```

With tool use
```
ask research, using: "anthropic:claude-sonnet-4-6"
  with task "Research this company and provide a brief summary"
  with tools ["web_search", "read_url"]
  returns
    summary as text
    sources as list
```

Providers
Supported provider prefixes:
| Provider | Prefix | Example Models |
|---|---|---|
| Anthropic | anthropic: | claude-sonnet-4-6, claude-haiku-4-5 |
| OpenAI | openai: | gpt-4.1, gpt-4.1-mini |
| Google | google: | gemini-2.5-pro, gemini-2.5-flash |
| Ollama (local) | ollama: | llama3, mistral, codellama |
| Groq | groq: | llama-3.3-70b |
The model is resolved at runtime through cell settings. If using is omitted, the cell’s default model is used (configured in Cell.Settings).
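In a cell with a default model configured, the same step can run without a provider pin. A sketch under the assumption that omitting using is simply a matter of dropping the clause (the exact shortened form is not confirmed by this page):

```
ask classify
  with task "Classify this input"
  returns category as text
  assuming category: "general"
```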
Governance
Every ask step is governed:
- Permission check: the machine must have the reason capability
- Token budget: checked before the call; the call is denied if the budget would be exceeded
- Consent: in interactive mode, the user is asked to approve the LLM call
- Behavioral ledger: the call, its cost, tokens used, and model response are recorded
- Cost tracking: input/output tokens and estimated cost are tracked per execution
In test mode, assuming values are returned instead of calling the real model. This makes tests fast, deterministic, and free.
Testing with assuming
The assuming block provides mock return values for test and simulate modes:
```
ask classify, using: "anthropic:claude-sonnet-4-6"
  with task "Classify this input"
  returns
    category as text
    confidence as number
  assuming
    category: "technology"
    confidence: 0.92
```

When the machine runs in test mode, assuming values are returned immediately without calling the model. This lets you:
- Write deterministic tests
- Run CI without API keys
- Verify downstream logic without LLM variability
Translations
| Language | Keyword |
|---|---|
| English | ask |
| Spanish | pregunta |
| French | demande |
| German | frage |
| Japanese | 質問 |
| Chinese | 问 |
| Korean | 질문 |
See also
- ask … from - Request data from effect machines
- ask … of - Query running machine state
- compute - Pure computation steps
- implements - Section where steps live
- Governance reference - Permission and governance rules