# Run your first agent

This walks through running an agent (a body that calls an LLM with `bash`, `read_file`, and `write_file` tools) inside a microVM. The example ships in three flavors: Anthropic Claude, OpenAI, and Google Gemini. The flow is identical; only the example folder and the API key env var change.

If you just want to see microagent boot a VM and run a command, start with Run your first microVM.
## Before you start

1. Install microagent and run `microagent doctor`.
2. Pick a provider and set the matching API key:

   | Provider | Example folder | API key env var | Sign up |
   | --- | --- | --- | --- |
   | Anthropic Claude | `examples/minimal-body` | `ANTHROPIC_API_KEY` | console.anthropic.com |
   | OpenAI | `examples/minimal-body-openai` | `OPENAI_API_KEY` | platform.openai.com |
   | Google Gemini | `examples/minimal-body-gemini` | `GEMINI_API_KEY` | aistudio.google.com |

3. Clone the microagent repo to get the example sources:

   ```shell
   git clone https://github.com/geoffbelknap/microagent.git
   cd microagent
   ```
The rest of this page uses the Anthropic example. To follow along with
OpenAI or Gemini instead, swap minimal-body for minimal-body-openai or
minimal-body-gemini in every command, and use the matching API key env var.
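If you're scripting the walkthrough for several providers, the folder-and-key mapping from the table above can be captured in a small helper. This is just a sketch built from the table and the `create` invocation shown below; the CLI itself doesn't need it:

```python
# Provider -> (example folder, API key env var), mirroring the table above.
PROVIDERS = {
    "anthropic": ("examples/minimal-body", "ANTHROPIC_API_KEY"),
    "openai": ("examples/minimal-body-openai", "OPENAI_API_KEY"),
    "gemini": ("examples/minimal-body-gemini", "GEMINI_API_KEY"),
}


def create_command(provider: str) -> str:
    """Build the `microagent create` invocation for a provider."""
    folder, key_var = PROVIDERS[provider]
    return (
        f"microagent create --file {folder}/microagent.yaml "
        f"--env {key_var}=${key_var}"
    )


print(create_command("openai"))
```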
## Create the workspace

```shell
microagent create \
  --file examples/minimal-body/microagent.yaml \
  --env ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY
```

The spec sets the workspace name to `minimal-body`; that's what the rest of the commands refer to. First-time create takes a minute or two: microagent pulls the base Python image, builds the rootfs, installs Pydantic and the Anthropic SDK, and copies the body source in. The API key is passed in as an env var so it stays out of the spec file.
## Send a request

The body reads requests from `/workspace/input.json`. Drop the first one in with `microagent cp`:

```shell
microagent cp examples/minimal-body/demo/input-001.json minimal-body:/workspace/input.json
```

The request asks for a concrete task: write a Python script, run it, show the output.
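Once you start writing your own requests, a malformed file only surfaces as an error inside the VM. A minimal pre-flight check that the file parses as JSON at all (the `"task"` field below is illustrative, not the real request schema; see the shipped demo requests for that):

```python
import json
import sys
from pathlib import Path


def check_request(path: str) -> None:
    """Fail fast if the request file isn't valid JSON."""
    try:
        json.loads(Path(path).read_text())
    except (OSError, json.JSONDecodeError) as exc:
        sys.exit(f"{path}: not a usable request: {exc}")
    print(f"{path}: OK")


# Write a trivially valid request, then check it.
# The field name is hypothetical; copy a demo file for the real shape.
Path("input.json").write_text('{"task": "write a Python hello script"}')
check_request("input.json")
```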
## Run it

```shell
microagent start minimal-body
microagent --json result minimal-body
```

The body boots, calls the LLM with `bash` / `read_file` / `write_file` tools, runs the tool calls inside `/workspace`, and writes a result. You'll see the LLM's summary in the result's `content` field.
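If you're consuming the result from a script, `--json` makes it machine-readable. A sketch of pulling out the summary, assuming only the top-level `content` field mentioned above (the rest of the result's shape isn't documented here):

```python
import json


def summary_from_result(raw: str) -> str:
    """Extract the LLM's summary from `microagent --json result` output.

    Assumes a top-level "content" field, per the walkthrough; treat the
    rest of the result's shape as unknown.
    """
    return json.loads(raw)["content"]


# Stand-in for real `microagent --json result minimal-body` output.
sample = '{"content": "Wrote and ran hello.py; it printed Hello, world!"}'
print(summary_from_result(sample))
```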
The file the LLM wrote is still on the workspace's disk. Pull it out:

```shell
microagent cp minimal-body:/workspace/hello.py ./hello.py
cat ./hello.py
```

## Halt, ask a follow-up, resume

The workspace persists between starts: disk, files, all of it. Halt cleanly, drop in a new request, start again. The LLM can read whatever it wrote on the previous run.

```shell
microagent halt minimal-body
microagent cp examples/minimal-body/demo/input-002.json minimal-body:/workspace/input.json
microagent start minimal-body
microagent --json result minimal-body
```

The second request asks the LLM to read `/workspace/hello.py` and explain it. The file is still there from the first run.
## Clean up

```shell
microagent halt minimal-body
microagent delete minimal-body
```

`delete` removes the workspace record and its disk.
## What's next

- Build a simple agent: the same flow with more on the body's structure, prompt caching, and the production-shape gaps (mediation channel, host-side proxy for keys).
- `microagent.yaml`: the full workspace spec reference.
- State and identity: what `microagent --json status` reports and how lifecycle events are emitted.
- Glossary: workspace, mediation, halt vs stop vs kill vs quarantine.