Quickstart
Run a durable agent workflow that uses a remote MCP server, drafts a brief, pauses for human review, and saves the result.
The project is self-contained — no documents to bring, no local MCP server to run.
Time: ~10 minutes.
You’ll learn:
- Scaffold an agent project from a template
- Compose MCP tools and provider-hosted search inside an agent
- Pause a workflow for human review and resume on approval
- Inspect the run trace in Studio
Prerequisites:
- The `agnt5` CLI installed and authenticated. See Install.
- Python 3.12 or newer.
- An OpenAI API key from platform.openai.com, or mock mode (Step 2).
Set up with an AI coding assistant — paste this prompt into Claude Code, Cursor, Copilot, etc.
The assistant will scaffold and run the quickstart end-to-end.
Help me run the AGNT5 quickstart end-to-end.
Assume I have already installed the agnt5 CLI (agnt5 version works in a fresh terminal) and run agnt5 auth login. If not, point me back to https://docs.agnt5.com/docs/get-started/install first.
What to do:
1. Run agnt5 create my-investigator --template python/quickstart and cd into the new directory.
2. Ask me for OPENAI_API_KEY and write OPENAI_API_KEY="sk-..." to .env at the project root. If I do not have a key, write AGNT5_MOCK_MODE=1 to .env instead and tell me the agent draft will be a canned brief.
3. Run agnt5 dev in one terminal and confirm the worker reports "investigate_with_review" and "save_report" registered.
4. In a second terminal, run: agnt5 run investigate_with_review --input '{"question": "Should we migrate from Redis to Valkey?"}'
5. Wait for the workflow to pause for review, then point me to Studio to approve the brief.
6. After I approve, show the path of the saved report under .agnt5/reports/.
Critical rules:
- DO NOT modify workflows.py or functions.py on the first run. The quickstart works as shipped; edit only after the first successful run.
- The agent runs INSIDE the workflow. Do not call investigator.run() outside a @workflow function — checkpoints will not record.
- Tool selection is explicit. The agent gets exactly the tools listed in workflows.py: the DeepWiki MCP tools, plus provider-hosted web search by default, or the AGNT5 web_search tool if a search-provider key is set in env.
- There is no research_tools() helper. Keep tool selection visible.

Step 1: Create the project
```shell
agnt5 create my-investigator --template python/quickstart
cd my-investigator
```

The template lays down a runnable AGNT5 project:
```
my-investigator/
├── agnt5.yaml
├── app.py
├── pyproject.toml
├── README.md
└── src/agnt5_quickstart/
    ├── __init__.py
    ├── functions.py
    └── workflows.py
```

| File | Purpose |
|---|---|
| workflows.py | The workflow that connects MCP, an agent, and human review |
| functions.py | Durable side-effect functions such as save_report, plus canned_brief for mock mode |
| app.py | Registers the workflow and function with AGNT5 |
| agnt5.yaml | Project metadata for the CLI |
Step 2: Configure the environment
Add your model provider key:
```shell
echo 'OPENAI_API_KEY="sk-..."' > .env
```

`agnt5 dev` loads `.env` automatically.
No OpenAI key? Set AGNT5_MOCK_MODE=1 in .env instead. The agent draft is replaced by canned_brief(question) — a deterministic stub that returns the same five-section brief shape — but the workflow itself, the MCP connection, the human review pause, and the save_report step still run end-to-end.
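For reference, a deterministic stub in that shape can be sketched as below. This is an illustrative assumption, not the shipped code: the real `canned_brief` lives in `functions.py`, and the section names here are placeholders.

```python
# Hypothetical sketch of a canned_brief-style stub. The real implementation
# ships in functions.py; these section names are assumptions for illustration.
def canned_brief(question: str) -> str:
    """Return the same five-section brief shape for any question."""
    sections = ["Answer", "Evidence", "Trade-offs", "Recommendation", "Open questions"]
    body = "\n\n".join(f"{name}:\n- (canned content for mock mode)" for name in sections)
    return f"Question: {question}\n\n{body}"
```

The point is determinism: the same question always yields the same brief, so the workflow, MCP connection, review pause, and save step can all be exercised without a model key.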
The default search path is OpenAI’s provider-hosted web search — no extra key. To swap in an AGNT5-owned search backed by Brave, Tavily, or SearXNG, add one of:
```shell
cat >> .env <<'EOF'
AGNT5_BRAVE_SEARCH_API_KEY="..."
# or AGNT5_TAVILY_API_KEY="..."
# or AGNT5_SEARXNG_URL="https://your-searxng-instance/"
EOF
```

The first key the workflow finds wins.
Step 3: Read the workflow
Open src/agnt5_quickstart/workflows.py. The annotated highlights:
```python
@workflow
async def investigate_with_review(ctx: WorkflowContext, question: str) -> dict:
    # 1. Connect to a remote MCP server. Tools are discovered AFTER connect,
    #    so the tool list is part of each run's recorded state — not module
    #    state captured at import time.
    mcp = MCPClient(id="quickstart-deepwiki")
    mcp.add_streamable_http_server("deepwiki", DEEPWIKI_URL)
    await mcp.connect()
    mcp_tools = mcp.get_tools()  # list[Tool]

    # 2. Build the tool list explicitly. No hidden bundles — what's in here
    #    is exactly what the agent sees.
    tools = list(mcp_tools)
    built_in_tools: list[BuiltInTool] = []
    if (
        os.getenv("AGNT5_BRAVE_SEARCH_API_KEY")
        or os.getenv("AGNT5_TAVILY_API_KEY")
        or os.getenv("AGNT5_SEARXNG_URL")
    ):
        tools.insert(0, web_search(max_results=5))  # AGNT5 tool
    else:
        built_in_tools.append(BuiltInTool.WEB_SEARCH)  # provider-hosted

    # 3. Run the agent. context=ctx is REQUIRED for durability — without it,
    #    every model call replays from scratch on retry.
    investigator = Agent(
        name="quickstart_investigator",
        model="openai/gpt-4o-mini",
        instructions=INVESTIGATOR_PROMPT,
        tools=tools,
        built_in_tools=built_in_tools,
        max_iterations=6,
    )
    result = await investigator.run(user_message=question, context=ctx)
    draft = result.output  # str

    # 4. Pause for human review. Durable — the workflow is not in process
    #    memory while it waits. wait_for_user accepts input_type values
    #    "text", "approval", "select", or "multiselect".
    decision = await ctx.wait_for_user(
        question=f"Review the brief:\n\n{draft}",
        input_type="select",
        options=[
            {"id": "approve", "label": "Approve"},
            {"id": "edit", "label": "Edit before saving"},
            {"id": "reject", "label": "Reject"},
        ],
    )

    # 5. Side effect through ctx.step so it's checkpointed once and
    #    skipped on replay.
    saved = await ctx.step(save_report, question=question, brief=draft)
    return {
        "status": "approved",
        "report_path": saved["path"],
        "tool_count": len(tools) + len(built_in_tools),
    }
```

The full file in your project also handles edit / reject decisions, mock mode, and `mcp.disconnect()` in a `finally`. The five comments above mark the load-bearing lines.
How the agent loop works
AGNT5 drives the model→tool→model loop; you don’t write it.
- Each iteration is a model call followed by zero or more tool calls.
- `max_iterations=6` caps the loop — at the cap, `agent.run` returns the last assistant message in `result.output` rather than raising.
- `context=ctx` is required for durability. Each model call and each tool call is checkpointed through `ctx`, so a worker restart resumes from the last completed step instead of replaying the model call.
- `result` is an `AgentResult` with: `output: str` (the final assistant text), `tool_calls: list[dict]` (every tool the agent invoked, with `built_in: true` marking provider-hosted tools), and `handoff_to: Optional[str]` (set when the agent transferred to another agent).
See Agents for the full surface.
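As a mental model, the loop AGNT5 drives can be sketched like this. This is a toy sketch, not AGNT5 internals: the real loop also checkpoints every model call and tool call through `ctx`, and the message shapes here are invented for illustration.

```python
# Toy sketch of the model→tool→model loop. Message and reply shapes are
# assumptions for illustration only; AGNT5's real loop is checkpointed.
def run_agent_loop(call_model, tools, user_message, max_iterations=6):
    messages = [{"role": "user", "content": user_message}]
    last_text = ""
    for _ in range(max_iterations):
        reply = call_model(messages)            # one model call per iteration
        last_text = reply.get("content", "")
        tool_calls = reply.get("tool_calls", [])
        if not tool_calls:                      # no tool requests: we're done
            return last_text
        for call in tool_calls:                 # zero or more tool calls
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "name": call["name"], "content": result})
    return last_text  # at the cap: hand back the last assistant message, don't raise
```

A fake `call_model` that requests one tool and then answers exits on the second iteration with the final text, which mirrors the behavior described above.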
Step 4: Start a dev session
```shell
agnt5 dev
```

Leave this running. AGNT5 registers the workflow and streams runs, events, tool calls, and human review pauses back to Studio.
You should see the components register:
```
Registered components: investigate_with_review, save_report
Worker connected
Watching project files
```

If `agnt5 dev` exits with command not found, your shell hasn’t picked up the install — open a new terminal or source ~/.zshrc. If it exits with an authentication error, run `agnt5 auth login`.
Step 5: Run the workflow
In another terminal:
```shell
agnt5 run investigate_with_review --input '{
  "question": "Should we migrate from Redis to Valkey?"
}'
```

The run will:
- Connect to the DeepWiki MCP server over Streamable HTTP.
- Discover the DeepWiki tools (`read_wiki_structure`, `read_wiki_contents`, `ask_question`).
- Create the investigator agent with the explicit tool list.
- Read the `redis/redis` and `valkey-io/valkey` repos through DeepWiki.
- Draft an investigation brief.
- Pause for review.
Sample brief excerpt:

```
Answer: Default new services to Valkey 7.2.6+. Plan the swap on existing
Redis OSS 7.2 deployments at the next forced version bump.

Evidence:
- Redis Inc relicensed to RSALv2 / SSPLv1 in March 2024; the Linux
  Foundation forked Valkey from Redis 7.2.4 under the original BSD-3-Clause.
- Valkey ships drop-in replacements for redis-server, redis-cli, and
  redis-sentinel.
...

Open questions:
- Which managed-cache vendor does the infra team standardize on?
- Do any services use closed-source Redis modules?
```

If the run fails at MCP connect, confirm the DeepWiki endpoint is reachable: `curl -I https://mcp.deepwiki.com/mcp`. DeepWiki is a public service; intermittent failures are usually transient.
Step 6: Approve in Studio
Open Studio at app.agnt5.com (or run agnt5 context show for a custom UI URL). Navigate to your project; the new run shows up at the top of the runs list, paused at the human review step. Click it open.
The review panel shows the drafted brief and the three options you defined in wait_for_user:
| Choice | Result |
|---|---|
| Approve | Saves the generated brief unchanged |
| Edit before saving | Triggers a second wait_for_user(input_type="text"); what you paste replaces the draft |
| Reject | Returns {"status": "rejected", "draft": draft} and ends the workflow |
The pause is durable. The workflow is not waiting in process memory — stop the worker, restart later, and the approval still works. See Build locally for the durability walkthrough.
After approval, the workflow saves the report and returns:
```json
{
  "status": "approved",
  "report_path": ".agnt5/reports/investigation-2026-04-30T18-15-03.md",
  "tool_count": 4
}
```

`tool_count` is 4 because the agent received three DeepWiki tools plus one search tool (provider-hosted built-in by default; AGNT5’s `web_search` if a search-provider key is set). The `report_path` is relative to the worker’s working directory — your project root in `agnt5 dev`, the worker container in cloud.
Confirm the file landed:
```shell
cat .agnt5/reports/*.md
```

Things to watch for
- DO NOT import MCP tools at module top level. `mcp.get_tools()` runs after `await mcp.connect()` inside the workflow body, so the tool list is part of each run’s recorded state.
- Always use `await ctx.step(save_report, ...)` for side effects you want checkpointed. A bare `await save_report(...)` runs every replay.
(Two earlier rules — “agent must run inside a workflow” and “no research_tools() helper” — are stated where the code introduces them, in Step 3 and the agent loop section.)
What you built
A workflow that combines the AGNT5 primitives:
- A workflow orchestrating the investigation.
- An agent with explicit tools deciding which to call.
- An MCP client connecting to a remote server (DeepWiki) over Streamable HTTP.
- Provider-hosted web search as a zero-key default, with a key-based AGNT5 search tool as an opt-in alternative.
- Human review as a durable workflow pause, not a callback.
- A checkpointed step (`save_report`) that runs only after approval.
What you did not write: the agent loop, retry logic for the MCP connection or tool calls, checkpointing for the model call / each tool call / the human review, a coordinator or journal or broker to host any of the above, hot reload, or trace collection across all of it.
The shape that carries forward:
Agent behavior belongs inside a durable workflow.
Next steps
- Build locally — local development loop, durability test, and what to inspect in Studio.
- Run in cloud — promote the same workflow to a managed environment.
- Improve with evals — turn this run into eval data and measure changes.