v0.2.0 — Apache 2.0

Install. Run.
Aar: tiny agent with no baggage.

Aar is a lean, ready-to-use AI agent with chat, TUI, and web interfaces. Built-in tools, persistent sessions, and MCP support — including GitLab. Pick your provider. Start talking. It just works.

$ git clone https://github.com/fischerf/aar && cd aar && pip install -e ".[all]"
3 Interfaces · 5 Built-in Tools · 4 Providers · 0 Vendor Lock-in
Not a framework demo. A working agent.
Install, pick a provider, and go. Interactive chat, a polished TUI with token counting, an HTTP API with SSE streaming, or one-shot batch runs. All with persistent, resumable sessions.
aar chat

Interactive Chat

Multi-turn conversations with tool use, session save, and resume

aar tui

Rich TUI

Markdown rendering, tool call panels, live token counter, status bar

aar serve

Web API

HTTP + SSE streaming. POST to chat, subscribe to events in real time

aar run

One-Shot

Single task execution. Perfect for scripts, CI pipelines, and automation

$ git clone https://github.com/fischerf/aar && cd aar
$ pip install -e ".[anthropic,mcp]"
$ aar chat --mcp-config tools/mcp_config.json
Session a3f8c1 started · Claude Sonnet · 5 built-in + 7 MCP tools loaded

you › List my open GitLab issues and read the latest one

  ▸ calling gitlab_list_my_issues (0.3s)
  ▸ calling gitlab_read_issue (0.2s)

Found 3 open issues. The latest is #42 — Fix pagination in API ...
Built-in tools: bash · read_file · write_file · edit_file · list_directory · + any MCP server
Also a framework. A tiny one.
Aar is a working agent — but it's also a framework you can build on. The entire core loop fits in your head and on a single screen. No magic. No bloat.
~80 lines of code
Call LLM → Parse Tools → Execute → Feed Back

Readable. Debuggable. Yours.

Most frameworks bury their runtime under layers of abstraction. Aar does the opposite — the core loop is ~80 lines of plain Python. Every LLM call, every tool execution, every event is visible and timed.

while not done and step < max_steps:
  response = await provider.complete(messages, tools)
  step += 1

  if response.tool_calls:
    results = await executor.execute(response.tool_calls)
    session.append(results)
    continue

  session.append(response)  # done
Everything you need. Nothing you don't.
Each piece is modular. Use what you need, swap what you want, extend what's missing.

Typed Event Model

Every message, tool call, and result is a typed, serializable event. Perfect for replay, debugging, and audit trails.
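One way to picture a typed, serializable event is a plain dataclass that round-trips through JSON. The class and field names below are illustrative assumptions, not Aar's actual schema:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class ToolCallEvent:
    """One tool invocation, recorded as a replayable event (illustrative schema)."""
    tool: str
    args: dict
    duration_s: float
    trace_id: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Serialize for storage or an audit trail...
event = ToolCallEvent(tool="read_file", args={"path": "README.md"},
                      duration_s=0.2, trace_id="a3f8c1")
line = json.dumps(asdict(event))

# ...and round-trip it back for replay.
restored = ToolCallEvent(**json.loads(line))
assert restored == event
```

Because every event is just data, replay and audit tooling can stay decoupled from the agent loop itself.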

Safe by Default

Path restrictions, command deny-lists, approval gates, sandboxed execution. Safety is declarative — not bolted on.

Pluggable Transports

CLI, Rich TUI, web API with SSE, or embed in your own code. Same agent, different interfaces — zero rewiring.

Observable

Every provider call and tool execution is timed. Sessions carry stable trace IDs. Built-in metrics — no extra wiring.

MCP Support

Connect GitHub, databases, file systems — any MCP server becomes native tools. Stdio and HTTP transports built-in.
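An `mcp_config.json` along these lines would wire up a stdio GitLab server and an HTTP endpoint. The exact keys Aar expects are an assumption here, modeled on the common `mcpServers` convention used by MCP clients:

```json
{
  "mcpServers": {
    "gitlab": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-gitlab"],
      "env": { "GITLAB_PERSONAL_ACCESS_TOKEN": "glpat-..." }
    },
    "internal-api": {
      "transport": "http",
      "url": "https://example.com/mcp"
    }
  }
}
```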

Persistent Sessions

JSONL-based storage. Resume any conversation. Compact old sessions. Every run is replayable and auditable.
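Because the format is plain JSONL, a session can be read and extended with nothing but the standard library. This sketch assumes one JSON object per line; Aar's actual on-disk schema may differ:

```python
import json
from pathlib import Path

def replay(session_path: str) -> list[dict]:
    """Load a JSONL session: one event per line, in order."""
    events = []
    for line in Path(session_path).read_text().splitlines():
        if line.strip():                      # tolerate blank lines
            events.append(json.loads(line))
    return events

def append_event(session_path: str, event: dict) -> None:
    """Resume is just 'read all lines, keep appending'."""
    with open(session_path, "a") as f:
        f.write(json.dumps(event) + "\n")
```

Append-only writes also mean a crashed run loses at most the line being written, and any session file can be inspected with `jq` or a text editor.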

One interface. Any model.
Write your agent once. Run it on Claude, GPT-4o, Llama, or any OpenAI-compatible API. Switch providers with a single config change.

Anthropic

Claude Sonnet, Opus
Thinking blocks

OpenAI

GPT-5.x
Reasoning models

Ollama

Llama, Qwen, DeepSeek
Run locally

Generic

Any OpenAI-compatible
Azure, Together, etc.

Or build your own agent in 10 lines.
my_agent.py
import asyncio

from agent import Agent, AgentConfig
from agent.providers import AnthropicProvider

agent = Agent(
  config=AgentConfig(
    provider=AnthropicProvider(),
    system_prompt="You are a helpful assistant.",
  )
)

result = asyncio.run(agent.run("What can you help me with?"))
print(result.content)

Simple by design. Powerful by default.

Create an agent in a few lines. Add tools with a decorator. Swap providers with a single import. No ceremony, no boilerplate.

  • Custom tools — register with a decorator and type hints. Schemas are inferred automatically.
  • Session resume — pick up any conversation where it left off. Sessions are JSONL — portable and inspectable.
  • MCP integration — pass an mcp_config.json and external tools appear as native.
  • Cancellation — cooperative and hard cancellation built-in. No zombie runs.
  • Extended thinking — Anthropic thinking blocks and OpenAI o1/o3 reasoning, first-class.
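Decorator-based registration with schema inference can be sketched in a few lines. The `@tool` name, the registry, and the type mapping below are assumptions for illustration, not Aar's exact API:

```python
import inspect

TOOLS: dict[str, dict] = {}   # tool name -> JSON-schema-style description

_PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool(fn):
    """Register a function as a tool, inferring its parameter schema from type hints."""
    sig = inspect.signature(fn)
    params = {
        name: {"type": _PY_TO_JSON.get(p.annotation, "string")}
        for name, p in sig.parameters.items()
    }
    TOOLS[fn.__name__] = {
        "description": (fn.__doc__ or "").strip(),
        "parameters": params,
    }
    return fn

@tool
def word_count(text: str, min_length: int) -> int:
    """Count words in text at least min_length characters long."""
    return sum(len(w) >= min_length for w in text.split())
```

The docstring becomes the tool description and the annotations become the parameter types, so there is no separate schema to keep in sync.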
Guardrails, not guardhopes.
Safety in Aar is declarative and default-on. Define what's allowed — everything else is denied.

Path Restrictions

Glob-based deny-lists block access to .env, credentials, keys, and system files by default.

Approval Gates

Require human approval for writes, shell commands, or any side-effecting operation before execution.

Command Deny-lists

Dangerous shell patterns (rm -rf /, curl|sh, fork bombs) are blocked before they reach the OS.

Sandbox Modes

Run tools in local or subprocess sandboxes. Read-only mode blocks all side effects with a single flag.
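A default-deny policy of this kind can be sketched with `fnmatch` globs. The `SafetyPolicy` class and its field names are hypothetical, not Aar's configuration surface:

```python
import fnmatch
from dataclasses import dataclass, field

@dataclass
class SafetyPolicy:
    """Declarative policy: anything matching a deny pattern is blocked before execution."""
    denied_paths: list = field(
        default_factory=lambda: [".env", "*.pem", "**/credentials*"]
    )
    denied_commands: list = field(
        default_factory=lambda: ["rm -rf /*", "*curl*|*sh*"]
    )
    read_only: bool = False

    def allow_path(self, path: str) -> bool:
        return not any(fnmatch.fnmatch(path, pat) for pat in self.denied_paths)

    def allow_command(self, cmd: str) -> bool:
        if self.read_only:           # one flag blocks all side effects
            return False
        return not any(fnmatch.fnmatch(cmd, pat) for pat in self.denied_commands)

policy = SafetyPolicy()
assert not policy.allow_path(".env")        # secrets blocked by default
assert policy.allow_path("src/main.py")     # ordinary files allowed
```

The key property is that the checks run before a tool touches the OS, so a new tool inherits the same guardrails without any per-tool wiring.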

Ready to build?

Aar is open source under Apache 2.0. Star it, fork it, break it, extend it.