I shipped 31 open-source Rust CLI tools in one project. Not 31 features in one tool - 31 separate crates, each doing exactly one thing, each installable on its own. That project is dee.ink, and building it changed how I think about the right interface for AI agents. If you’re building agentic systems and haven’t thought hard about open-source Rust CLI tools for AI agents as an alternative to MCP servers, you’re leaving serious efficiency on the table.
The short version: CLI tools are dramatically more token-efficient than MCP servers for AI agent workflows. I measured 35x in my own benchmarks. Once you see that number, you can’t unsee it.
Here’s how it happened and why it matters.
Why I built dee.ink in the first place
I run a multi-agent system called OpenClaw. If you’ve read castkit, you know it handles my daily workflow - morning digests, research, crypto monitoring, content scheduling, health data. It’s not a demo. It processes real work every day.
Agents need to reach into the world. Check Hacker News. Look up SSL cert expiry. Parse an RSS feed. Generate a QR code. Turn a receipt photo into structured data. These aren’t complex tasks but they come up constantly, and every time an agent needs to do one, it needs a tool.
The popular answer right now is MCP - Model Context Protocol, Anthropic’s standard for agent tool-calling. I tried it. The overhead is real: each tool call needs a running server, connection setup, and verbose JSON-RPC framing. For stateful tools or bidirectional streams, that overhead makes sense. For “search Hacker News and return the top 10 posts,” it’s wasteful by design.
So I built CLIs instead. One tool per job. JSON output. No interactive prompts. Works with pipes. That’s it.
After I’d built a few for my own use, I realized I had the start of something worth packaging and open-sourcing. dee.ink is the result: 31 standalone Rust CLI tools built specifically to be called by AI agents.
Open-source Rust CLI tools for AI agents vs MCP: the real argument
Let me be concrete about why CLI beats MCP for most agent tool use.
An MCP server for a simple search tool looks roughly like this from the agent’s perspective: spin up the server process (or connect to an already-running one), send a JSON-RPC request with the method name and parameters, wait for the response envelope, parse the result out of the envelope. The agent has to know the MCP protocol, or more accurately, the framework wrapping the agent has to know it.
A CLI tool looks like this: ink-hn top --limit 10 --json. That’s it. The agent gets back a clean JSON array.
The token efficiency gap comes from a few places. First, CLI invocation syntax is compact. A shell command is 5-20 tokens. An MCP request envelope is 50-200 tokens before you even add the parameters. Second, every LLM ever trained has seen millions of shell commands in its training data. They’re native CLI speakers. They’re not native JSON-RPC speakers - you can see this in how confidently models generate shell invocations vs. how often they fumble JSON-RPC schema details. Third, no server to maintain means no connection overhead, no process management, no “is the server running?” failure mode.
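To make the envelope overhead concrete, here's a rough side-by-side sketch in Rust. The JSON-RPC payload is illustrative - the tool and argument names are made up, not the exact MCP wire format - and character counts are only a crude proxy for tokens:

```rust
// The shell invocation an agent emits for the CLI approach (from the article).
const CLI: &str = "ink-hn top --limit 10 --json";

// A hypothetical JSON-RPC 2.0 envelope for the same call over MCP.
// Method and argument names here are illustrative, not the real schema.
const RPC: &str = r#"{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"hn_top_stories","arguments":{"limit":10,"format":"json"}}}"#;

fn main() {
    println!("CLI chars: {}, JSON-RPC chars: {}", CLI.len(), RPC.len());
    // Even before the response envelope, the request alone is several
    // times larger than the shell command.
    assert!(RPC.len() > 3 * CLI.len());
}
```

And that's just the request side - the response envelope adds its own framing on top of the actual result data.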
The 35x efficiency number comes from comparing token usage for the same operations across both approaches in my own OpenClaw setup. It's not a controlled academic study, but it is real usage data from a system that runs 14 cron jobs daily.
There are cases where MCP is genuinely better. Long-running sessions where you want persistent state. Bidirectional streams. Tools that need to push updates back to the agent rather than return a one-shot result. For those, use MCP. But “search HN” or “check WHOIS” or “generate an invoice”? CLI wins every time.
One more thing people miss: debugging. When an MCP tool call fails inside a framework like LangChain or CrewAI, you’re often staring at a wrapped exception with zero useful context. When a CLI tool fails, you have a shell command, an exit code, and stderr output. You can reproduce it in 10 seconds. That matters a lot when you’re maintaining a system that runs overnight.
What’s inside the toolkit
The dee.ink toolkit is 31 crates across six categories. Let me walk through them.
Data and research: dee-hn, dee-arxiv, dee-reddit, dee-wiki, dee-feed, dee-ph. These are the tools I use most. An agent can check Hacker News trending stories, pull an arXiv abstract by ID, search Reddit, look up a Wikipedia article, parse an RSS feed, or get Product Hunt launches - all in one command, all as JSON.
Financial: dee-invoice, dee-receipt, dee-rates, dee-pricewatch, dee-ebay, dee-amazon. Generate invoices, parse receipts, check exchange rates, watch prices, search marketplaces.
Personal productivity: dee-contacts, dee-habit, dee-todo, dee-timer, dee-stash. The local storage tools here use SQLite via rusqlite. Your data stays on your machine. Agents can manage your contacts, log habits, check todos, start timers, or stash arbitrary data for later retrieval.
Developer tools: dee-openrouter, dee-ssl, dee-whois, dee-qr, dee-porkbun. Check SSL cert expiry, run WHOIS lookups, generate QR codes, manage Porkbun DNS records, query OpenRouter for available models and pricing.
Location: dee-food, dee-events, dee-parking, dee-gas, dee-transit. Location-aware tools for finding restaurants, events, parking, gas prices, and transit schedules.
Social and trends: dee-crosspost, dee-mentions, dee-trends. Cross-post content, monitor mentions, check trend data.
Each crate is fully standalone. Installing dee-ssl doesn’t pull in any shared dee-core dependency. You get exactly what you need, nothing more.
The technical stack
I picked Rust for a few reasons that aren’t just “Rust is fast.”

Binary size matters when you’re shipping 31 separate tools. A Go binary for a simple CLI is usually 10-15MB. A stripped Rust binary for the same tool comes in under 3MB. When someone does cargo install dee-hn, they’re downloading and compiling one focused tool. Small is respectful.
The other reason: Rust’s clap v4 with derive macros makes argument parsing almost free to write. The --help output is generated automatically from your struct definitions. Every tool in dee.ink has --help with actual usage examples because making that happen is nearly zero effort.
Here’s what the argument struct looks like for a typical tool:
use clap::{Parser, Subcommand};

#[derive(Parser, Debug)]
#[command(name = "ink-hn", about = "Hacker News CLI for AI agents")]
struct Args {
    #[command(subcommand)]
    command: Command,

    /// Output as JSON
    #[arg(long, global = true)]
    json: bool,
}

#[derive(Subcommand, Debug)]
enum Command {
    /// Get top stories
    Top {
        /// Number of stories to fetch
        #[arg(long, default_value = "10")]
        limit: usize,
    },
    /// Search stories
    Search {
        /// Search query
        query: String,
    },
}
For tools with local storage (todo, habit, contacts, stash), SQLite handles persistence via rusqlite. No database server, no config files to manage. The database lives at ~/.local/share/dee-toolname/data.db and everything just works.
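That path convention is easy to sketch with just the standard library. `data_path` here is a hypothetical helper for illustration, not necessarily the function the real crates use:

```rust
use std::path::PathBuf;

// Sketch: resolve a per-tool SQLite path under ~/.local/share/dee-toolname/data.db.
// `data_path` is an illustrative helper, not the actual crate code.
fn data_path(tool: &str) -> PathBuf {
    let home = std::env::var("HOME").unwrap_or_else(|_| ".".into());
    PathBuf::from(home)
        .join(".local/share")
        .join(tool)
        .join("data.db")
    // rusqlite's Connection::open(&path) would then create the file
    // on first use - no server, no config.
}

fn main() {
    let p = data_path("dee-todo");
    assert!(p.ends_with("dee-todo/data.db"));
    println!("{}", p.display());
}
```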
HTTP is either reqwest (for complex clients with retries) or ureq (for simpler one-shot requests). I pick ureq when I can - it compiles faster and the binary is smaller.
Exit codes are strict: 0 for success, 1 for tool error, 2 for usage error. Agents need reliable exit codes to know if a command succeeded. This sounds obvious but a surprising number of CLIs return 0 on failure because someone forgot to handle an error branch. In agent workflows that kills you - your orchestrator thinks the call succeeded and moves on with bad data.
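The convention can be sketched like this - the error enum and messages are illustrative, but the 0/1/2 mapping is the contract described above:

```rust
use std::process::ExitCode;

// Sketch of the strict exit-code convention: 0 success, 1 tool error,
// 2 usage error. The error type is illustrative, not the real crate code.
#[derive(Debug)]
enum CliError {
    Usage(String), // bad flags / missing args -> exit 2
    Tool(String),  // runtime failure (network, parse, ...) -> exit 1
}

fn run() -> Result<(), CliError> {
    // Real work would happen here; this sketch always succeeds.
    Ok(())
}

fn main() -> ExitCode {
    match run() {
        Ok(()) => ExitCode::from(0),
        Err(CliError::Tool(msg)) => {
            eprintln!("error: {msg}");
            ExitCode::from(1)
        }
        Err(CliError::Usage(msg)) => {
            eprintln!("usage error: {msg}");
            ExitCode::from(2)
        }
    }
}
```

The key property: there is no code path that prints an error and still returns 0, so an orchestrator can trust the exit code alone.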
Agent-first design decisions
This is the part that’s different from building a CLI for humans.
No interactive prompts. Ever. If an argument is missing, the tool errors out with a clear message. It doesn’t ask “did you mean this file? (y/n)”. Agents can’t answer interactive prompts. A tool that blocks waiting for keyboard input is broken for agent use.
Every tool has a --json flag that guarantees structured output. Without --json, tools print human-readable text. With --json, they print a JSON object or array, always to stdout, always parseable. No mixed text/JSON output. No progress bars to stdout (they go to stderr or get suppressed).
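A sketch of that dual-output rule - the `Story` type and the hand-rolled JSON are illustrative stand-ins for the real serde-based code:

```rust
// Sketch of the human/JSON dual-output rule: with --json, exactly one
// parseable JSON document goes to stdout; diagnostics go to stderr.
struct Story {
    title: String,
    points: u32,
}

fn render(stories: &[Story], json: bool) -> String {
    if json {
        // Hand-rolled JSON for the sketch; real tools would use serde.
        let items: Vec<String> = stories
            .iter()
            .map(|s| format!(r#"{{"title":"{}","points":{}}}"#, s.title, s.points))
            .collect();
        format!("[{}]", items.join(","))
    } else {
        stories
            .iter()
            .map(|s| format!("{} ({} points)", s.title, s.points))
            .collect::<Vec<_>>()
            .join("\n")
    }
}

fn main() {
    let stories = vec![Story { title: "Example".into(), points: 42 }];
    eprintln!("fetched {} stories", stories.len()); // progress -> stderr
    println!("{}", render(&stories, true)); // data -> stdout, nothing else
}
```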
Pipe support is first-class. Tools accept stdin when appropriate. You can chain them:
ink-feed parse https://hnrss.org/frontpage | ink-stash save --key hn-today
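The stdin fallback behind that chain can be sketched like this - an illustrative helper, not the actual ink-feed source:

```rust
use std::io::Read;

// Sketch of first-class stdin support: if no positional argument is
// given, read the payload from the pipe instead. `input_from` is a
// hypothetical helper for illustration.
fn input_from(arg: Option<String>) -> std::io::Result<String> {
    match arg {
        Some(value) => Ok(value),
        None => {
            let mut buf = String::new();
            std::io::stdin().read_to_string(&mut buf)?;
            Ok(buf.trim().to_string())
        }
    }
}

fn main() {
    // With an explicit argument, stdin is never touched.
    let input = input_from(Some("https://hnrss.org/frontpage".into())).unwrap();
    assert_eq!(input, "https://hnrss.org/frontpage");
    println!("parsing: {input}");
}
```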
Each tool ships an AGENT.md file in the repo. This is a short markdown document explaining to an AI agent how to use the tool effectively - what the flags do, what the output schema looks like, common patterns. When an agent needs to use a tool it hasn’t seen before, it can read AGENT.md and understand the interface without trial and error.
There’s also a FRAMEWORK.md at the repo root that defines the conventions every tool follows. Any agent that has read FRAMEWORK.md can make reasonable guesses about how any dee.ink tool works. That’s intentional. Consistency is the whole point.
What “consistent” actually means in practice
Every tool uses the same flag names for the same concepts. Pagination is always --page and --limit, never --offset or --per-page or --count. JSON mode is always --json, never --format json or --output json. Verbose mode is always -v.
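The shared contract is small enough to sketch even without clap. The hand-rolled parser below is illustrative only - the real tools get this from their clap definitions:

```rust
// Sketch of the shared flag conventions: --page, --limit, --json,
// with the same defaults everywhere. Illustrative, not crate code.
#[derive(Debug, PartialEq)]
struct Common {
    page: u32,
    limit: u32,
    json: bool,
}

fn parse(args: &[&str]) -> Common {
    let mut c = Common { page: 1, limit: 10, json: false };
    let mut it = args.iter();
    while let Some(a) = it.next() {
        match *a {
            "--page" => c.page = it.next().and_then(|v| v.parse().ok()).unwrap_or(c.page),
            "--limit" => c.limit = it.next().and_then(|v| v.parse().ok()).unwrap_or(c.limit),
            "--json" => c.json = true,
            _ => {} // unknown flags ignored in this sketch
        }
    }
    c
}

fn main() {
    let c = parse(&["--page", "2", "--limit", "25", "--json"]);
    assert_eq!(c, Common { page: 2, limit: 25, json: true });
}
```

Because every tool honors the same three names with the same semantics, an agent that learned one tool's pagination has learned all of them.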
This matters because agents build up a mental model of your toolset. If the first five tools they use have consistent interfaces, they’ll correctly predict how the sixth one works. If your flags are inconsistent, the agent has to treat each tool as a new unknown - which costs tokens and causes errors.
It’s the same principle as a good design system. The value isn’t any single component. It’s the pattern that makes everything predictable.
How open-source Rust CLI tools fit into a real agent workflow
Concretely, here’s how one of these tools actually gets used inside OpenClaw. My morning digest job runs at 7am. It pulls Hacker News top stories, recent arXiv papers in a few categories, and Reddit posts from a handful of subs. Then it summarizes and formats everything into a digest I read with coffee.
The shell side of that looks roughly like:
ink-hn top --limit 20 --json > /tmp/hn.json
ink-arxiv search "multi-agent systems" --days 7 --json > /tmp/arxiv.json
ink-reddit hot r/MachineLearning --limit 15 --json > /tmp/reddit.json
Three commands. Three JSON files. The orchestrating agent reads those files, does the summarization, formats the output. Total token cost for the data collection phase: maybe 150 tokens across all three commands. The MCP equivalent for the same three sources would be three server connections, three request envelopes, three response envelopes. Easily 10x the token overhead, plus you need three MCP servers running.
That’s not a toy example. That’s the actual flow, running every morning.
The installation experience
cargo install dee-hn
ink-hn top --limit 5 --json

That’s the whole install flow. No Docker. No Python virtual env. No npm. Cargo installs the binary, it goes in ~/.cargo/bin, and it’s available system-wide. For agent use in particular, this matters - you don’t want to manage environments when your agent needs to call a tool.
For people who want everything at once, I’m working on a meta-crate that installs the full suite, but honestly most people only need a subset. The standalone install is the right default.
You can find all 31 crates on the dee.ink site and browse the source on GitHub. Every tool is MIT licensed.
Why open source, and what I’m getting out of it
I’m not monetizing dee.ink directly. No SaaS wrapper, no premium tier. The tools are free, the code is open.
The actual return is authority and credibility. Shipping 31 production-quality Rust CLI tools is a more compelling signal than any portfolio piece I could write. Developers can read the code, use the tools, see the design decisions. That’s a much better “hire me / work with me” artifact than a case study PDF.
It also forces quality. When something is public, you think twice about the shortcuts. Every AGENT.md, every --help example, every error message is a little more considered because someone else might read it.
And honestly? The tooling gap was real. I needed these tools for OpenClaw. If they didn’t exist, I’d have built them for private use anyway. Open-sourcing them was 20% more work for a much better outcome.
What comes next
31 tools is a good start but there are obvious gaps. I’m planning tools for GitHub (issues, PRs, repo stats), calendar integration, and a few more financial data sources. The architecture makes adding tools easy - each one is genuinely independent, so adding dee-github doesn’t touch anything that already ships.
I’m also watching how people actually use these in their own agent setups. If you’re building with Claude Code, Cursor, or any other agent framework and want CLI tools that just work, this is worth checking out. If you find a gap, open an issue or PR. The whole thing is built to be extended.
You can check out the universal MCP server and follow the build log here on the blog. If you're interested in the agent workflow side - how OpenClaw actually orchestrates all of this - I'll be writing that up next. Subscribe or check back.
The tools are at dee.ink. The code is on GitHub. Install one and see if it fits your stack. And if you want to browse more posts on agent architecture and tooling, check the tags page.