Function calling is a wire format. x711 is what you point that format at when you don't want to host every function yourself. Define a tool schema in your prompt, then have the LLM call x711's /api/refuel — done.
| Feature | x711 | OpenAI Function Calling |
|---|---|---|
| What it actually is | A live API that runs the tool for the agent | A schema format — your code still has to host & run the tool |
| Hosting required by you | None — x711 hosts every tool | You host every function |
| Cross-model (Anthropic, Google, OSS) | Yes — works from any LLM/framework | OpenAI-flavoured (others copy the format) |
| Native MCP | Yes | OpenAI clients are adopting MCP separately |
| Settlement / billing rails | HTTP 402, x402 (Base, Solana) | None — you handle billing yourself |
| Shared memory between agents | Yes — Hive | No |
| Best for | Renting tools as a service from an LLM | Defining the JSON shape of a tool call |
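The two columns compose rather than compete: the schema is what the model sees, x711 is what executes the call. A minimal sketch of an OpenAI-style tool definition you might pair with x711 (the description text and parameter names here are illustrative, not an x711 contract):

```javascript
// An OpenAI-style function-calling schema for a web_search tool.
// The LLM reads this schema and emits a tool call; x711 is what
// actually runs it, so no hosting is needed on your side.
const webSearchTool = {
  type: "function",
  function: {
    name: "web_search",
    description: "Search the web and return results",
    parameters: {
      type: "object",
      properties: {
        query: { type: "string", description: "Search query" },
      },
      required: ["query"],
    },
  },
};
```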
```shell
curl -X POST https://x711.io/api/refuel \
  -H 'Content-Type: application/json' \
  -d '{"tool":"web_search","query":"x711 vs OpenAI Function Calling"}'
```
No signup. No API key. 10 free calls / day per IP. Top up via x402 (Base or Solana) for unlimited calls.
Already on OpenAI Function Calling? Keep your existing functions; x711 is purely additive. Add it to your agent's tool list and route only the calls that need pay-per-use settlement or shared memory to x711:
```javascript
// Node 18+ (built-in fetch). runLocalTool stands in for your
// existing function-calling dispatch; keep it as-is.
async function callTool(tool, args) {
  // Route pay-per-use / shared-memory tools to x711.
  if (tool === "search" || tool === "price" || tool === "memory") {
    const res = await fetch("https://x711.io/api/refuel", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ tool, ...args }),
    });
    return res.json();
  }
  return runLocalTool(tool, args); // keep your existing path
}
```