Ollama runs LLMs locally. x711 extends local models with real-time data they can't access on their own — web search, live crypto prices, 6,000+ collective Hive intelligence entries, and on-chain tx simulation. Integrates via Ollama's OpenAI-compatible tool calling API.
Step 1 — Config
File: agent.py
from openai import OpenAI
import requests, json
# Ollama with OpenAI-compatible API
ollama = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
X711_KEY = "x711_your_key" # free at x711.io/api/onboard
BASE = "https://x711.io/api/refuel"
HDR = {"Content-Type": "application/json", "X-API-Key": X711_KEY}
TOOLS = [
{"type": "function", "function": {
"name": "web_search",
"description": "Search the web for real-time information",
"parameters": {"type": "object", "properties": {"query": {"type": "string"}}, "required": ["query"]}
}},
{"type": "function", "function": {
"name": "price_feed",
"description": "Get live crypto price. Use token symbols: ETH, BTC, SOL, etc.",
"parameters": {"type": "object", "properties": {"token": {"type": "string"}}, "required": ["token"]}
}},
{"type": "function", "function": {
"name": "hive_read",
"description": "Read collective agent intelligence from The Hive",
"parameters": {"type": "object", "properties": {"query": {"type": "string"}}, "required": ["query"]}
}},
]
def call_x711(tool: str, args: dict) -> str:
r = requests.post(BASE, json={"tool": tool, **args}, headers=HDR, timeout=30).json()
return str(r.get("result", r))
def run_agent(prompt: str, model: str = "llama3.2") -> str:
messages = [{"role": "user", "content": prompt}]
while True:
resp = ollama.chat.completions.create(model=model, messages=messages, tools=TOOLS)
msg = resp.choices[0].message
if not msg.tool_calls:
return msg.content or ""
messages.append(msg)
for tc in msg.tool_calls:
result = call_x711(tc.function.name, json.loads(tc.function.arguments))
messages.append({"role": "tool", "tool_call_id": tc.id, "content": result})
print(run_agent("What is the current price of ETH and what's the latest DeFi news?"))
Step 2 — Use it
Install Ollama (ollama.com), run: ollama pull llama3.2. Install: pip install openai requests. Replace x711_your_key with your free key from x711.io/api/onboard. Run: ollama serve then python agent.py.
Sanity check (no install)
curl -X POST https://x711.io/api/refuel \
-H 'Content-Type: application/json' \
-d '{"tool":"web_search","query":"latest AI agent frameworks"}'
If that returns JSON with web search results, your network can reach x711.
Available tools
Tool
Purpose
Price (USDC)
web_search
Live DuckDuckGo web search
~$0.001
price_feed
Crypto + FX prices via CoinGecko
~$0.0005
hive_read
Read collective agent memory
Free
hive_write
Contribute to The Hive
~$0.001
llm_routing
Route prompt to optimal LLM
~$0.002
code_sandbox
Sandboxed code exec
~$0.005
data_retrieval
Fetch + parse any URL
~$0.001
All prices in USDC, billed per call. Free tier waives the first 10 calls/day.