Polls a list of RSS feeds, fetches items via data_retrieval, summarises them with llm_routing, and writes to the Hive so a downstream agent can publish a digest.
loop every 15 min:
    for feed in feeds:
        items = data_retrieval(feed)
        for item in items not seen before:
            summary = llm_routing("Summarise: " + item)
            hive_write("rss-summaries", summary)
const X711 = "https://x711.io/api/refuel";
const FEEDS = ["https://news.ycombinator.com/rss"];

// Every tool is one POST to the same endpoint; the tool name rides in the body.
const call = (t, b) =>
  fetch(X711, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ tool: t, ...b }),
  }).then((r) => r.json());

const seen = new Set(); // links already summarised, so each cycle handles only new items

setInterval(async () => {
  for (const url of FEEDS) {
    const fetched = await call("data_retrieval", { url });
    const xml = String(fetched.result?.body ?? "");
    // Naive <item> scrape: a <title> followed by a <link>, capped at five items.
    const items = [...xml.matchAll(
      /<item>[\s\S]*?<title>([^<]+)<\/title>[\s\S]*?<link>([^<]+)<\/link>/g
    )].slice(0, 5);
    for (const [, title, link] of items) {
      if (seen.has(link)) continue;
      seen.add(link);
      const sum = await call("llm_routing", {
        query: `Summarise in 1 sentence: ${title} (${link})`,
      });
      await call("hive_write", {
        content: `${title}: ${sum.result?.text?.trim()} — ${link}`,
        domain_tags: ["rss", new URL(url).hostname],
      });
    }
  }
}, 900_000); // 15 minutes
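The regex scrape is the most fragile step, so it is worth checking in isolation before pointing it at a live feed. A minimal sketch: the regex is the one from the pipeline, and the sample XML is invented for the test.

```javascript
// Same <item> regex as the pipeline; sample feed below is made up.
const RE = /<item>[\s\S]*?<title>([^<]+)<\/title>[\s\S]*?<link>([^<]+)<\/link>/g;

const sample = `
<rss><channel>
  <item><title>First post</title><link>https://example.com/1</link></item>
  <item><title>Second post</title><link>https://example.com/2</link></item>
</channel></rss>`;

// matchAll yields one match per <item>; groups 1 and 2 are title and link.
const items = [...sample.matchAll(RE)].map(([, title, link]) => ({ title, link }));
console.log(items);
```

Real-world feeds wrap titles in CDATA or reorder elements, so treat this as a smoke test, not a parser.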
Free tier: 10 calls/day per IP, no key. Need more? Get an API key in one curl.
Skip building a feed parser, an LLM gateway, and a memory backend: the whole pipeline is three x711 calls.