How Many Subnets Can You Handle? Onicai’s LLM Stress Test is Heating Up

There’s no flashing banner, no countdown, and no launch hype. Just a quiet post from the Onicai team letting the Internet Computer community know they’re steadily increasing the number of funnAI mAIners in their ongoing test phase. But beneath that calm exterior is a series of stress tests pushing on-chain large language models further than most imagined feasible.

Onicai has been running its tests across four subnets on ICP so far. The aim is clear: see how much load those subnets can handle when running LLMs natively on-chain—no off-chain relay, no third-party compute, no clever shortcut. Just pure, raw ICP infrastructure being asked to carry the weight of serious, stateful inference.

The choice to begin with four subnets isn’t arbitrary. It’s a significant step in distributing processing and bandwidth load, especially when dealing with LLMs, which tend to be resource-hungry. This isn’t about single-message inference or offloading complex parts of the task elsewhere. The team’s long-term goal has always been to build fully on-chain intelligence tools, which means the infrastructure needs to show it can keep up when things scale.

And scale it will. The next target is 10 or more subnets. That number may not sound big in isolation, but on ICP, each subnet represents a distinct and secure zone capable of running smart contracts at web speed. Getting ten of them humming in sync under a live, test-heavy load is no small task.

The stress testing itself involves loading these subnets with parallel prompts, token-generation requests, and inference chains: some designed to mimic real-world product usage, others meant purely to push the system to its breaking point. That's where the term "funnAI mAIners" comes in. The name is a play on words: these miners aren't traditional proof-of-work units but actor canisters that simulate user behaviour and trigger AI tasks under varied network conditions.
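Onicai hasn't published its test harness, so the following is only a minimal, hypothetical sketch of what a fleet of simulated miners might look like: each one fires a batch of prompts concurrently and records per-request latency. The stubbed `fake_inference` call stands in for a real canister query, and all names here are illustrative, not Onicai's.

```python
import asyncio
import random
import time

async def fake_inference(prompt: str) -> str:
    # Stand-in for an on-chain LLM call; a real miner would query a canister.
    await asyncio.sleep(random.uniform(0.001, 0.005))
    return f"reply:{prompt}"

async def miner(miner_id: int, prompts: list, results: list) -> None:
    # One simulated "funnAI mAIner": sends prompts and records latencies.
    for prompt in prompts:
        start = time.perf_counter()
        reply = await fake_inference(prompt)
        results.append((miner_id, prompt, reply, time.perf_counter() - start))

async def run_load_test(num_miners: int, prompts_per_miner: int) -> list:
    # Launch every miner concurrently, as a stress test would.
    results: list = []
    tasks = [
        miner(i, [f"m{i}-p{j}" for j in range(prompts_per_miner)], results)
        for i in range(num_miners)
    ]
    await asyncio.gather(*tasks)
    return results

results = asyncio.run(run_load_test(num_miners=8, prompts_per_miner=5))
```

Scaling `num_miners` up while watching the recorded latencies is the basic shape of the experiment described above, just compressed into a toy.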

By monitoring throughput, latency, and subnet-level failure handling, Onicai is collecting critical data that will feed into how it designs routing, load balancing, and memory management for the upcoming public version of its on-chain LLM products.
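The kind of per-run summary described above, throughput plus latency percentiles, can be sketched in a few lines. The function name and output fields here are hypothetical, not taken from Onicai's tooling.

```python
def summarise(latencies_ms: list, window_s: float) -> dict:
    """Summarise one load-test run: throughput and latency percentiles."""
    latencies = sorted(latencies_ms)
    n = len(latencies)

    def pct(q: float) -> float:
        # Nearest-rank percentile; good enough for a quick test dashboard.
        return latencies[min(n - 1, int(q * n))]

    return {
        "throughput_rps": n / window_s,  # completed requests per second
        "p50_ms": pct(0.50),
        "p95_ms": pct(0.95),
        "max_ms": latencies[-1],
    }
```

Tracking p95 and max alongside the median is what surfaces subnet-level degradation: averages stay flat long after tail latencies start to blow out.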

There’s been a growing curiosity across crypto and Web3 circles about where real on-chain AI will emerge. Most AI + blockchain projects rely heavily on off-chain inference, returning results via oracles or data bridges. Onicai’s route is different. It leans entirely on the ICP stack, trusting that the platform can meet the demanding requirements of large-scale AI workloads.

If the current tests are anything to go by, that trust is being rewarded. Reports from team members suggest that the four-subnet phase is producing encouraging results. The system’s ability to process concurrent LLM queries without noticeable degradation in performance has exceeded initial expectations.

What sets this apart from typical “testnet drama” is how little noise there is around it. There are no inflated promises about token economics or AI disrupting everything. Just solid updates, technical clarity, and consistent, visible progress. That’s earned Onicai a growing reputation as one of the more grounded and technically capable projects in the ecosystem.

The decision to move towards ten subnets next is strategic. It not only expands capacity but also tests how well canisters coordinate at scale. This will help refine how Onicai's models are split, distributed, and synchronised, a critical piece of the puzzle for anyone trying to run heavy computation on decentralised infrastructure.

And the choice of platform matters. The Internet Computer’s architecture, with its native canister model and fast finality, allows for a level of interactivity that’s difficult to replicate elsewhere. Onicai is exploiting that design space fully—testing not just whether LLMs can run on-chain, but whether they can do so responsively, securely, and at the edge of available compute.

It’s also refreshing to see a project treat test scaling like what it is—a hard, technical milestone. There’s no need to brand it as a revolution. The simple act of pushing subnets harder, watching the metrics, and reporting honestly already speaks volumes.

For developers, this update might be the nudge to take a second look at what’s happening under the hood of the Internet Computer. While other chains wrestle with external inference layers or wait for modular upgrades, here’s a project already running load tests with real LLMs across multiple coordinated subnets.

This isn’t just about pushing limits for the sake of it. These subnet tests are building the bedrock for products that could soon run AI interactions, data generation, summarisation, and task automation—all without ever leaving the chain.

For the curious, this means two things. First, it’s now technically feasible to run sophisticated AI models entirely on a decentralised platform. Second, the infrastructure needed to do it is being quietly mapped out—not promised, but tested, measured, and improved with each cycle.

It’s the kind of work that flies under the radar until, suddenly, it doesn’t. When the public version lands, users may notice how fast it runs, how private it feels, how seamlessly it integrates with other on-chain tools. What they may not see is all the subnet juggling, stress balancing, and prompt load testing that happened quietly in the background.

And that’s okay. Because the best kind of infrastructure doesn’t ask for your attention—it just works. Onicai’s update this week may have been brief, but it signals something bigger: serious, on-chain AI is moving past the experiment stage. One subnet at a time.


Dear Reader,

Ledger Life is an independent platform dedicated to covering the Internet Computer (ICP) ecosystem and beyond. We focus on real stories, builder updates, project launches, and the quiet innovations that often get missed.

We’re not backed by sponsors. We rely on readers like you.

If you find value in what we publish—whether it’s deep dives into dApps, explainers on decentralised tech, or just keeping track of what’s moving in Web3—please consider making a donation. It helps us cover costs, stay consistent, and remain truly independent.

Your support goes a long way.

🧠 ICP Principal: ins6i-d53ug-zxmgh-qvum3-r3pvl-ufcvu-bdyon-ovzdy-d26k3-lgq2v-3qe

🧾 ICP Address: f8deb966878f8b83204b251d5d799e0345ea72b8e62e8cf9da8d8830e1b3b05f

🪙 BTC Wallet: bc1pp5kuez9r2atdmrp4jmu6fxersny4uhnaxyrxau4dg7365je8sy2q9zff6p

Every contribution helps keep the lights on, the stories flowing, and the crypto clutter out.

Thank you for reading, sharing, and being part of this experiment in decentralised media.
—Team Ledger Life

Maria Irene (ledgerlife.io)
Maria Irene is a multi-faceted journalist with a focus on various domains including Cryptocurrency, NFTs, Real Estate, Energy, and Macroeconomics. With over a year of experience, she has produced an array of video content, news stories, and in-depth analyses. Her journalistic endeavours also involve a detailed exploration of the Australia-India partnership, pinpointing avenues for mutual collaboration. In addition to her work in journalism, Maria crafts easily digestible financial content for a specialised platform, demystifying complex economic theories for the layperson. She holds a strong belief that journalism should go beyond mere reporting; it should instigate meaningful discussions and effect change by spotlighting vital global issues. Committed to enriching public discourse, Maria aims to keep her audience not just well-informed, but also actively engaged across various platforms, encouraging them to partake in crucial global conversations.
