A new subnet focused on long-context language models is set to go live on Bittensor, with the launch of Quasar drawing attention from across the decentralised AI community. Developed by SILX AI and crowdfunded through Bitstarter, Quasar is designed to address a persistent challenge in modern large language models: maintaining memory and coherence as context length grows.
Rather than treating long context as a surface-level improvement, the team behind Quasar frames it as a structural issue. Many current models struggle as sequences expand: standard self-attention compares every token with every other, so compute and memory costs grow quadratically with context length while recall of distant information degrades. Quasar takes a different approach, replacing quadratic attention with linear-complexity memory mechanisms that aim to scale without runaway costs or sharp drops in accuracy. The goal is to allow models to reason across entire books, large codebases, archives or long-running agent states while preserving structure and signal.
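The project has not published its exact mechanism, but the general trade-off it targets can be sketched. The toy comparison below is an illustration under assumptions, not Quasar's implementation: it contrasts standard softmax attention, which materialises an N-by-N score matrix, with a feature-map linear attention in the style of Katharopoulos et al. (2020), which carries only a fixed-size running state, so per-token cost stays constant as the context grows.

```python
# Illustrative sketch only: Quasar's actual mechanism is not public.
# Contrasts O(N^2) softmax attention with a generic O(N) linear-attention
# recurrence. All shapes and the feature map here are assumptions.
import numpy as np

def softmax_attention(Q, K, V):
    """Standard causal attention: the (N, N) score matrix is the quadratic cost."""
    N = Q.shape[0]
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # (N, N) -- grows with N^2
    causal = np.tril(np.ones((N, N), dtype=bool))
    scores = np.where(causal, scores, -np.inf)       # mask out future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, eps=1e-6):
    """Linear-complexity variant: a running (d, d_v) state replaces the
    (N, N) matrix, so memory stays constant as context length grows."""
    phi = lambda x: np.maximum(x, 0.0) + 1.0         # simple positive feature map
    Qf, Kf = phi(Q), phi(K)
    d, dv = Q.shape[-1], V.shape[-1]
    S = np.zeros((d, dv))                            # running sum of phi(k) v^T
    z = np.zeros(d)                                  # running sum of phi(k)
    out = np.empty_like(V)
    for t in range(Q.shape[0]):                      # single pass over the sequence
        S += np.outer(Kf[t], V[t])
        z += Kf[t]
        out[t] = (Qf[t] @ S) / (Qf[t] @ z + eps)     # normalised readout
    return out

rng = np.random.default_rng(0)
N, d = 1024, 64                                      # sequence length, head dim
Q, K, V = (rng.standard_normal((N, d)) * 0.1 for _ in range(3))
print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)
```

The key property is visible in the loop: doubling the sequence length doubles the work and leaves the state size untouched, whereas the softmax path's score matrix quadruples.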
Quasar will operate on Bittensor as a specialised evaluation subnet. Within this setup, miners are rewarded for genuine advances in recall, coherence, positional understanding and efficiency, rather than for optimisations that game benchmarks. That focus aligns with Bittensor’s broader model, where specialised AI capabilities are treated as commodities and incentivised through the TAO token.
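The article names the evaluation axes but not how they are combined. Purely as a hypothetical illustration of an anti-gaming reward design, the sketch below uses a weighted geometric mean, so a miner cannot mask a failing axis by overfitting another; every metric name and weight is invented for the example.

```python
# Hypothetical scoring sketch: Bittensor validators define subnet reward
# functions, but Quasar's real metrics and weights are not public.
def miner_score(metrics: dict[str, float]) -> float:
    """Combine per-axis scores (each normalised to [0, 1]) into one reward."""
    weights = {
        "recall": 0.35,       # retrieving facts buried deep in the context
        "coherence": 0.25,    # reasoning staying consistent across the sequence
        "positional": 0.20,   # accuracy regardless of where information sits
        "efficiency": 0.20,   # throughput and memory at long context lengths
    }
    # Geometric combination: any near-zero axis drags the whole score down,
    # which blunts single-benchmark gaming.
    score = 1.0
    for axis, w in weights.items():
        score *= max(metrics.get(axis, 0.0), 1e-6) ** w
    return score

print(miner_score({"recall": 0.9, "coherence": 0.8,
                   "positional": 0.7, "efficiency": 0.6}))
```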
The launch event brings together figures whose backgrounds reflect the project’s ambitions. Among the guests is Steffen Cruz, CTO and co-founder of MacrocosmosAI, whose academic work in subatomic physics centred on detecting rare signals under extreme conditions. The comparison is not lost on the organisers, who see parallels between that research and the challenge of maintaining meaningful information as complexity increases inside large models.
Quasar’s founders, Troy Quasar and Farahatyoussef, have positioned the subnet as infrastructure rather than a feature add-on. In their view, long-term memory underpins the next generation of AI systems, particularly persistent agents that need to accumulate and reason over experience rather than reset context with every interaction.
There is cautious optimism around the launch. Long-context claims have become common across the AI sector, and real-world performance often lags behind theory. How Quasar performs under open, competitive conditions on Bittensor will be closely watched, especially as miners experiment with different approaches to memory and reasoning.
For now, the Quasar subnet adds another focused experiment to Bittensor’s growing network. Its emphasis on memory as a core constraint, rather than a cosmetic upgrade, places it firmly within ongoing debates about what it will take for large language models to move from impressive demonstrations to durable, reliable systems.