At World Computer Day in Davos, top AI researchers from Stanford, ETH Zurich, and Google shared insights on the current state and future of artificial intelligence. The session, moderated by Mike Butcher, highlighted the intersection of neuroscience, large language models, and applied AI, offering a window into how experts are approaching the challenge of creating more adaptable and trustworthy systems.
Professor Andreas Tolias of Stanford focused on understanding the human brain as a guide for AI development. He explained that the brain remains the only known system capable of general intelligence, able to adapt quickly to new situations and learn from very few examples. “We are trying to identify principles that allow the brain to generalise outside of what it has seen before,” he said, noting that current AI systems still rely heavily on patterns learned from large datasets and often struggle in unfamiliar environments. Tolias described recent neuroscience advances that allow researchers to collect detailed neural data, which could be used to train AI models that learn more efficiently and behave more reliably in real-world situations.
Dr Alexander Ilic, co-founder and executive director of the ETH AI Centre, highlighted the importance of cross-disciplinary approaches. He suggested that much of today’s AI, including large language models and agent-based systems, is a scaled-up version of research developed years ago. “There is still a vast portion of AI potential that remains untapped,” he said. Ilic also emphasised aligning AI behaviour with human intent and building systems that anticipate needs rather than merely respond to prompts. He noted that AI is still largely bound to screens and lacks the speed and flexibility of human learning, and that progress will require both smarter architectures and novel approaches to interaction.
James Rubin, lead product manager at Google’s Gemini Applied Research team, described the growing challenge of the attention economy. With content production becoming almost costless, human attention has become a scarce resource. Rubin explained that AI’s role is increasingly about delivering the right information at the right time to enhance productivity rather than simply generating more data. He also outlined how his team works to connect foundational AI research with the practical demands of users across consumer products and enterprise solutions.
The discussion covered what it means to “decode intelligence.” Tolias described the goal as moving AI beyond pattern matching to reasoning that generalises across domains. Rubin and Ilic highlighted the technical challenges that remain, such as catastrophic forgetting, where models lose previously acquired knowledge, and the difficulty of translating insights from neuroscience into AI architectures that operate reliably in dynamic environments. They agreed that bridging these gaps is key to building AI systems that can operate autonomously while remaining trustworthy and aligned with human goals.
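To make the catastrophic-forgetting problem mentioned above concrete, here is a minimal illustrative sketch, not drawn from the panel itself: a toy classifier trained sequentially on two synthetic tasks, where the data, task design, and hyperparameters are all assumptions chosen purely to make the effect visible in a few lines.

```python
# Illustrative sketch only: a toy demonstration of catastrophic forgetting
# with a logistic-regression classifier trained sequentially on two
# synthetic tasks. All data and hyperparameters here are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_blobs(center0, center1, n=200):
    """Two Gaussian blobs: points near center1 get label 1, near center0 label 0."""
    X = np.vstack([
        rng.normal(loc=center0, scale=0.5, size=(n, 2)),
        rng.normal(loc=center1, scale=0.5, size=(n, 2)),
    ])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def train(w, b, X, y, lr=0.1, epochs=500):
    """Plain gradient descent on the logistic loss, starting from (w, b)."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

# The two toy tasks need opposite signs on the same shared weight,
# so training on task B overwrites what was learned for task A.
XA, yA = make_blobs(center0=[-3.0, 0.0], center1=[-1.0, 0.0])  # task A
XB, yB = make_blobs(center0=[3.0, 0.0], center1=[1.0, 0.0])    # task B

w, b = np.zeros(2), 0.0
w, b = train(w, b, XA, yA)
print(f"after task A: accuracy on A = {accuracy(w, b, XA, yA):.2f}")

w, b = train(w, b, XB, yB)  # continue training on B only
print(f"after task B: accuracy on A = {accuracy(w, b, XA, yA):.2f}, "
      f"accuracy on B = {accuracy(w, b, XB, yB):.2f}")
```

The train-on-A, evaluate-on-A, train-on-B, re-evaluate-on-A pattern shown here is how forgetting is typically measured in continual-learning research: the drop in task A accuracy after training on task B is the quantity that adaptive systems of the kind the panel described would need to keep small.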
Emerging directions in AI research were a key focus. The researchers highlighted the potential of using neural data to improve learning efficiency, developing AI that can continuously adapt without losing prior knowledge, and combining human-like adaptability with computational scale. They also discussed the growing importance of collaborations between academia and industry. Large-scale projects, such as Stanford’s Enigma project, are collecting unprecedented volumes of neural data, which can inform both basic research and practical AI applications. Partnerships with industry can provide the computational power needed to test these insights at scale and accelerate real-world adoption.
Applications of these research directions extend beyond typical AI tasks. The panel pointed to areas such as robotics, scientific experimentation, and agent-based systems as opportunities where AI can augment human capabilities, optimise processes, and explore new strategies. Rubin noted that AI could interact with digital twins of complex systems, enabling automated experimentation and feedback loops that accelerate discovery. Ilic added that aligning AI with human intent, especially in real-time systems, will be essential for next-generation personal devices, including smartphones and domestic robotics.
While acknowledging progress in AI, the experts stressed the complexity of replicating the brain’s efficiency and adaptability. Tolias pointed to neuroscience breakthroughs that provide high-resolution data on networks of biological neurons, while Ilic and Rubin noted that improvements in model architectures, inference efficiency, and multimodal learning will be crucial in the coming years. The panel agreed that AI development will require both rigorous scientific understanding and practical experimentation to achieve systems capable of general intelligence.
The session at Davos offered a clear-eyed view of AI’s near-term achievements and long-term challenges. By combining insights from neuroscience, large-scale modelling, and applied research, the field is exploring a path toward AI that is more flexible, reliable, and capable of meaningful interaction with humans, while maintaining rigorous attention to safety and alignment.
Dear Reader,
Ledger Life is an independent platform dedicated to covering the Internet Computer (ICP) ecosystem and beyond. We focus on real stories, builder updates, project launches, and the quiet innovations that often get missed.
We’re not backed by sponsors. We rely on readers like you.
If you find value in what we publish—whether it’s deep dives into dApps, explainers on decentralised tech, or just keeping track of what’s moving in Web3—please consider making a donation. It helps us cover costs, stay consistent, and remain truly independent.
Your support goes a long way.
🧠 ICP Principal: ins6i-d53ug-zxmgh-qvum3-r3pvl-ufcvu-bdyon-ovzdy-d26k3-lgq2v-3qe
🧾 ICP Address: f8deb966878f8b83204b251d5d799e0345ea72b8e62e8cf9da8d8830e1b3b05f
Every contribution helps keep the lights on, the stories flowing, and the crypto clutter out.
Thank you for reading, sharing, and being part of this experiment in decentralised media.
—Team Ledger Life