Shrinking AI memory improves LLM accuracy

by Sophie Jenkins
London, UK (SPX) Dec 26, 2025

Researchers have developed a new way to compress the memory used by AI models, which can either increase their accuracy on complex tasks or reduce the energy needed to run them.

Experts from the University of Edinburgh and NVIDIA found that large language models using a memory eight times smaller than that of an uncompressed system scored better on maths, science, and coding tests while spending the same amount of time reasoning. The method can also be configured so that models respond to more user queries simultaneously, lowering the power required per task.

The approach focuses on the models' key-value cache, or KV cache, which stores segments of step-by-step reasoning sequences known as reasoning threads. As models generate more threads or extend existing ones, the KV cache grows and becomes slower to access, creating a bottleneck during inference, when the system answers prompts.
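
To make that bottleneck concrete, the sketch below (a toy in Python, with hypothetical names; it is not taken from the paper) shows how a per-head KV cache grows by one entry for every generated token and must be scanned in full at each decoding step.

```python
import numpy as np

# Illustrative toy only: a KV cache for a single attention head.
# Real systems keep per-layer, per-head tensors on the GPU.
class ToyKVCache:
    def __init__(self, head_dim):
        self.keys = np.empty((0, head_dim))    # one row per generated token
        self.values = np.empty((0, head_dim))

    def append(self, k, v):
        # Each decoding step adds one key/value row, so memory use grows
        # linearly with the length of every reasoning thread.
        self.keys = np.vstack([self.keys, k])
        self.values = np.vstack([self.values, v])

    def attend(self, query):
        # Attention must scan the entire cache at every step; this scan
        # is the inference-time bottleneck described above.
        scores = self.keys @ query / np.sqrt(len(query))
        weights = np.exp(scores - scores.max())
        return (weights / weights.sum()) @ self.values
```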

To address this, the team developed Dynamic Memory Sparsification (DMS), a technique that compresses the KV cache by deciding which tokens to retain and which to delete. Instead of keeping every token, DMS selects those judged most important so the model keeps useful context while reducing memory use.
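
DMS learns its eviction decisions during training; the toy function below only gestures at the idea, standing a hypothetical per-token importance score (for example, accumulated attention weight) in for that learned signal. The keep ratio of 0.125 mirrors the article's eight-fold compression figure.

```python
import numpy as np

def sparsify_kv_cache(keys, values, importance, keep_ratio=0.125):
    """Retain only the tokens judged most important.

    `importance` is a hypothetical stand-in for the learned eviction
    signal in DMS; keep_ratio=0.125 corresponds to a memory footprint
    eight times smaller than the uncompressed cache.
    """
    n_keep = max(1, int(len(importance) * keep_ratio))
    # Indices of the highest-scoring tokens, restored to sequence order
    # so the surviving context still reads left to right.
    keep = np.sort(np.argsort(importance)[-n_keep:])
    return keys[keep], values[keep]
```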

There is a short delay between deciding to delete tokens and actually removing them, which gives the model time to transfer valuable information from tokens that will be evicted into those that remain. By managing token eviction in this way, DMS allows the AI model to explore more possible solutions or reason in greater depth without extra compute.
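
A rough sketch of that grace period, again with hypothetical names rather than the authors' implementation: a token marked for deletion stays readable for a few more decoding steps before its slot is actually freed.

```python
from collections import deque

class DelayedEviction:
    """Toy model of delayed eviction (illustrative only).

    A token marked for removal survives for `delay` further decoding
    steps, during which attention can still read it and fold its
    information into tokens that will remain in the cache.
    """
    def __init__(self, delay=4):
        self.delay = delay
        self.pending = deque()  # (step_at_which_to_evict, token_index)

    def mark(self, current_step, token_index):
        self.pending.append((current_step + self.delay, token_index))

    def collect_due(self, current_step):
        # Return indices whose grace period has expired; only these are
        # physically removed from the KV cache.
        due = []
        while self.pending and self.pending[0][0] <= current_step:
            due.append(self.pending.popleft()[1])
        return due
```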

The researchers tested DMS on different versions of the Llama and Qwen model families and compared their performance with non-compressed baselines. Even when memory was compressed to one eighth of its original size, large language models maintained their accuracy on difficult tasks and produced results faster than non-compressed systems.

In the AIME 24 mathematics test, which serves as a qualifier for the United States Mathematical Olympiad, compressed models performed twelve points better on average while using the same number of KV cache reads per answer. On GPQA Diamond, a set of complex questions in biology, chemistry, and physics authored by PhD-level experts, the compressed models scored more than eight points higher.

The models were also evaluated with LiveCodeBench, which measures how well AI systems write code. In these tests, compressed models scored about ten points better on average than non-compressed models, indicating that KV cache compression can preserve, and even enhance, reasoning quality while operating within much smaller memory budgets.

The findings were peer reviewed and presented at the NeurIPS 2025 conference. The paper, titled "Inference-Time Hyper-Scaling with KV Cache Compression," is available at https://openreview.net/pdf?id=8ZiElzQxf1.

Dr Edoardo Ponti, GAIL Fellow and Lecturer in Natural Language Processing at the University's School of Informatics, said: "In a nutshell, our models can reason faster but with the same quality. Hence, for an equivalent time budget for reasoning, they can explore more and longer reasoning threads. This improves their ability to solve complex problems in maths, science, and coding."

Dr Ponti and his team will continue to study how large AI systems represent and remember information as part of AToM-FM, a 1.5 million euro project funded by the European Research Council that aims to make such systems more efficient and sustainable.

Research Report: Inference-Time Hyper-Scaling with KV Cache Compression

Related Links
University of Edinburgh
Space Technology News - Applications and Research
