Experts from the University of Edinburgh and NVIDIA found that large language models using a reasoning memory one eighth the size of an uncompressed system scored better on maths, science, and coding tests while spending the same amount of time reasoning. The method can also be configured so that models respond to more user queries simultaneously, lowering the power required per task.
The approach focuses on the models' key-value cache, or KV cache, which stores the intermediate tokens of step-by-step reasoning sequences known as reasoning threads. As models generate more threads or extend existing ones, the KV cache grows and becomes slower to access, creating a bottleneck during inference, the phase in which the system answers prompts.
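To see why the cache becomes a bottleneck, consider a rough back-of-the-envelope calculation of its size. The sketch below is illustrative only: the layer count, head count, head dimension, and fp16 storage are generic assumptions, not the configurations of the models used in the study.

```python
# Back-of-the-envelope KV cache size for a generic transformer.
# All dimensions below are illustrative assumptions, not the models
# evaluated in the paper.

def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_value: int = 2) -> int:
    """Memory needed to cache keys and values for one sequence."""
    # 2x for keys and values, stored per layer, per head, per cached token.
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_value

# A hypothetical 8B-class model: 32 layers, 8 KV heads, head_dim 128, fp16.
for tokens in (4_096, 32_768):
    gib = kv_cache_bytes(32, 8, 128, tokens) / 2**30
    print(f"{tokens:>6} cached tokens -> {gib:.2f} GiB per sequence")
```

Because the cache grows linearly with every token in every reasoning thread, generating more threads, or longer ones, quickly multiplies the memory that each answer ties up.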
To address this, the team developed Dynamic Memory Sparsification (DMS), a technique that compresses the KV cache by deciding which tokens to retain and which to delete. Instead of keeping every token, DMS selects those judged most important so the model keeps useful context while reducing memory use.
There is a short delay between deciding to delete tokens and actually removing them, which gives the model time to transfer valuable information from tokens that will be evicted into those that remain. By managing token eviction in this way, DMS allows the AI model to explore more possible solutions or reason in greater depth without extra compute.
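The delayed-eviction idea can be illustrated with a toy data structure. The sketch below is a highly simplified assumption-laden stand-in: the importance score, the fixed budget, and all names are hypothetical, whereas DMS itself learns which tokens to evict; only the general pattern of marking a token now and removing it a few steps later reflects the description above.

```python
# Toy sketch of delayed eviction in a KV cache. A token marked for
# removal stays resident for `delay` further steps before it is dropped,
# so the model can still attend to it (and fold its information into the
# surviving context) in the meantime. The scoring rule and budget are
# illustrative placeholders for DMS's learned eviction decisions.

from dataclasses import dataclass, field

@dataclass
class CachedToken:
    key: list                        # placeholder for the key vector
    value: list                      # placeholder for the value vector
    score: float                     # illustrative importance estimate
    evict_at: int | None = None      # step at which the token is removed

@dataclass
class SparsifiedKVCache:
    budget: int                      # max tokens kept long-term
    delay: int                       # steps between marking and removal
    step: int = 0
    tokens: list = field(default_factory=list)

    def append(self, key, value, score) -> None:
        self.step += 1
        self.tokens.append(CachedToken(key, value, score))
        # Drop tokens whose grace period has expired.
        self.tokens = [t for t in self.tokens
                       if t.evict_at is None or t.evict_at > self.step]
        # If over budget, mark the lowest-scoring unmarked tokens for
        # future eviction instead of removing them immediately.
        unmarked = [t for t in self.tokens if t.evict_at is None]
        excess = len(unmarked) - self.budget
        for t in sorted(unmarked, key=lambda t: t.score)[:max(excess, 0)]:
            t.evict_at = self.step + self.delay
```

The design choice being illustrated is the grace period: because marked tokens remain readable for a few more steps, the model has a window in which to carry their useful content forward before the memory is reclaimed.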
The researchers tested DMS on different versions of the Llama and Qwen model families and compared their performance with non-compressed baselines. Even when memory was compressed to one eighth of its original size, large language models maintained their accuracy on difficult tasks and produced results faster than non-compressed systems.
In the AIME 24 mathematics test, which serves as a qualifier for the United States Mathematical Olympiad, compressed models performed twelve points better on average while using the same number of KV cache reads per answer. On GPQA Diamond, a set of complex questions in biology, chemistry, and physics authored by PhD-level experts, the compressed models scored more than eight points higher.
The models were also evaluated with LiveCode Bench, which measures how well AI systems write code. In these tests, compressed models scored about ten points better on average than non-compressed models, indicating that KV cache compression can preserve and enhance reasoning quality while operating with much smaller memory budgets.
The findings were peer reviewed and presented at the NeurIPS 2025 conference. The paper, titled "Inference-Time Hyper-Scaling with KV Cache Compression," is available at https://openreview.net/pdf?id=8ZiElzQxf1.
Dr Edoardo Ponti, GAIL Fellow and Lecturer in Natural Language Processing at the University's School of Informatics, said: "In a nutshell, our models can reason faster but with the same quality. Hence, for an equivalent time budget for reasoning, they can explore more and longer reasoning threads. This improves their ability to solve complex problems in maths, science, and coding."
Dr Ponti and his team will continue to study how large AI systems represent and remember information as part of AToM-FM, a €1.5 million project funded by the European Research Council that aims to make such systems more efficient and sustainable.
Research Report: Inference-Time Hyper-Scaling with KV Cache Compression
Related Links
University of Edinburgh