Can Large Language Models Keep Up? Benchmarking Online Adaptation to Continual Knowledge Streams
Abstract
The Online Adaptation to Continual Knowledge Streams (OAKS) benchmark evaluates how well language models adapt in real time to dynamically changing information, using streaming datasets with evolving facts.
LLMs operating in dynamic real-world contexts often encounter knowledge that evolves continuously or emerges incrementally. To remain accurate and effective, models must adapt to newly arriving information on the fly. We introduce Online Adaptation to Continual Knowledge Streams (OAKS) to evaluate this capability, establishing a benchmark for online adaptation over streaming, continually updating knowledge. The benchmark is structured as a sequence of fine-grained context chunks in which facts change dynamically across time intervals. OAKS comprises two datasets, OAKS-BABI and OAKS-Novel, in which individual facts evolve multiple times across context chunks. Both datasets include dense annotations for measuring whether models track changes accurately. Evaluating 14 models with varied inference approaches, we observe significant limitations in current methodologies: both state-of-the-art models and agentic memory systems fail to adapt robustly on OAKS, exhibiting delays in state tracking and susceptibility to distraction within streaming environments.
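To make the evaluation protocol concrete, here is a minimal sketch of how a streaming-knowledge benchmark of this kind can be scored. The class and function names, the dictionary-based fact format, and the toy stream below are illustrative assumptions, not the actual OAKS data format: each chunk may update some facts, and a query issued after a chunk should be answered with the latest value seen so far.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the structure below is an assumption about how a
# streaming fact-tracking benchmark could be scored, not the OAKS format itself.

@dataclass
class Chunk:
    updates: dict            # fact -> new value (may be empty)
    distractor: str = ""     # irrelevant text interleaved into the stream

def stream_eval(chunks, queries):
    """Score answers against the ground-truth latest value of each fact.

    `queries` maps a chunk index to (fact, model_answer) pairs asked
    immediately after that chunk is consumed.
    """
    state = {}
    correct = total = 0
    for i, chunk in enumerate(chunks):
        state.update(chunk.updates)   # ground truth: the latest update wins
        for fact, answer in queries.get(i, []):
            total += 1
            correct += (answer == state.get(fact))
    return correct / total if total else 0.0

# Toy stream: the fact "john.location" evolves across chunks, with a distractor.
chunks = [
    Chunk({"john.location": "kitchen"}),
    Chunk({}, distractor="Mary picked up the ball."),
    Chunk({"john.location": "garden"}),
]
queries = {
    1: [("john.location", "kitchen")],  # correct: no update has occurred yet
    2: [("john.location", "kitchen")],  # lagging answer: the fact has changed
}
print(stream_eval(chunks, queries))  # 0.5
```

A lagging model, as described in the abstract, is one that keeps answering with a stale value after an update chunk has already arrived; a distractible model is one whose answers degrade when distractor chunks are interleaved.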
Community
Real-world knowledge evolves constantly and emerges incrementally. Can LLMs adapt to new information on the fly?
We introduce OAKS, a benchmark for evaluating models’ online adaptation to streaming, continually updating knowledge.
Frontier models and agentic approaches all struggle, either missing when to update a fact or getting distracted by irrelevant information.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Learning to Remember: End-to-End Training of Memory Agents for Long-Context Reasoning (2026)
- Panini: Continual Learning in Token Space via Structured Memory (2026)
- MemoryRewardBench: Benchmarking Reward Models for Long-Term Memory Management in Large Language Models (2026)
- Decoupled Reasoning with Implicit Fact Tokens (DRIFT): A Dual-Model Framework for Efficient Long-Context Inference (2026)
- Chain-of-Memory: Lightweight Memory Construction with Dynamic Evolution for LLM Agents (2026)
- Tracking the Limits of Knowledge Propagation: How LLMs Fail at Multi-Step Reasoning with Conflicting Knowledge (2026)
- Fine-Mem: Fine-Grained Feedback Alignment for Long-Horizon Memory Management (2026)