Abstract
SkillEvolver enables continuous improvement of agent skills through iterative refinement based on actual usage failures rather than exploration traces, achieving higher accuracy across diverse domains.
Agent skills today are static artifacts: authored once, by human curation or one-shot generation from parametric knowledge, and then consumed unchanged, with no mechanism to improve from real use. We propose SkillEvolver, a lightweight, plug-and-play solution for online skill learning, in which a single meta-skill iteratively authors, deploys, and refines domain-specific skills. The learning target of SkillEvolver is the skill's prose and code, not model weights, so the resulting artifact drops into any agent without retraining; and the meta-skill itself is just another skill, loaded through the same interface by any protocol-compliant CLI agent. Unlike trace distillation, the meta-skill refines only after deploying the learned skill, so the learning signal comes from failures another agent encounters while using it, not from exploratory traces alone. Refinement iterations are governed by a fresh-agent overfit audit that catches possible leakage as well as deployed-skill-specific failures, including the silent-bypass mode in which a skill appears valid in content but is never invoked at runtime. On 83 SkillsBench tasks spanning 15+ domains, SkillEvolver reaches 56.8% accuracy versus 43.6% for curated human skills and 29.9% for the no-skill baseline; on three GPU kernel optimization tasks from KernelBench, it also raises mean speedup from 1.16x to 1.51x.
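The author-deploy-refine loop described above can be sketched in a few lines. This is a hypothetical illustration of the control flow only; the callables (`author`, `deploy`, `refine`, `audit`) and the iteration budget are stand-ins I introduce here, not the paper's API.

```python
def evolve_skill(author, deploy, refine, audit, max_iters=5):
    """Hypothetical sketch of the SkillEvolver loop: iteratively author,
    deploy, and refine a skill, gated by a fresh-agent overfit audit.

    - author():           one-shot initial skill authoring (prose + code)
    - deploy(skill):      run a *using* agent with the skill, return failures
    - refine(skill, f):   rewrite the skill's prose/code from failures f
    - audit(skill):       fresh-agent check for leakage or silent bypass
    """
    skill = author()
    for _ in range(max_iters):
        failures = deploy(skill)  # learning signal: real usage failures,
                                  # not exploratory traces
        if not failures and audit(skill):
            break                 # no failures observed and audit passed
        skill = refine(skill, failures)
    return skill
```

Note that the loop edits only the skill artifact, never model weights, which is what makes the result portable across agents.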