[CVPR2026] Generalizable Knowledge Distillation from Vision Foundation Models for Semantic Segmentation
This repository contains the model weights obtained after the two stages of GKD: domain-general distillation and task learning.
Overview
Motivation: Conventional knowledge distillation (KD) approaches primarily preserve in-domain accuracy while neglecting out-of-domain generalization, which is essential under distribution shifts. This limitation becomes more severe with the emergence of vision foundation models (VFMs): although VFMs are strongly robust on unseen data, distilling them with conventional KD often compromises this robustness.

Methodology: GKD decouples representation learning from task learning. In the first stage, the student acquires domain-agnostic representations through selective feature distillation; in the second stage, these representations are frozen for task adaptation, mitigating overfitting to the domains seen during training. To further support transfer, we introduce a query-based soft distillation mechanism in which student features act as queries over teacher representations, selectively retrieving transferable spatial knowledge from VFMs. Extensive experiments on five domain generalization benchmarks demonstrate that GKD consistently outperforms existing KD methods, achieving average gains of +1.9% in foundation-to-foundation (F2F) and +10.6% in foundation-to-local (F2L) distillation.
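The query-based soft distillation described above can be sketched as a cross-attention module in which student tokens attend over teacher tokens to form a per-query distillation target. This is a minimal illustrative sketch, not the repository's implementation: the module name, projection dimensions, and the MSE objective are assumptions for exposition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryBasedDistillation(nn.Module):
    """Hypothetical sketch of query-based soft distillation:
    student features act as queries over teacher (VFM) features,
    so each student token retrieves the teacher knowledge most
    relevant to its own spatial position."""

    def __init__(self, student_dim: int, teacher_dim: int, dim: int = 256):
        super().__init__()
        self.q = nn.Linear(student_dim, dim)   # student tokens -> queries
        self.k = nn.Linear(teacher_dim, dim)   # teacher tokens -> keys
        self.v = nn.Linear(teacher_dim, dim)   # teacher tokens -> values
        self.proj = nn.Linear(student_dim, dim)  # student prediction head

    def forward(self, student_feats: torch.Tensor, teacher_feats: torch.Tensor) -> torch.Tensor:
        # student_feats: (B, N_s, C_s), teacher_feats: (B, N_t, C_t)
        q = self.q(student_feats)
        k = self.k(teacher_feats)
        v = self.v(teacher_feats)
        # Scaled dot-product attention over teacher tokens.
        attn = F.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        # Soft, query-selected teacher target; detached so gradients
        # only shape the student representation.
        target = (attn @ v).detach()
        pred = self.proj(student_feats)
        return F.mse_loss(pred, target)
```

Because the target is built per query rather than by rigid spatial alignment, the student distills only the teacher content it can actually express, which is one plausible reading of "selectively retrieve transferable spatial knowledge."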
Comparison with State-of-the-Art KD
Comparison with State-of-the-Art Domain Generalization (DG)
Model tree for yongers/GKD
Base model
facebook/deit-base-patch16-224