Interp3D: Correspondence-aware Interpolation for Generative Textured 3D Morphing
Abstract
Interp3D is a training-free framework for textured 3D morphing that preserves geometric consistency and texture alignment through generative priors and a progressive alignment principle.
Textured 3D morphing seeks to generate smooth and plausible transitions between two 3D assets, preserving both structural coherence and fine-grained appearance. This ability is crucial not only for advancing 3D generation research but also for practical applications in animation, editing, and digital content creation. Existing approaches either operate directly on geometry, limiting them to shape-only morphing that neglects texture, or extend 2D interpolation strategies into 3D, which often causes semantic ambiguity, structural misalignment, and texture blurring. These challenges underscore the need to jointly preserve geometric consistency, texture alignment, and robustness throughout the transition. To address this, we propose Interp3D, a novel training-free framework for textured 3D morphing. It harnesses generative priors and adopts a progressive alignment principle to ensure both geometric fidelity and texture coherence: starting from semantically aligned interpolation in condition space, Interp3D enforces structural consistency via SLAT (Structured Latent)-guided structure interpolation, and finally transfers appearance details through fine-grained texture fusion. For comprehensive evaluation, we construct a dedicated dataset, Interp3DData, with graded difficulty levels and assess generation results in terms of fidelity, transition smoothness, and plausibility. Both quantitative metrics and human studies demonstrate the significant advantages of our approach over previous methods. Source code is available at https://github.com/xiaolul2/Interp3D.
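For a concrete picture of the three-stage schedule sketched in the abstract, here is a minimal, hypothetical Python sketch. It is not the authors' code: all function names, embedding sizes, and latent shapes are assumptions. It slerps a pair of condition embeddings, linearly blends toy latent grids as a stand-in for SLAT-guided structure interpolation, and exposes a blend weight for the final texture-fusion stage.

```python
# Conceptual sketch of a progressive, correspondence-aware morphing schedule.
# NOT the authors' implementation: names, shapes, and the linear latent blend
# are illustrative assumptions. The real pipeline operates on SLAT (Structured
# Latent) representations and a pretrained 3D generator.
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical interpolation, a common choice for condition embeddings."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < eps:  # nearly parallel vectors: fall back to plain lerp
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

def morph_step(cond_a, cond_b, slat_a, slat_b, t):
    """One intermediate frame at blend weight t in [0, 1].

    Stage 1: semantically aligned interpolation in condition space (slerp).
    Stage 2: blend of structured latents, standing in for SLAT-guided
             structure interpolation (the real method aligns correspondences).
    Stage 3: a texture-fusion weight that a texture module would use to mix
             appearance details from both endpoints.
    """
    cond_t = slerp(cond_a, cond_b, t)           # condition-space interpolation
    slat_t = (1.0 - t) * slat_a + t * slat_b    # structure interpolation (toy)
    tex_weight = t                              # fine-grained texture fusion weight
    return cond_t, slat_t, tex_weight

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cond_a, cond_b = rng.normal(size=768), rng.normal(size=768)              # CLIP-like conditions (assumed)
    slat_a, slat_b = rng.normal(size=(1024, 8)), rng.normal(size=(1024, 8))  # toy latent grids (assumed)
    for t in np.linspace(0.0, 1.0, 5):
        cond_t, slat_t, w = morph_step(cond_a, cond_b, slat_a, slat_b, t)
        print(f"t={t:.2f}  |cond|={np.linalg.norm(cond_t):.2f}  tex_weight={w:.2f}")
```

The sketch only illustrates the ordering of the three stages; the paper's contribution lies in how each stage is aligned (semantic correspondence in condition space, SLAT-guided structure, and texture fusion), not in the simple blends shown here.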
Community
In this work, we propose Interp3D, a training-free approach that builds on generative priors and instantiates the progressive alignment principle for textured 3D morphing.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- MorphAny3D: Unleashing the Power of Structured Latent in 3D Morphing (2026)
- TEXTRIX: Latent Attribute Grid for Native Texture Generation and Beyond (2025)
- Photo3D: Advancing Photorealistic 3D Generation through Structure-Aligned Detail Enhancement (2025)
- Blur2Sharp: Human Novel Pose and View Synthesis with Generative Prior Refinement (2025)
- AlignVTOFF: Texture-Spatial Feature Alignment for High-Fidelity Virtual Try-Off (2026)
- Self-Evolving 3D Scene Generation from a Single Image (2025)
- SpaceControl: Introducing Test-Time Spatial Control to 3D Generative Modeling (2025)