TAPESTRY: From Geometry to Appearance via Consistent Turntable Videos

Yan Zeng1,2*, Haoran Jiang1,2*, Kaixin Yao1,2*, Qixuan Zhang1,2†, Longwen Zhang1,2†, Lan Xu1‡, Jingyi Yu1‡
1ShanghaiTech University 2Deemos Technology
*Equal Contribution †Project Leader ‡Corresponding Author

We introduce TAPESTRY, a framework for high-fidelity 3D appearance generation that synthesizes a reconstructable Turntable Video under strong geometric conditioning. This highly consistent video then serves as a robust data source for creating the final, high-quality asset.

Abstract

Automatically generating photorealistic, self-consistent appearances for untextured 3D models is a critical challenge in digital content creation. The advancement of large-scale video generation models offers a natural approach: directly synthesizing 360-degree turntable videos (TTVs), which can serve not only as high-quality dynamic previews but also as an intermediate representation to drive texture synthesis and neural rendering. However, existing general-purpose video diffusion models struggle to maintain strict geometric consistency and appearance stability across the full range of views, making their outputs ill-suited for high-quality 3D reconstruction. To address this, we introduce TAPESTRY, a framework for generating high-fidelity turntable videos conditioned on explicit 3D geometry. We reframe 3D appearance generation as a geometry-conditioned video diffusion problem: given a 3D mesh, we render and encode multi-modal geometric features to constrain the video generation process with pixel-level precision, enabling the creation of high-quality, consistent turntable videos. Building on this, we design a multi-stage pipeline for downstream reconstruction from the TTV input, featuring 3D-Aware Inpainting: by rotating the model and performing context-aware secondary generation, the pipeline completes self-occluded regions to achieve full surface coverage. The videos generated by TAPESTRY are not only high-quality dynamic previews but also a reliable, 3D-aware intermediate representation that can be seamlessly back-projected into UV textures or used to supervise neural rendering methods such as Gaussian Splatting. This enables the automated creation of production-ready, complete 3D assets from untextured meshes. Experimental results demonstrate that our method significantly outperforms existing approaches in both video consistency and final reconstruction quality.
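The abstract describes the pixel-aligned geometric conditioning only at a high level, and no implementation is given here. The following is a minimal PyTorch sketch of one plausible realization: per-frame geometry renders are encoded and concatenated channel-wise with the noisy video latents before denoising. The module names, the channel layout (normal + position + depth maps), and the single convolution standing in for the DiT backbone are all our assumptions, not the authors' code.

import torch
import torch.nn as nn

class GeometryConditionedDenoiser(nn.Module):
    """Toy denoiser: video latents are concatenated channel-wise with
    encoded per-frame geometry renders (pixel-aligned conditioning)."""
    def __init__(self, latent_ch=4, geo_ch=7):
        super().__init__()
        # geo_ch = 3 (normal map) + 3 (position map) + 1 (depth), per frame.
        self.geo_encoder = nn.Conv3d(geo_ch, latent_ch, kernel_size=1)
        # Stand-in for the DiT backbone; timestep embedding omitted for brevity.
        self.backbone = nn.Conv3d(2 * latent_ch, latent_ch, kernel_size=3, padding=1)

    def forward(self, noisy_latents, geo_maps):
        # noisy_latents: (B, C, T, H, W) video latents at the current diffusion step
        # geo_maps:      (B, geo_ch, T, H, W) geometry rendered from the same turntable cameras
        geo_feat = self.geo_encoder(geo_maps)
        x = torch.cat([noisy_latents, geo_feat], dim=1)
        return self.backbone(x)  # predicted noise (or velocity)

model = GeometryConditionedDenoiser()
latents = torch.randn(1, 4, 16, 32, 32)  # a 16-frame turntable clip in latent space
geo = torch.randn(1, 7, 16, 32, 32)      # matching per-frame geometry renders
pred = model(latents, geo)               # (1, 4, 16, 32, 32)

Because the geometry maps are rendered from the same turntable cameras as the target frames, the concatenation gives the denoiser a pixel-level geometric constraint at every frame, which is what makes the output video reconstructable.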

Video

Pipeline

Pipeline Illustration

An overview of the TAPESTRY architecture. (a) Geometry-guided video generation. Our method generates a 3D-consistent Turntable Video by injecting multi-modal geometric conditions and reference context into a DiT-based video diffusion model. (b) Progressive texturing pipeline. We iteratively generate TTVs from new, optimized viewpoints and fuse their projections via Texture Baking. Each pass is conditioned on previously generated textures to ensure global consistency, continuing until full surface coverage is achieved.
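As a reading aid for the progressive texturing loop in (b), here is a minimal Python sketch of its pass structure. Every function name (select_next_orientation, generate_ttv, backproject_to_uv), the coverage threshold, and the pass count are hypothetical placeholders, not a released API.

def texture_mesh(mesh, generate_ttv, backproject_to_uv,
                 select_next_orientation,
                 coverage_threshold=0.99, max_passes=4):
    texture = None   # running UV texture, fused across passes
    coverage = 0.0   # fraction of UV texels written so far
    for _ in range(max_passes):
        # 1. Pick an orientation that exposes still-untextured regions.
        orientation = select_next_orientation(mesh, texture)
        # 2. Generate a turntable video conditioned on geometry renders
        #    and on the texture baked so far (context for global consistency).
        ttv = generate_ttv(mesh, orientation, prior_texture=texture)
        # 3. Back-project each frame into UV space and fuse (texture baking).
        texture, coverage = backproject_to_uv(mesh, ttv, texture)
        if coverage >= coverage_threshold:
            break   # full surface coverage reached
    return texture

The key design point this sketch captures is that each new pass sees the previously baked texture, so regions completed earlier anchor the appearance of regions inpainted later.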

Results

BibTeX

@misc{zeng2026tapestrygeometryappearanceconsistent,
      title={TAPESTRY: From Geometry to Appearance via Consistent Turntable Videos}, 
      author={Yan Zeng and Haoran Jiang and Kaixin Yao and Qixuan Zhang and Longwen Zhang and Lan Xu and Jingyi Yu},
      year={2026},
      eprint={2603.17735},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2603.17735}, 
}