Material Magic Wand: Material-Aware Grouping of 3D Parts in Untextured Meshes

1University of Toronto, 2Adobe Research
*Work done partially during internship at Adobe
Equal advising
CVPR 2026
Teaser result for Material Magic Wand

TL;DR: We introduce Material Magic Wand, a tool for material-aware grouping of parts in untextured 3D meshes. Given one selected part, it automatically retrieves the other parts in the same shape that are likely to share its material, using both geometric and contextual cues.

Overview

We introduce the problem of material-aware part grouping in untextured meshes. Many real-world shapes, such as scales of pinecones or windows of buildings, contain repeated structures that share the same material but exhibit geometric variations. When assigning materials to such meshes, these repeated parts often require piece-by-piece manual identification and selection, which is tedious and time-consuming. To address this, we propose Material Magic Wand, a tool that allows artists to select part groups based on their estimated material properties. When one part is selected, our algorithm automatically retrieves all other parts likely to share the same material. The key component of our approach is a part encoder that generates a material-aware embedding for each 3D part, accounting for both local geometry and global context. We train our model with a supervised contrastive loss that brings embeddings of material-consistent parts closer while separating those of different materials; therefore, part grouping can be achieved by retrieving embeddings that are close to the embedding of the selected part. To benchmark this task, we introduce a curated dataset of 100 shapes with 241 part-level queries. We verify the effectiveness of our method through extensive experiments and demonstrate its practical value in an interactive material assignment application.
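The retrieval step described above reduces to a nearest-neighbor lookup in embedding space: once every part has a material-aware embedding, grouping amounts to returning all parts whose embedding is close to the selected one. The sketch below illustrates this with cosine similarity; the function name, the similarity measure, and the threshold value are illustrative assumptions, not details from the paper.

```python
import numpy as np

def retrieve_same_material(part_embeddings, query_idx, threshold=0.8):
    """Return indices of parts likely sharing the query part's material.

    part_embeddings: (N, D) array, one embedding per part of the shape
    query_idx: index of the user-selected part
    threshold: hypothetical cosine-similarity cutoff for a match
    """
    # L2-normalize so the dot product equals cosine similarity
    z = part_embeddings / np.linalg.norm(part_embeddings, axis=1, keepdims=True)
    sims = z @ z[query_idx]
    return [i for i, s in enumerate(sims) if s >= threshold and i != query_idx]
```

In practice the cutoff could be exposed as a tolerance slider in the interactive tool, letting the artist widen or narrow the retrieved group.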

Method

Method overview

Left: Our view selection process renders each part with nearby context from multiple viewpoints sampled randomly over a hemisphere. We choose the viewpoint with minimal occlusion, yielding I_ctx, and use the same viewpoint to render the part in isolation, I_part. I_full captures the entire mesh. We highlight the part with a different color from the rest of the mesh. Right: For each part, its corresponding images are passed through an encoder and their embeddings are concatenated. During training, embeddings of parts with the same material are pulled together, while those with different materials are pushed apart.
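The pull-together/push-apart training objective is a supervised contrastive loss over part embeddings labeled by material. The following is a minimal numpy sketch of that loss family (in the style of Khosla et al.'s SupCon); the temperature value and function signature are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def supcon_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss sketch over material-labeled parts.

    embeddings: (N, D) part embeddings
    labels: (N,) material id per part; same id => positive pair
    """
    # Normalize so similarities are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature

    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    # Exclude each anchor from its own softmax denominator
    sim = np.where(self_mask, -np.inf, sim)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    # Positives: other parts with the same material label
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    per_anchor = np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(
        pos.sum(axis=1), 1
    )
    return -per_anchor.mean()
```

Minimizing this loss raises the softmax probability of material-consistent pairs relative to all other pairs, which is exactly what makes nearest-embedding retrieval recover the group at test time.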

Results

Qualitative results

We evaluate Material Magic Wand on a curated benchmark of 100 meshes with 241 part-level queries and compare it against geometry-based histogram matching, vision foundation model embeddings such as SigLIP and DINO, and PartField. Our method achieves the best performance across all reported retrieval and grouping metrics. Qualitatively, our method retrieves components that are both geometrically and contextually similar to the query, while baselines often miss structurally related parts or include visually similar but contextually incorrect ones.

Interactive Demo

BibTeX

@inproceedings{materialmagicwand2026,
  title={Material Magic Wand: Material-Aware Grouping of 3D Parts in Untextured Meshes},
  author={Jain, Umangi and Kim, Vladimir and Gadelha, Matheus and Gilitschenski, Igor and Chen, Zhiqin},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2026}
}