Mike Young

Posted on • Originally published at aimodels.fyi

ShapeFusion: A 3D diffusion model for localized shape editing

This is a Plain English Papers summary of a research paper called ShapeFusion: A 3D diffusion model for localized shape editing. If you like these kinds of analyses, you should subscribe to the AImodels.fyi newsletter or follow me on Twitter.

Introduction

This paper discusses the importance of 3D human bodies and faces in modern digital avatar applications, such as gaming, graphics, and virtual reality. Over the past decade, several methods have been developed to model 3D humans, with parametric models being among the best performing ones. Parametric and 3D morphable models (3DMMs) project 3D shapes into compact, low-dimensional latent representations, usually via PCA, to efficiently capture the essential characteristics and variations of the human shape.
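
To make the PCA formulation concrete, here is a minimal sketch of a PCA-based morphable model, assuming a dataset of registered meshes that share the same topology; the function names, tensor shapes, and component count are illustrative rather than taken from any specific model.

```python
import torch

def fit_pca_3dmm(meshes: torch.Tensor, n_components: int = 50):
    """meshes: (N, V, 3) registered meshes with shared vertex topology."""
    N, V, _ = meshes.shape
    X = meshes.reshape(N, V * 3)                    # flatten each mesh to a vector
    mean = X.mean(dim=0)                            # mean shape
    _, _, Vt = torch.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_components]                       # (n_components, V*3) principal directions
    return mean, basis

def decode(mean, basis, latent):
    """Reconstruct a full mesh from a compact latent code."""
    return (mean + latent @ basis).reshape(-1, 3)
```

Because each principal component typically has global support over the whole mesh, changing a single latent coefficient tends to move vertices everywhere, which is the entanglement problem discussed next.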

Recent methods have shown that non-linear models, such as Graph Neural Networks and implicit functions, can further improve the modeling of 3D shapes. However, all these methods share a common limitation: an entangled latent space that hinders localized editing. Due to the entangled latent space, parametric models are not human-interpretable, making it difficult to identify latent codes that can control region-specific features.

Figure 1: Illustration of the properties of the proposed method for localized editing (Top) and region sampling (Bottom). Top: The proposed method can manipulate any region of a mesh by simply setting a user-defined anchor point and its surrounding region. The manipulations are completely disentangled and affect only the selected region. The disentanglement of the manipulations is illustrated using the color-coded distances from the previous manipulation step. Bottom: The proposed method can also sample new face parts and expressions by simply defining a mask over the desired region.

The paper introduces a new technique for localized 3D human modeling using diffusion models. The proposed method extends prior work on point cloud diffusion models to 3D meshes using a geometry-aware embedding layer. The task of localized shape modeling is formulated as an inpainting problem using a masking training approach, where the diffusion process acts locally on the masked regions.

The masking strategy enables learning of local topological features that facilitate manipulation directly on the vertex space and guarantees the disentanglement of the masked from the unmasked regions. The approach also enables conditioned local editing and sculpting by selecting arbitrary anchor points to drive the generation process.

The contributions of the study include:

  1. A training strategy for diffusion models that learns local priors of the underlying data distribution, highlighting the superiority of diffusion models compared to traditional VAE architectures for localized shape manipulations.

  2. A localized 3D model called ShapeFusion that enables point manipulation, sculpting, and expression editing directly in 3D space, providing an interpretable alternative to current methods that rely on the state of the latent code for mesh manipulation.

  3. ShapeFusion generates diverse region samples that outperform current state-of-the-art models and learns strong priors that can substitute for current parametric models.

Related Work

The paper discusses disentangled and localized models for 3D shape generation. Disentangled generative models aim to encode underlying factors of variation in data into separate subsets of features, allowing for interpretable and independent control over each factor. Various approaches have been proposed to achieve disentanglement, such as separating shape and pose, learning local facial expression manipulations, and factorizing the latent space. However, achieving spatially disentangled shape editing remains challenging.

The authors propose a method that tackles localized manipulation directly in the 3D space, which is fully interpretable and guarantees spatially localized edits by design, in contrast to prior works that attempt to learn disentangled representations by factorizing the latent space.

The paper also discusses parametric models, which are generative models that enable the generation of new shapes by modifying their compact latent representations. Principal component analysis (PCA) has been widely used in parametric models for faces, bodies, and hands. However, PCA models require a large number of parameters to accurately model diverse datasets.

Recent advancements in diffusion models have revolutionized the field of image generation. The paper mentions several diffusion models applied to 3D shapes, such as learning conditional distributions of point clouds, embedding input point clouds in separate latent spaces, and using deformable tetrahedral grid parametrization. The authors extend a previous diffusion model from point clouds to triangular meshes with fixed topology and enforce localized attribute learning using an inpainting technique during training.

Method

The proposed method addresses the limitations of previous works in achieving fully localized 3D shape manipulation. It introduces a training scheme based on a masked diffusion process and constructs a fully localized model capable of guaranteeing local manipulations in the 3D space. The framework consists of two main components: the Forward Diffusion process, which gradually adds noise to the input mesh, and the Denoising Module, which predicts the denoised version of the input. An overview of the proposed method is illustrated in Figure 2.

Figure 2: Method overview: We propose a 3D diffusion model for localized attribute manipulation and editing. During the forward diffusion step, noise is gradually added to random regions of the mesh, indicated by a mask **M**. In the denoising step, a hierarchical network based on mesh convolution is used to learn a prior distribution of each attribute directly on the vertex space.

The paper introduces a masked forward diffusion process for localized editing of 3D meshes. Noise is gradually added to specific areas of the mesh defined by a mask, while the remaining vertices remain unaffected. This approach guarantees fully localized editing and direct manipulation of any point and region of the mesh.
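
As a rough illustration of this masked forward process, the sketch below noises only the vertices selected by a binary mask under a standard DDPM-style schedule; the variable names and the schedule are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def masked_forward_diffusion(x0, mask, t, alphas_cumprod):
    """
    x0:             (V, 3) clean mesh vertices
    mask:           (V, 1) binary mask, 1 = region to edit, 0 = keep fixed
    t:              integer timestep
    alphas_cumprod: (T,) cumulative product of the noise schedule
    """
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t]
    noisy = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    # Only masked vertices receive noise; unmasked vertices stay identical to x0,
    # which is what keeps edits from leaking outside the selected region.
    x_t = mask * noisy + (1.0 - mask) * x0
    return x_t, noise
```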

A denoising module is trained to predict the noise added to the input using a hierarchical mesh convolution layer. This layer allows information propagation between distant regions of the shape and enforces the manipulated regions to respect the unmasked geometry. The network utilizes vertex-index positional encoding to break permutation equivariance and learn a vertex-specific prior.
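
The snippet below sketches how a learned per-vertex positional embedding can break permutation equivariance so the network learns a vertex-specific prior; the backbone here is a placeholder MLP rather than the paper's hierarchical mesh-convolution network, and all layer sizes are made up.

```python
import torch
import torch.nn as nn

class DenoiserWithVertexPE(nn.Module):
    def __init__(self, num_vertices: int, hidden: int = 128):
        super().__init__()
        self.vertex_pe = nn.Embedding(num_vertices, hidden)  # one embedding per vertex index
        self.in_proj = nn.Linear(3, hidden)
        self.time_mlp = nn.Linear(1, hidden)
        self.out_proj = nn.Linear(hidden, 3)

    def forward(self, x_t, t):
        """x_t: (V, 3) noisy vertices, t: scalar timestep tensor. Returns predicted noise."""
        idx = torch.arange(x_t.shape[0], device=x_t.device)
        h = self.in_proj(x_t) + self.vertex_pe(idx)           # vertex-specific features
        h = h + self.time_mlp(t.float().view(1, 1))           # inject the timestep
        return self.out_proj(torch.relu(h))
```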

The hierarchical mesh convolution layer operates on different mesh resolutions, with features calculated recursively from coarser to finer levels. Spiral mesh convolutions are used to define the neighborhood of each vertex uniquely.
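
For readers unfamiliar with spiral convolutions, here is a minimal sketch in the spirit of SpiralNet-style operators: each vertex gathers features along a fixed-length, uniquely ordered spiral of neighbours precomputed from the fixed mesh topology. This is a generic illustration, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class SpiralConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, spiral_len: int):
        super().__init__()
        self.fc = nn.Linear(in_dim * spiral_len, out_dim)

    def forward(self, features, spiral_indices):
        """
        features:       (V, in_dim) per-vertex features
        spiral_indices: (V, spiral_len) ordered neighbour indices for each vertex
        """
        V = features.shape[0]
        gathered = features[spiral_indices]          # (V, spiral_len, in_dim)
        return self.fc(gathered.reshape(V, -1))      # concatenate features along the spiral
```

Stacking such layers on progressively pooled versions of the mesh yields the coarse-to-fine hierarchy described above.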

Experiments

The paper utilizes three datasets (UHM, STAR, and MimicMe) to train and evaluate the proposed method for disentangled manipulation of 3D face and body meshes. The model is compared against baselines including SD, LED, and a VAE method (M-VAE).

Quantitative evaluation on localized region sampling shows the proposed method outperforms baselines in terms of diversity, identity preservation, and FID score. Qualitative results demonstrate the proposed method achieves localized manipulation without affecting other regions.

A key property is the ability to locally edit regions conditioned on a single anchor point, without requiring optimization like previous methods. This enables direct point manipulation approximately 10 times faster than baselines.
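
A schematic of this anchor-driven editing might look like the sketch below: a mask is built from the vertices near the anchor, the anchor is pinned to its target position, and a reverse-diffusion loop regenerates only the masked region. The `denoiser` callable, the Euclidean-ball region, and the simplified sampling loop (which treats the denoiser output as already-denoised vertices) are all assumptions for illustration.

```python
import torch

def edit_from_anchor(x0, anchor_idx, target_pos, radius, denoiser, T=50):
    # Editable region: vertices within a Euclidean ball around the anchor
    # (a geodesic neighbourhood would follow the surface more faithfully).
    dists = (x0 - x0[anchor_idx]).norm(dim=-1)
    mask = (dists < radius).float().unsqueeze(-1)             # (V, 1)

    x = torch.where(mask.bool(), torch.randn_like(x0), x0)    # noise only the selected region
    x[anchor_idx] = target_pos                                 # pin the anchor to its target
    for t in reversed(range(T)):
        pred = denoiser(x, torch.tensor(t))                    # denoised estimate of the mesh
        x = mask * pred + (1.0 - mask) * x0                    # keep unmasked vertices fixed
        x[anchor_idx] = target_pos
    return x
```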

The model can also be used as a generative prior for unconditioned face/body generation by masking the entire shape region. As an autodecoder, it reconstructs sparse inputs well, achieving lower error than PCA and SD methods with just 200 anchor points.

Region swapping between identities is demonstrated, with applications to aesthetic medicine. For localized expression editing, the method is compared to NFR, the state-of-the-art. It achieves fully-localized edits and generalizes to out-of-distribution expressions, while being 20 times faster than NFR's optimization.

Conclusion

The paper presents a 3D diffusion model for localized shape manipulation. The method uses an inpainting-inspired training technique to ensure local editing of selected regions. It outperforms current state-of-the-art disentangled manipulation methods and addresses their limitations. Experiments demonstrate the method's ability to manipulate facial and body parts as well as expressions using single or multiple anchor points. The method serves as an interactive 3D editing tool for digital artists and has applications in aesthetic medicine. The research was supported by EPSRC Projects DEFORM (EP/S010203/1) and GNOMON (EP/X011364).

If you enjoyed this summary, consider subscribing to the AImodels.fyi newsletter or following me on Twitter for more AI and machine learning content.
