Autoencoder: Parts 1, 2, & 3


Description

Autoencoders are powerful AI models that automatically parameterise a dataset of 3D designs. The autoencoder learns to compress the geometric shape information in your designs into a small number of latent parameters. The latent space parameters are then like the DNA of your 3D designs: every existing design has a specific set of latent space values, and similar unseen designs can be generated by exploring new sets of latent space values.
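To make this concrete, the sketch below shows the core idea in PyTorch. It is a minimal illustration, not the Monolith implementation: the feature size, latent size, and layer widths are all assumptions.

```python
# Minimal sketch of the autoencoder idea (illustrative, not the Monolith model).
import torch
import torch.nn as nn

N_FEATURES = 32 * 32 * 32   # e.g. a flattened 32x32x32 voxel grid (assumed size)
N_LATENT = 4                # a small number of latent parameters

# Encoder: compresses a design into N_LATENT numbers; Decoder: rebuilds it.
encoder = nn.Sequential(nn.Linear(N_FEATURES, 256), nn.ReLU(), nn.Linear(256, N_LATENT))
decoder = nn.Sequential(nn.Linear(N_LATENT, 256), nn.ReLU(), nn.Linear(256, N_FEATURES))

design = torch.rand(1, N_FEATURES)   # stand-in for one real voxelized design
latent = encoder(design)             # the design's 'DNA': just 4 numbers
reconstruction = decoder(latent)     # the shape rebuilt from those 4 numbers
```

Training drives the reconstruction to match the original design, which forces the latent parameters to capture the shape information.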

How does this differ from the "Autoencoder for Structured Meshes" step?

The generic Autoencoder Parts 1-3 steps can be used on any dataset. The Autoencoder for Structured Meshes is more restrictive and can only be used on meshes that share the same structure: every mesh must have the same number of vertices, and the vertices must be sorted in the same order.


Application

Autoencoders have a wide range of applications. In the Monolith platform, they are commonly used for two purposes: first, generating new 3D designs along with instant performance prediction; and second, finding the optimal 3D design for your use case. For either application, you should follow the steps below to train your Autoencoder model.

After running Autoencoder Parts 1-3, you will have both a Decoder and an Encoder. The Encoder model automatically parameterises the geometric information of new 3D shapes into latent space parameters. The Decoder model uses new combinations of latent space parameters to generate new 3D meshes.
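Continuing the illustrative sketch from the Description above, the two halves are used in opposite directions once trained (the names here are assumptions, not the Monolith API):

```python
# Encoder direction: parameterise an unseen shape into latent values.
unseen = torch.rand(1, N_FEATURES)            # stand-in for a new voxelized design
params = encoder(unseen)                      # shape -> latent parameters

# Decoder direction: explore nearby latent values to generate a new shape.
explored = params + 0.1 * torch.randn_like(params)
generated = decoder(explored)                 # latent parameters -> new 3D shape
```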


How to use

Autoencoder Part 1

This step converts the meshes in your dataset into a voxelized representation (the 3D equivalent of pixelated).

  • Select the name of your 3D Data for the model to learn from
  • Choose Projection Direction – leave as Disabled if you are not sure
  • Decide whether you want to Fill Holes, which controls filling of gaps in your meshes. 

Press Apply. After running Part 1, you should always visually inspect the resulting voxelized meshes. You can do this by clicking on the underlined dataset name in the step description, for instance “Generating 3D Voxelized Data 3D Voxelized Data”. You should ensure that the shapes of the voxelized meshes match your original data closely enough. If they do not, you should change the Projection Direction and Fill Holes options and repeat your visual inspection.

When you are happy that the voxelized meshes are a fair representation of the shapes in your dataset, you can continue to Part 2.
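For intuition, the same kind of conversion can be reproduced with open-source tools. The sketch below uses the trimesh library to voxelize a mesh, fill enclosed gaps (a rough analogue of Fill Holes), and display the result for inspection; the file name and resolution are assumptions, and this is not Monolith's internal implementation.

```python
# Conceptual voxelization sketch using the open-source trimesh library.
import trimesh

mesh = trimesh.load("wheel.stl")                 # assumed example file
pitch = mesh.extents.max() / 32                  # ~32 voxels along the longest axis
voxels = mesh.voxelized(pitch=pitch)             # surface -> occupancy grid
voxels = voxels.fill()                           # fill enclosed interior regions

print(voxels.matrix.shape)                       # dense boolean occupancy grid
voxels.as_boxes().show()                         # visually inspect the 'boxy' mesh
```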

Autoencoder Part 2

This step learns the geometry from the voxelized meshes.

  • Select the name of the 3D Voxelized Data from your Autoencoder Part 1 step
  • Choose how many Training Steps to use – leave as the default if you are not sure
  • Select a Mesh Type – choose the option that best describes your meshes.

Press Apply. Depending on the size of your data, this step can take several hours to complete.
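Under the hood, this step trains a neural network to reconstruct the voxel grids from your dataset. The sketch below is a hedged illustration of such training in PyTorch; the 32x32x32 grid size, layer layout, and loss are assumptions rather than Monolith's actual architecture.

```python
# Illustrative training loop for a small 3D convolutional autoencoder.
import torch
import torch.nn as nn

model = nn.Sequential(                                           # 1-channel voxel grids
    nn.Conv3d(1, 8, 4, stride=2, padding=1), nn.ReLU(),          # 32 -> 16
    nn.Conv3d(8, 16, 4, stride=2, padding=1), nn.ReLU(),         # 16 -> 8
    nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),    # 8 -> 16
    nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # 16 -> 32
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
voxel_batch = torch.rand(4, 1, 32, 32, 32).round()   # stand-in for real voxelized meshes

for step in range(1000):                             # 'Training Steps' in the UI
    reconstruction = model(voxel_batch)
    loss = nn.functional.binary_cross_entropy(reconstruction, voxel_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The trained network loosely corresponds to the Intermediate Autoencoder Model that Part 3 consumes.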

Autoencoder Part 3

This step compresses the geometries to the latent space parameters.

  • Select the Voxelized Data from Part 1
  • Select the Intermediate Autoencoder Model from Part 2
  • Choose the size of your latent space – values in the range 2-5 are often a good place to start when training a new model.

Press Apply. This step will output the 3D Decoder and 3D Encoder models which constitute the autoencoder model.
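Once you have the pair, new designs can be generated by exploring the latent space. The sketch below blends the latent parameters of two existing designs to decode an in-between shape; the networks and sizes are illustrative assumptions, as in the Description sketch.

```python
# Illustrative latent space exploration with an Encoder/Decoder pair.
import torch
import torch.nn as nn

N_FEATURES, N_LATENT = 32 * 32 * 32, 4   # assumed sizes, as in the earlier sketch
encoder = nn.Sequential(nn.Linear(N_FEATURES, 256), nn.ReLU(), nn.Linear(256, N_LATENT))
decoder = nn.Sequential(nn.Linear(N_LATENT, 256), nn.ReLU(), nn.Linear(256, N_FEATURES))

design_a = torch.rand(1, N_FEATURES)     # stand-ins for two existing voxelized designs
design_b = torch.rand(1, N_FEATURES)

blend = 0.5 * (encoder(design_a) + encoder(design_b))   # a new point in latent space
new_design = decoder(blend)                             # an unseen, in-between 3D shape
```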

To learn more about how to assess an autoencoder, click here.


Examples

Autoencoder Part 1 should convert your meshes to a voxelized format. The voxelized meshes will look ‘boxy’. For instance, the wheel design in the left image below has been voxelized into the right image (using Projection Direction = Disabled):

The choice of Projection Direction and Fill Holes can affect how the meshes are voxelized, especially if there are gaps or discontinuities in your meshes. You should try all choices for these parameters and visually inspect the results, to see which voxelized meshes retain the important shape features of your originals.


Projection Direction = X-axis or Y-axis

The algorithm has projected horizontally, and so the top and bottom of the drum have been filled with voxels. Although it cannot be seen here, the centre of the wheel is still hollow.


Projection Direction = Z-axis

The algorithm has projected vertically. This is similar to the Projection Direction = Disabled option above, but note that the mesh here has voxel ‘stripe’ features along the edges of the wheel that are not present in the earlier mesh.
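To build intuition for why the projection direction matters, the sketch below implements one plausible projection-based fill in numpy: along the chosen axis, every voxel between the first and last occupied voxel in each column is filled. This is an assumption made for illustration, not Monolith's documented algorithm.

```python
# Hedged sketch: filling voxels along a chosen projection axis.
import numpy as np

def fill_along_axis(vox: np.ndarray, axis: int) -> np.ndarray:
    """Fill every voxel between the outermost occupied voxels along one axis."""
    shape = [-1 if a == axis else 1 for a in range(vox.ndim)]
    idx = np.arange(vox.shape[axis]).reshape(shape)      # coordinate along the axis
    lo = np.min(np.where(vox, idx, np.inf), axis=axis, keepdims=True)
    hi = np.max(np.where(vox, idx, -np.inf), axis=axis, keepdims=True)
    return (idx >= lo) & (idx <= hi)

vox = np.zeros((8, 8, 8), dtype=bool)
vox[2:6, 2:6, 2] = True                 # bottom plate of a hollow 'drum'
vox[2:6, 2:6, 5] = True                 # top plate
print(vox.sum(), fill_along_axis(vox, 0).sum(), fill_along_axis(vox, 2).sum())
# -> 32 32 64: only filling along the axis perpendicular to the plates
#    closes the hollow between them; the other axes leave the mesh unchanged.
```

Which direction closes which gaps depends entirely on how your shapes are oriented, which is why the visual inspection after Part 1 matters.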

