Description
Autoencoders are powerful AI models which automatically parameterise a dataset of 3D designs. The autoencoder learns to compress the geometric shape information in your designs into a small number of latent parameters. The latent space parameters are then like the DNA of your 3D designs: every existing design has a specific set of latent space values; and similar unseen designs can be generated by exploring new sets of latent space values.
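The idea of compressing shapes into a few latent parameters can be sketched in code. The snippet below is a minimal illustration only, not Monolith's actual model (which is a deep 3D network trained in the platform): it builds a toy linear autoencoder with PCA/SVD, where the top singular vectors act as the encoder and their transpose as the decoder. All names and dimensions here are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset of 3D designs": 100 flattened 8x8x8 voxel grids (512 values
# each), generated from just 2 hidden factors so that a 2-parameter latent
# space can capture all of the variation.
factors = rng.normal(size=(100, 2))
basis = rng.normal(size=(2, 512))
designs = factors @ basis

# A minimal linear autoencoder via SVD (PCA): the top-2 right singular
# vectors act as the encoder, and their transpose acts as the decoder.
mean = designs.mean(axis=0)
_, _, vt = np.linalg.svd(designs - mean, full_matrices=False)

def encode(x):                 # design -> its 2 latent parameters (its "DNA")
    return (x - mean) @ vt[:2].T

def decode(z):                 # latent parameters -> reconstructed design
    return z @ vt[:2] + mean

latents = encode(designs)      # every existing design gets specific values
reconstructed = decode(latents)
```

Because the toy data really does vary in only two ways, two latent parameters reconstruct it almost perfectly; real 3D datasets need a deep model and more latent parameters.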
How does this differ from the "Autoencoder for Structured Meshes" step?
The generic Autoencoder Parts 1-3 steps can be used on any dataset. The Autoencoder for Structured Meshes is more restrictive: it can only be used on meshes that share the same structure, meaning every mesh must have the same number of vertices, and the vertices must be sorted in the same order.
Application
Autoencoders have a wide range of applications. In the Monolith platform, they are commonly used for two purposes: first, generating new 3D designs along with instant performance prediction; and second, finding the optimal 3D design for your use case. For either application, follow the steps below to train your Autoencoder model.
After running Autoencoder Parts 1-3, you will have both a Decoder and an Encoder. The Encoder model automatically parameterises the geometric information of new 3D shapes into the latent space parameters. The Decoder model uses new combinations of latent space parameters to generate new 3D meshes.
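One common way to use the Encoder/Decoder pair is to blend the latent parameters of two existing designs to generate unseen, in-between shapes. The sketch below shows the latent-space arithmetic only; the latent values are illustrative, and the `decoder.predict(...)` call in the comment is a hypothetical stand-in for generating a mesh through the platform.

```python
import numpy as np

# Latent vectors of two existing designs, as a trained Encoder might
# produce them (the values here are illustrative, not from a real model).
z_wheel_a = np.array([0.8, -1.2, 0.3, 2.1])
z_wheel_b = np.array([-0.5, 0.9, 1.1, -0.4])

# New, unseen designs come from new latent combinations, for example a
# straight-line interpolation between two known designs.
steps = np.linspace(0.0, 1.0, 5)[:, None]
z_new = (1 - steps) * z_wheel_a + steps * z_wheel_b   # 5 blended latents

# Each row of z_new would then be passed to the Decoder to generate a mesh:
# mesh_i = decoder.predict(z_new[i])    # hypothetical call, for illustration
```

The first and last rows reproduce the two original designs; the middle rows are candidate new designs between them.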
How to use
Autoencoder Part 1: This step converts the meshes in your dataset into a voxelized (pixelized) state. Press Apply. After running Part 1, you should always visually inspect the resulting voxelized meshes. You can do this by clicking on the underlined dataset name in the step description, for instance "3D Voxelized Data" in "Generating 3D Voxelized Data". Ensure that the shapes of the voxelized meshes match your original data closely enough. If they do not, change the Projection Direction and Fill Holes options and repeat your visual inspection. When you are happy that the voxelized meshes are a fair representation of the shapes in your dataset, you can continue to Part 2.
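To make the idea of voxelization concrete, here is a deliberately simplified sketch: it marks every grid cell that contains at least one mesh vertex. Real voxelizers (including Part 1) also rasterise faces and can fill interiors, which this example does not do; the `voxelize` function and the "wheel" point set are both invented for illustration.

```python
import numpy as np

def voxelize(points, resolution=16):
    """Mark each grid cell that contains at least one mesh vertex.

    A simplified stand-in for Part 1: real voxelizers also rasterise
    faces and can fill interiors, which this sketch does not do.
    """
    lo, hi = points.min(axis=0), points.max(axis=0)
    # Map every point into [0, resolution) cell indices.
    idx = ((points - lo) / (hi - lo + 1e-9) * resolution).astype(int)
    idx = np.clip(idx, 0, resolution - 1)
    grid = np.zeros((resolution,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

# A crude "wheel": 200 points on a ring in the XY plane.
theta = np.linspace(0, 2 * np.pi, 200)
ring = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
grid = voxelize(ring, resolution=16)   # boolean 16x16x16 occupancy grid
```

Note that only cells touched by the ring are occupied; the hub of the wheel stays empty, which is exactly the kind of detail the visual inspection above is meant to catch.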
Autoencoder Part 2: This step learns the geometry from the voxelized meshes. Press Apply. Depending on the size of your data, this step can take several hours to complete.
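The reason Part 2 can take hours is that "learning the geometry" means iteratively minimising a reconstruction error over the whole dataset. The loop below shows that idea on a tiny tied-weight linear autoencoder trained by gradient descent; it is a generic illustration under invented dimensions, not Monolith's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(64, 27))          # 64 tiny 3x3x3 voxel grids, flattened

# Tied-weight linear autoencoder: z = x @ W encodes, x_hat = z @ W.T decodes.
W = rng.normal(scale=0.3, size=(27, 3))   # 27 voxel values -> 3 latent params

losses = []
lr = 0.005
for step in range(2000):                  # iterative fitting over the dataset
    z = data @ W                          # is why Part 2 is slow on real data
    x_hat = z @ W.T
    err = x_hat - data
    losses.append((err ** 2).mean())      # mean-squared reconstruction error
    # Gradient of the loss with respect to the shared weight matrix W.
    grad = 2 * (data.T @ err @ W + err.T @ data @ W) / err.size
    W -= lr * grad
```

Each pass recomputes reconstructions and nudges the weights, so training cost scales with dataset size and the number of iterations.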
Autoencoder Part 3: This step compresses the geometries to the latent space parameters. Press Apply. This step will output the 3D Decoder and 3D Encoder models which constitute the autoencoder model.
To learn more about how to assess an autoencoder, click here.
Examples
Autoencoder Part 1 should convert your meshes to a voxelized format. The voxelized meshes will look ‘boxy’. For instance, the wheel design in the left image below has been voxelized into the right image (using Projection Direction = Disabled):
The choice of Projection Direction and Fill Holes can affect how the meshes are voxelized, especially if there are gaps or discontinuities in your meshes. You should try all choices for these parameters and visually inspect the results, to determine which voxelized meshes best retain the important shape features of your meshes.
Projection Direction = X-axis or Y-axis: The algorithm has projected horizontally, and so the top and bottom of the drum have been filled with voxels. Although it is not seen here, the centre of the wheel is still hollow.

Projection Direction = Z-axis: The algorithm has projected vertically. This is similar to the Projection Direction = Disabled option above, but note that the mesh here has voxel 'stripe' features along the edges of the wheel that are not present in the earlier mesh.
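To see why the projection direction matters, here is one simple way such a fill could work: along the chosen axis, every voxel between the first and last occupied cell of each line is filled. This is an assumption for illustration only; the platform's actual Fill Holes algorithm is not documented here and may differ.

```python
import numpy as np

def fill_along_axis(grid, axis):
    """Fill each line of voxels along `axis` between its first and last
    occupied cell. A simplified stand-in for a projection-based hole fill."""
    n = grid.shape[axis]
    occ = np.moveaxis(grid, axis, 0)            # put the chosen axis first
    has_any = occ.any(axis=0)                   # lines with at least one voxel
    first = np.where(has_any, occ.argmax(axis=0), n)
    last = np.where(has_any, n - 1 - occ[::-1].argmax(axis=0), -1)
    idx = np.arange(n).reshape(-1, *([1] * (grid.ndim - 1)))
    filled = (idx >= first) & (idx <= last)     # span between first and last
    return np.moveaxis(filled, 0, axis)

# A hollow "drum": only the top and bottom faces are voxelized.
shell = np.zeros((8, 8, 8), dtype=bool)
shell[1:7, 1:7, 1] = True                       # bottom face
shell[1:7, 1:7, 6] = True                       # top face
solid = fill_along_axis(shell, axis=2)          # fill along the z-axis
```

Filling along the z-axis turns the hollow drum solid between its faces, while filling along x or y would instead fill across the sides, which mirrors how the different Projection Direction choices above produce different voxelized shapes.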