Autoencoder for Structured Meshes

Modified on Wed, 19 Jul 2023 at 09:56 AM


Encode the geometries of a 3D dataset into a latent space whose size can be defined by the user. This step returns both an encoder model (to parametrise a geometry) and a decoder model (to generate a geometry from numerical parameters).

How does this differ from the "Autoencoder: Parts 1, 2, & 3" steps?

The Autoencoder for Structured Meshes is more restrictive: it can only be used on meshes that share the same structure, i.e. every mesh must have the same number of vertices, and the vertices must be sorted in the same order. The generic Autoencoder Parts 1-3 steps can be used on any dataset.
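The structural requirement exists because the autoencoder treats each mesh as a single fixed-length vector of coordinates, so every design must flatten to a vector of the same length with vertices in the same positions. A minimal sketch of that flattening (the function name and array layout are illustrative assumptions, not the platform's actual API):

```python
import numpy as np

def meshes_to_matrix(meshes):
    """Stack structured meshes into one (n_meshes, 3 * n_vertices) matrix.

    Assumes each mesh is an (n_vertices, 3) array of X, Y, Z coordinates
    with vertices listed in the same order -- the structural requirement
    described above. Raises if the meshes are not structurally identical.
    """
    shapes = {m.shape for m in meshes}
    if len(shapes) != 1:
        raise ValueError(f"Meshes have differing structures: {shapes}")
    return np.stack([m.reshape(-1) for m in meshes])

# Two toy "meshes" with 4 vertices each, in the same vertex order
a = np.zeros((4, 3))
b = np.ones((4, 3))
X = meshes_to_matrix([a, b])  # X.shape is (2, 12)
```

A mesh with a different vertex count would raise an error here, which mirrors why the generic Autoencoder Parts 1-3 steps are needed for unstructured datasets.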


Although machine learning can be applied to a wide range of problems, it is currently used mainly on tabular or numerical data: models like random forests or neural networks learn from numbers to predict numbers. For engineering applications, however, geometry files are one of the most widely used data formats. Sometimes these geometries are described by numerical parameters to which models can be applied, but it is often the case that the geometry file is the only information available.

Autoencoders make it possible to “quantify” geometries and their features, even when they have never been parametrised. This makes them suitable for machine learning applications and allows the generation of completely new geometries.

How to use

This function requires 3D data. Once the desired dataset is selected in the 3D Data field, a few parameters must be set:

Training Steps
This parameter defines the length of the training phase. More steps mean a longer training time, but the model is more likely to converge and be accurate.
Number of Latent Parameters
This parameter defines the number of parameters used to compress the geometries. There is a trade-off here: a higher value will usually give better accuracy, but the model will require more data to be trained properly.
Stop model early if it converges
After a certain number of steps, the model may already have converged; if more steps were requested, it will nevertheless keep training unnecessarily. Ticking this option stops training as soon as the model has converged, saving you from waiting for the remaining steps.
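The platform's exact convergence criterion is not documented here; one common rule is to stop when the training loss has plateaued. A sketch of such a plateau test (the function, window size, and tolerance are illustrative assumptions):

```python
import numpy as np

def has_converged(losses, window=10, tol=1e-4):
    """Simple plateau test: report convergence when the average loss
    improved by less than `tol` between the last two windows of
    `window` training steps each."""
    if len(losses) < 2 * window:
        return False  # not enough history to judge
    recent = np.mean(losses[-window:])
    previous = np.mean(losses[-2 * window:-window])
    return previous - recent < tol
```

With a rule like this, a training loop can check `has_converged(losses)` after each step and stop early instead of running out the full budget of training steps.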

Once trained, the model consists of two parts (see the last section for more information):

  • The encoder can be used to transform a geometry into a list of latent parameters.
  • The decoder can be used to generate a new geometry from a list of latent parameters. These parameters can be set manually or identified through an optimisation process.

To assess the quality of this model, the Assess autoencoder step can be used to calculate and inspect the reconstruction error. The autoencoder can also be used in the View Decoded Designs step, where new geometries can be created instantaneously by modifying the latent parameters.
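One common way to express reconstruction error for a structured mesh is the mean per-vertex distance between the original and the decoded geometry. The Assess autoencoder step may use a different metric; this is only an illustrative sketch:

```python
import numpy as np

def reconstruction_error(original, reconstructed):
    """Mean Euclidean distance per vertex between a mesh and its
    decoded reconstruction, both given as (n_vertices, 3) arrays.
    Illustrative metric -- the product's own step may differ."""
    return float(np.linalg.norm(original - reconstructed, axis=1).mean())
```

A value of zero means the decoder reproduced the input exactly; larger values mean the latent space lost more geometric detail.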

Read here to learn more about how to assess an autoencoder.

More on this step

An autoencoder is a mathematical tool that shares some similarities with a neural network, as it contains different layers. When training an autoencoder, the main aim is to “compress” the data from the full geometry (X, Y, Z for every point) down to a relatively small set of parameters, and then to “decompress” that data back into the initial geometry. As a result, an autoencoder is made of two parts: (i) an Encoder, which transforms the geometry into parameters, and (ii) a Decoder, which transforms the parameters back into a geometry.

The main layers are:

  • An input layer (in blue), which gathers all the inputs (generally X, Y, Z coordinates) of the design.
  • A latent layer containing the latent parameters (in red). The size of this layer corresponds to the chosen number of latent parameters used to represent all 3D designs.
  • An output layer (in yellow), which is similar to the input layer.

If the latent layer is too small, it won’t be able to “compress” the designs properly, and the reconstructed designs (output) will not be similar to the initial designs (input). The aim is to find a number of parameters that is as small as possible, yet large enough to contain all the information of the design. These parameters can then be used for new predictions or optimisation (see Tutorial series 3).
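The step trains a neural autoencoder, whose internals are not documented here; its linear analogue, however, can be written in a few lines using PCA via SVD, and shows the same encode/decode roundtrip through a small latent layer. A hedged sketch (function names and array layout are assumptions for illustration):

```python
import numpy as np

def fit_linear_autoencoder(X, n_latent):
    """Fit the linear analogue of an autoencoder via SVD (PCA).

    X has one row per design; columns are the flattened X, Y, Z
    coordinates. Returns an encode function (geometry -> n_latent
    parameters) and a decode function (parameters -> geometry).
    A trained neural autoencoder plays the same role non-linearly.
    """
    mean = X.mean(axis=0)
    # Right singular vectors = principal directions of the designs
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    W = Vt[:n_latent]  # (n_latent, n_features) projection matrix

    def encode(x):
        return (x - mean) @ W.T  # geometry -> latent parameters

    def decode(z):
        return z @ W + mean      # latent parameters -> geometry

    return encode, decode
```

If the designs truly vary along only `n_latent` directions, the roundtrip `decode(encode(X))` reproduces them almost exactly; with too few latent parameters, the reconstruction error grows, which is exactly the trade-off described above.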
