There are a few complementary methods to assess an auto-encoder, and we strongly suggest not limiting yourself to only one of them. (For more information on auto-encoders, see the articles on structured autoencoders and unstructured autoencoders.)
Learning curves
The first way to assess an auto-encoder is to look at the learning curve (visible in Autoencoder for Structured Meshes and in Autoencoder: Parts 2 & 3). These curves show how the loss value evolves during training and work in a very similar way to the ones displayed when training a neural network: the lower the value, the better the model, but you don’t want a gap to appear between the training and validation curves (see here for more details). This can help you quickly judge whether a model will be good and compare models as they train.
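As a concrete illustration, here is a minimal sketch of how training and validation loss histories could be plotted and compared once exported from a training run. The loss arrays below are placeholders, not real values from the tool:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder loss histories; in practice these would be exported from your training run.
train_loss = np.array([0.90, 0.45, 0.30, 0.22, 0.18, 0.15, 0.13, 0.12])
val_loss = np.array([0.92, 0.50, 0.34, 0.27, 0.24, 0.23, 0.23, 0.24])

epochs = np.arange(1, len(train_loss) + 1)
plt.plot(epochs, train_loss, label="training loss")
plt.plot(epochs, val_loss, label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()

# A persistent gap (validation loss flattening or rising while training loss keeps
# dropping) is the overfitting pattern mentioned above.
print(f"Final train/validation gap: {val_loss[-1] - train_loss[-1]:.3f}")
```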
Using the step “Assess Autoencoder”
This step will evaluate the reconstruction error, which is the error (distance) between a geometry and that same geometry reconstructed by the model.
There are two ways to evaluate this error:
- Either visually, by superimposing the original and reconstructed geometries for a selected design in the data set (here, original geometry in blue, and reconstructed geometry in red),
- Or more quantitatively, by computing in a table the average reconstruction error for each design in the data set. This table can be used to compare models by comparing their average reconstruction error on a common test set (see the sketch below).
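To illustrate what such an error represents, here is a minimal sketch of an average reconstruction error, assuming each design is stored as an (N, 3) array of node coordinates. The data below is dummy data and the function name is hypothetical, not the exact quantity computed by the step:

```python
import numpy as np

def mean_reconstruction_error(original, reconstructed):
    """Average point-to-point distance between a geometry and its reconstruction.

    Both inputs are (N, 3) arrays of node coordinates with matching node ordering.
    """
    distances = np.linalg.norm(original - reconstructed, axis=1)
    return distances.mean()

# Dummy example: a geometry and a slightly perturbed "reconstruction".
original = np.random.rand(200, 3)
reconstructed = original + np.random.normal(scale=0.01, size=original.shape)
print(f"Mean reconstruction error: {mean_reconstruction_error(original, reconstructed):.4f}")

# To compare models, average this per-design error over all designs of a common test set.
```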
Exploring the distribution of latent parameters
Although the method above demonstrates the ability of the auto-encoder to reconstruct a geometry, it doesn’t fully validate the auto-encoder. Another way to check that an auto-encoder was trained properly is to look at the distribution of the latent parameters over the data set it was trained on. The model is built in such a way that these parameters should:
- Have a normal distribution,
- Be centred around zero,
- Cover the range [-1,1].
Although models will never satisfy those criteria perfectly, it is possible to see whether a model is very far off. To do so, you can click on the “encode” step and use the auto-encoder to calculate the latent values for the training set.
You can then plot the distributions of each latent parameter. Below are some examples of distributions and a quick explanation for each of them:
a. Distributions are very narrow, concentrated around one or a few specific values: the model has not converged.
b. Distributions have a wider range, although not always centred around 0: the model is better but can still be improved.
c. Distributions have an even wider range, close to [-1, 1], and are fairly centred around 0: the model has converged.
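To produce this kind of plot yourself, here is a minimal matplotlib sketch, assuming the latent values from the “encode” step have been exported as an array with one row per design and one column per latent parameter. The array below is randomly generated for illustration only:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder: in practice, export the latent values from the "encode" step and
# load them here (one row per design, one column per latent parameter).
latents = np.random.normal(loc=0.0, scale=0.4, size=(500, 4))

n_latent = latents.shape[1]
fig, axes = plt.subplots(1, n_latent, figsize=(4 * n_latent, 3), sharex=True)
for i, ax in enumerate(np.atleast_1d(axes)):
    ax.hist(latents[:, i], bins=30, range=(-1.5, 1.5))
    ax.set_title(f"latent {i}")
    # Quick numerical check of the three criteria listed above.
    print(f"latent {i}: mean={latents[:, i].mean():+.2f}, "
          f"min={latents[:, i].min():+.2f}, max={latents[:, i].max():+.2f}")
plt.tight_layout()
plt.show()
```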
A good distribution of the latent parameters is an indication that the training has converged. This check is particularly useful when a model overfits (i.e. has not truly converged) but still manages to reconstruct some geometries well. If the latent distributions have not converged, a few options are to:
- train the model with more data,
- train the model with fewer latent parameters,
- train the model for longer (more steps).
An alternative is to use the step “View Decoded Designs”, vary the latent values within the range [-1, 1], and check whether the generated geometries are realistic. For example, for the distribution on the right side of figure a, if the selected value is not close to -0.1, the generated geometry will probably not look realistic.
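As a sketch of what this sweep amounts to, one could vary a single latent parameter across [-1, 1] while keeping the others at zero and inspect each decoded geometry. The `decode` function below is a dummy placeholder standing in for the “View Decoded Designs” step:

```python
import numpy as np

def decode(latent):
    """Placeholder for the trained decoder; in practice the "View Decoded Designs"
    step maps a latent vector to a geometry (e.g. an (N, 3) array of node coordinates)."""
    rng = np.random.default_rng(0)
    base = rng.normal(size=(100, 3))           # dummy base geometry
    return base * (1.0 + 0.1 * latent.sum())   # dummy deformation, for illustration only

n_latent = 4        # number of latent parameters of the trained auto-encoder
param_to_vary = 0   # index of the latent parameter being swept

for value in np.linspace(-1.0, 1.0, 9):
    latent = np.zeros(n_latent)
    latent[param_to_vary] = value
    geometry = decode(latent)  # inspect or plot each geometry and judge whether it looks realistic
    print(f"latent[{param_to_vary}] = {value:+.2f} -> geometry with {geometry.shape[0]} nodes")
```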