Predicted vs. Actual


Description

This step enables you to compare model predictions easily and visually on a test set and see how close or far off they are from the true values.


Application

Once models are trained, it is not always clear whether they are “good enough” to be used in production, or which model is the best. This step enables you to compare model predictions easily and visually, and helps you decide which model to use at a later stage in your process.


How to use

You need at least one model and one data set to use this step.

  • Select a Data set. For the results to be meaningful, it is recommended to use a data set that wasn’t used to train the models (e.g. a test set).
  • Select one or multiple Models. Make sure that these models have at least one shared output.
  • Select an Output. If no outputs are suggested, it is because the models do not have any output in common.
  • Click Apply.

The step will return a graph like the one below, and here is how to read it:

  • The x-axis is the true value of the output (the one coming from the data set).
  • The y-axis is the predicted value of the output (the one coming from the model(s) being evaluated).
  • For a perfect model, the two values would always be equal, as shown by the black dotted line y = x.
  • For each prediction, the error is the vertical distance between the point and the line of perfect predictions.
  • For a good model, most of the points will lie close to the perfect prediction line.

In this example, you can see, for instance, that the model does very well for low output values (low errors for points 1 and 2), but poorly for high output values (high errors for points 3 and 4).
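
If you want to reproduce this kind of plot outside the platform, here is a minimal standalone sketch using scikit-learn and matplotlib. The data, model, and variable names are illustrative assumptions, not part of this step's interface:

```python
# A minimal sketch of a predicted-vs-actual plot. The data and model
# here are toy examples, not the platform's own API.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

# Toy data: a noisy linear relationship, split into train and test sets.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 1))
y = 3 * X.ravel() + rng.normal(scale=0.2, size=200)
X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

model = LinearRegression().fit(X_train, y_train)
y_pred = model.predict(X_test)

# True values on the x-axis, predictions on the y-axis.
plt.scatter(y_test, y_pred, alpha=0.6, label="predictions")

# The line of perfect predictions, y = x.
lims = [min(y_test.min(), y_pred.min()), max(y_test.max(), y_pred.max())]
plt.plot(lims, lims, "k--", label="perfect prediction (y = x)")

plt.xlabel("Actual value")
plt.ylabel("Predicted value")
plt.legend()
plt.show()
```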


Examples

In this example, 3 models were trained to predict a drag value. From the graph below, you can make the following conclusions:

  • Neural Network 1 is not good, as many points lie far from the perfect prediction line. The model gets worse as the drag value increases.
  • The Linear Regression is fairly good for drag values between 0.2 and 0.4, but poor for both low and high values of the drag. In particular, you can see that this model consistently underpredicts large values of the drag.
  • Neural Network 2 is really good, as its predictions stay close to the perfect prediction line over the entire range of drag values.
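
As a complement to the visual comparison, here is a sketch of how several models can be overlaid on one predicted-vs-actual plot and ranked with a summary error metric (RMSE). The models and the synthetic "drag" data are assumptions made for illustration; they do not reproduce the example above:

```python
# Sketch: comparing several models on one predicted-vs-actual plot.
# The three models and the toy data are illustrative only.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(300, 1))
# Toy nonlinear "drag" response, roughly in the 0.1-0.5 range.
y = 0.3 + 0.2 * np.sin(6 * X.ravel()) + rng.normal(scale=0.01, size=300)
X_train, X_test = X[:200], X[200:]
y_train, y_test = y[:200], y[200:]

models = {
    "Neural Network 1": MLPRegressor(hidden_layer_sizes=(2,), max_iter=5000, random_state=0),
    "Linear Regression": LinearRegression(),
    "Neural Network 2": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0),
}

for name, model in models.items():
    y_pred = model.fit(X_train, y_train).predict(X_test)
    rmse = np.sqrt(np.mean((y_test - y_pred) ** 2))  # summary error metric
    plt.scatter(y_test, y_pred, alpha=0.5, label=f"{name} (RMSE={rmse:.3f})")

lims = [y_test.min(), y_test.max()]
plt.plot(lims, lims, "k--", label="perfect prediction (y = x)")
plt.xlabel("Actual drag")
plt.ylabel("Predicted drag")
plt.legend()
plt.show()
```

A model whose points hug the dashed line across the whole range (and whose RMSE is lowest) is the strongest candidate, which is the same conclusion the graph in this example supports visually.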


More on this step

The step can also be used to identify the reason for poor predictions. Use the lasso tool at the top to select points on the graph; those points will be displayed in a table on the right side. In the example below, the selection shows that the poor predictions in the bottom-right part of the graph all had a large angle of attack.
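
The same diagnosis can be done programmatically by ranking test points by absolute error and inspecting their input features. In this sketch, the column names ("angle_of_attack", "drag") and the fabricated data are hypothetical, chosen only to mirror the example above:

```python
# Sketch: find the worst predictions and check whether they share a
# feature value. Column names and data are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "angle_of_attack": rng.uniform(0, 15, size=100),
    "drag": rng.uniform(0.1, 0.5, size=100),
})
# Fake predictions whose error grows with angle of attack, for illustration.
df["drag_pred"] = df["drag"] + 0.02 * rng.normal(size=100) * (1 + df["angle_of_attack"] / 5)

df["abs_error"] = (df["drag"] - df["drag_pred"]).abs()

# The ten worst predictions; do they share a feature value?
worst = df.nlargest(10, "abs_error")
print(worst[["angle_of_attack", "drag", "drag_pred", "abs_error"]])
print("Mean angle of attack, worst 10 vs. all:",
      worst["angle_of_attack"].mean(), "vs.", df["angle_of_attack"].mean())
```

If the worst predictions cluster around a particular feature value, as they do for angle of attack here, that region of the input space is a candidate for collecting more training data.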

