Explain Predictions figures out which of your model inputs has the strongest impact on a single model output.
The algorithm takes a sample of the model's training data and calculates Shapley values to determine which inputs have the strongest impact on the output. The algorithm is non-deterministic, meaning that each time you run the step you are likely to get slightly different results: the impact strengths will have somewhat different values and, potentially, the most impactful inputs may appear in a different order.
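Although the product computes these values internally, the underlying idea can be sketched with the open-source shap library. Everything in this sketch (the model, the synthetic data, and the sample sizes) is an illustrative assumption, not the step's actual implementation:

```python
# A minimal sketch of Shapley-value attribution, assuming a scikit-learn
# style model and the open-source `shap` library. All names and values
# here are illustrative, not the Explain Predictions implementation.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: 500 rows, 4 input features, where the
# first feature has by far the strongest effect on the output.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))
y_train = 3 * X_train[:, 0] + X_train[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)

# Take a background sample of the training data, then estimate Shapley
# values for a single prediction. Because the estimator samples feature
# coalitions at random, repeated runs give slightly different values.
background = shap.sample(X_train, 100)
explainer = shap.KernelExplainer(model.predict, background)
impacts = explainer.shap_values(X_train[:1], nsamples=200)
print(impacts)  # per-feature impact on this one prediction
```

In this toy setup the first feature should receive the largest impact value, matching how the synthetic data was generated.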
If you find that the step results are dramatically different each time you run it, there are several possible reasons and possible fixes:
- Your model does not make good predictions. Check this by using model evaluation techniques such as Predicted vs Actual or Error Distribution. If you find your model has poor prediction accuracy, try changing your model choices to improve the predictions.
- Your model is itself non-deterministic. This is true for Neural Network models with Include Uncertainty turned on. If you are using such a model, you can try reducing the randomness by lowering the value of the Dropout model choice in the Advanced Options panel.
- You have a large number of input features to your model, or your dataset has a lot of variability. If this is the case, try removing the model inputs you expect to be least useful for predictions. The Explain Predictions algorithm spreads its total computation effort across all input features of the model, so using more input features means each feature is analysed for less time, which in turn can increase the randomness of the final observed impact values. The sketch after this list shows one rough way to measure that run-to-run variability.
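One rough way to check stability, continuing the sketch above (and reusing its `explainer` and `X_train`), is to rerun the estimator several times and compare the spread of the impact values; the run count below is an arbitrary assumption:

```python
# Continuing the sketch above: rerun the Shapley estimate several times
# and look at the per-feature spread. Large standard deviations relative
# to the mean impacts indicate unstable results; a larger `nsamples`
# budget, or fewer input features, should shrink them.
import numpy as np

runs = np.array([
    explainer.shap_values(X_train[:1], nsamples=200)[0]
    for _ in range(5)
])
print("mean impact per feature:", runs.mean(axis=0))
print("std across runs:        ", runs.std(axis=0))
```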