If a step that involves a model (e.g. model training or model predictions) becomes stale and no longer responds, the most likely cause is that the server is running out of memory. This typically happens when the involved model is “too large.” In general, you should reduce the dimensionality of the problem, which can be achieved with one or more of the following actions:
- Reduce model complexity. See the Model Complexity article for more details.
- Reduce the number of inputs and/or outputs.
  - If you trained a model on several outputs, train multiple models with a single output each instead. If required, use Chain Model to combine them back into a single model with all outputs (see the first sketch after this list).
- If you are running a Hyperparameter Optimisation:
  - Reduce the number of combinations that are tested.
  - Alternatively, eliminate hyperparameter values that result in especially large models (for a Neural Network, for example, remove 500 as an option for the hidden layer size); see the second sketch after this list.
- Reduce the size of the training dataset.
  - For non-series data, generate a Random Subset of the dataset. Use Distribution and the other statistics tools to verify that the subset is a fair representation of the original dataset (see the third sketch after this list).
  - For series data, down-sample the data, for example with Time Series Restructure (see the last sketch after this list).
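
The sketches below are illustrative only: they use Python with pandas and scikit-learn, which are not part of the platform, and all dataset names, column names, and model choices are assumptions. The first sketch shows the idea of training one small single-output model per target instead of a single multi-output model:

```python
# Illustrative only: train one single-output model per target instead of one
# multi-output model. scikit-learn and all names here are assumptions, not
# part of the platform.
import pandas as pd
from sklearn.linear_model import Ridge

df = pd.read_csv("training_data.csv")        # hypothetical dataset
input_cols = ["x1", "x2", "x3"]              # hypothetical inputs
output_cols = ["y1", "y2"]                   # hypothetical outputs

models = {}
for target in output_cols:
    model = Ridge()                          # one small model per output
    model.fit(df[input_cols], df[target])
    models[target] = model

# Predictions can later be combined column by column, which is the same idea
# as using Chain Model to merge the single-output models inside the platform.
predictions = {t: m.predict(df[input_cols]) for t, m in models.items()}
```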
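
The second sketch shows how shrinking a hyperparameter grid, and dropping options that produce very large models, reduces both the number of tested combinations and the memory footprint. GridSearchCV and MLPRegressor stand in here for the platform's Hyperparameter Optimisation:

```python
# Illustrative only: shrink a hyperparameter grid so that fewer and smaller
# models are trained. scikit-learn is an assumption, not part of the platform.
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

# Original grid: 4 x 3 = 12 combinations, including a very large hidden layer.
full_grid = {
    "hidden_layer_sizes": [(50,), (100,), (200,), (500,)],
    "alpha": [0.0001, 0.001, 0.01],
}

# Reduced grid: drop the 500-unit option and one alpha value -> 3 x 2 = 6
# combinations, all of them smaller models.
reduced_grid = {
    "hidden_layer_sizes": [(50,), (100,), (200,)],
    "alpha": [0.001, 0.01],
}

search = GridSearchCV(MLPRegressor(max_iter=500), reduced_grid, cv=3)
# search.fit(X, y)  # X, y: your (already reduced) training data
```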
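
The third sketch takes a random subset of a non-series dataset and compares summary statistics of the subset against the full dataset, similar in spirit to checking the result of Random Subset with the Distribution tool:

```python
# Illustrative only: take a random subset and check that its distribution
# roughly matches the full dataset. pandas and the names are assumptions.
import pandas as pd

df = pd.read_csv("training_data.csv")          # hypothetical dataset
subset = df.sample(frac=0.2, random_state=42)  # keep 20% of the rows

# Compare summary statistics of a numeric column in both datasets; they
# should be close if the subset is a fair representation of the original.
print(df["target"].describe())
print(subset["target"].describe())
```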
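
The last sketch down-samples a time series by averaging minute-level rows into hourly rows, which is the kind of reduction Time Series Restructure is used for inside the platform:

```python
# Illustrative only: down-sample a time series from minutes to hours by
# averaging. pandas and the file/column names are assumptions.
import pandas as pd

df = pd.read_csv("sensor_data.csv", parse_dates=["timestamp"])  # hypothetical
df = df.set_index("timestamp")

# One row per hour instead of one row per minute -> roughly 60x fewer rows.
hourly = df.resample("1h").mean()
print(hourly.head())
```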