Next Test Recommender (BETA)

Modified on Thu, 30 Nov, 2023 at 10:25 AM

Description

With this feature, test engineers can iteratively optimise their test campaigns, either by maximising the value derived from the time allocated to a campaign or by reducing the time taken to reach a given quality of testing. Users can train and evaluate machine learning models, receive recommendations for the next tests to perform, and optimise the testing process.

Please note that this feature is currently in the BETA phase, and users are encouraged to consult customer support or product teams for further assistance or to report any issues. This feature might be hidden in your environment; if you would like early access, please raise this with your customer support team.


Application

Optimising test campaigns is crucial for efficient resource allocation and improving the quality of testing. This feature leverages machine learning models to make data-driven decisions, guiding users through an iterative process of training models, performing tests, and refining the test process based on the results.


How to use

Pre-requisites

  • Tabular data uploaded to the platform
  • Notebook created and tabular data imported
  • A model trained that is appropriate for your dataset and prediction requirements
  • A model evaluation performed, so that you have a baseline of your model's performance (a sketch of these two steps follows this list)
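Conceptually, the last two pre-requisites amount to fitting a model on your tabular data and recording a baseline metric. The sketch below shows an equivalent in plain scikit-learn; the file and column names are placeholders, and on the platform these steps are performed through the notebook's model training and evaluation steps rather than code.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Hypothetical tabular data; file and column names are placeholders.
data = pd.read_csv("tabular_data.csv")
X, y = data[["input1", "input2"]], data["output1"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestRegressor(random_state=0)
model.fit(X_train, y_train)

# Baseline performance to compare against after each round of testing.
baseline = r2_score(y_test, model.predict(X_test))
print(f"Baseline R^2: {baseline:.3f}")
```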

Initialising the step

  • Within a notebook, add a new step Next Test Recommender from within the Apply group
  • Choose the data you want to get recommendations for in the field Data for evaluation
  • Select which Inputs you wish to get recommendations for in the field Inputs to get recommendations for. Unless you have a reason to exclude some, select all of the inputs.
  • Select which output you wish to get recommendations for in the field Outputs to better understand.
  • Select a Source of candidate tests from which to get recommendations. Two options are available:
    Default: Input parameter values within min-max ranges from the model's training data. With this option, the step automatically creates tests between the min and max values of each input in your training data. For example, if your data has a column input1 with a minimum of 2 and a maximum of 10, every recommended test will have input1 within the range [2, 10]. This option is often helpful when you do not have a specific set of tests to choose from (see the first sketch after this list).
    Custom: Upload your own list of candidate tests. With this option, you will be asked to select a dataset that has already been uploaded to the notebook, containing the list of tests you wish the step to choose from. If you have not already done this, complete a tabular import of the list of tests, then return to this step and select the dataset here.
  • Select a recommender in the field Choose a recommender. There are multiple recommenders to choose from, each made of a unique combination of explorers that highlight different regions of interest within your design space: some focus on regions with high uncertainty, others on regions where models tend to disagree, and others on unsampled clusters of data. As a result, some recommenders may work better for your use case than others, and it is difficult to know ahead of time which will be best. We strongly encourage you to use the Next Test Recommender Evaluation step on a similar dataset to identify which recommender is best for you (the second sketch after this list illustrates an uncertainty-focused explorer).
  • Now select the batch size using the slider. This sets how many tests are recommended each time you run the step.
    • By default, 5 tests will be returned in a table when you run the step. You can choose anywhere between 1 and 50 next tests to recommend; it is important that you run all of the tests in each batch to maximise the value of each round of testing.
    • If you are unsure how many tests to choose, we recommend using the default value.
  • Provide a name for the dataset in the field Output name.
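To make the Default candidate source concrete, here is a minimal sketch of how tests can be generated within the min-max ranges of a training dataset. This is an illustration only, not the platform's implementation; the column names and the uniform sampling strategy are assumptions.

```python
import numpy as np
import pandas as pd

# Hypothetical training data; column names are placeholders.
train = pd.DataFrame({"input1": [2, 5, 7, 10],
                      "input2": [0.1, 0.4, 0.9, 0.2]})

rng = np.random.default_rng(seed=0)
n_candidates = 1000

# "Default" source: draw candidate tests uniformly within each
# input's min-max range from the training data.
candidates = pd.DataFrame({
    col: rng.uniform(train[col].min(), train[col].max(), n_candidates)
    for col in train.columns
})

print(candidates.describe())  # each column stays within its [min, max] range
```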
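To illustrate what an uncertainty-focused explorer does, the sketch below scores candidate tests by the prediction spread across a tree ensemble and returns the batch_size most uncertain ones. Everything here (the data, the ensemble-spread heuristic, the function name) is an assumption for illustration; the platform's recommenders combine several explorers and are not necessarily implemented this way.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(seed=0)

# Hypothetical training data and candidate tests; names are placeholders.
X_train = pd.DataFrame({"input1": rng.uniform(2, 10, 40),
                        "input2": rng.uniform(0, 1, 40)})
y_train = X_train["input1"] * X_train["input2"]  # toy output
candidates = pd.DataFrame({"input1": rng.uniform(2, 10, 500),
                           "input2": rng.uniform(0, 1, 500)})

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

def recommend_next_tests(model, candidates, batch_size=5):
    """Rank candidates by ensemble disagreement (a simple uncertainty
    proxy) and return the batch_size most uncertain tests."""
    per_tree = np.stack([tree.predict(candidates.values)
                         for tree in model.estimators_])
    uncertainty = per_tree.std(axis=0)
    top = np.argsort(uncertainty)[::-1][:batch_size]
    return candidates.iloc[top].assign(ranking=range(1, batch_size + 1))

print(recommend_next_tests(model, candidates, batch_size=5))
```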

After running the step

Now that you have run the Next Test Recommender step:

  • Export the data and perform the recommended tests outside the platform.
  • Prepare the output file to record results by removing index references, creating placeholders for results, and deleting the ranking column (see the sketch after this list).
  • Collect and record test results outside the platform.
  • Upload the results file to the notebook using the File Upload step.
  • Append the results to the notebook training dataset, ensuring not to introduce bias into the test dataset.
  • Run the Next Test Recommender step for additional test recommendations.
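As a sketch of the file-preparation and append steps above, assuming the exported recommendations are a CSV with index and ranking columns and a single output column (all hypothetical names), the following pandas snippet builds a results template and later appends the completed results to the training dataset. Match the column and file names to your actual export.

```python
import pandas as pd

# 1. Prepare the exported recommendations for recording results.
#    Column names ("index", "ranking", "output1") are assumptions.
recs = pd.read_csv("recommended_tests.csv")
recs = recs.drop(columns=["index", "ranking"], errors="ignore")
recs["output1"] = pd.NA  # placeholder to fill in as tests complete
recs.to_csv("results_template.csv", index=False)

# 2. After running the tests and filling in output1, append the new
#    rows to the training dataset (not the test dataset, to avoid bias).
train = pd.read_csv("training_data.csv")
results = pd.read_csv("results_template.csv")
pd.concat([train, results], ignore_index=True).to_csv(
    "training_data.csv", index=False
)
```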

IMPORTANT

  • Ensure that new data is appended to the training dataset, not the test dataset; recommended tests are chosen adaptively, so evaluating on them would bias your measure of model performance. If you wish to grow the test dataset, run randomly selected tests instead (see the sketch below).
  • This feature is in the BETA testing phase. Users are encouraged to consult with customer support or product teams for further assistance or to report any issues.
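If you do want to grow the test dataset, an unbiased approach is to sample tests at random from your candidate list rather than taking the recommended ones. A minimal sketch, with a hypothetical candidate file:

```python
import pandas as pd

candidates = pd.read_csv("candidate_tests.csv")  # hypothetical file

# Random selection keeps the test dataset unbiased; recommender-selected
# tests are chosen adaptively and would skew evaluation if held out.
random_tests = candidates.sample(n=5, random_state=0)
random_tests.to_csv("tests_for_test_set.csv", index=False)
```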
