Model validation is an enhancement of model publishing. It lets you apply a published model, using the same algorithm and advanced parameters it was trained with, to a different dataset and explore the results.
When you train and test an algorithm, you obtain a trained model, which can then be tested on different datasets. Model validation lets you apply the trained and published model to a selected dataset and explore the results. In Rubiscape, a new version of the trained model is created each time the algorithm is trained on the selected dataset. You can validate each version of a published model and then choose the version that performs best for the selected dataset.
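The sketch below illustrates the general idea outside of Rubiscape: a model is trained and saved ("published"), then loaded and applied to a different dataset using a chosen dependent variable and independent variables. It is a minimal example written with scikit-learn; the column names, the LinearRegression model, and the saved file name are illustrative assumptions, not Rubiscape's internal workings.

import joblib
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Train a model on one dataset and persist it ("publish" it).
train_df = pd.DataFrame({
    "x1": [1, 2, 3, 4, 5],
    "x2": [2, 1, 4, 3, 6],
    "y":  [3, 3, 7, 7, 11],
})
model = LinearRegression().fit(train_df[["x1", "x2"]], train_df["y"])
joblib.dump(model, "published_model.pkl")  # hypothetical file name

# Validate the published model on a different dataset.
validation_df = pd.DataFrame({
    "x1": [6, 7, 8],
    "x2": [5, 8, 7],
    "y":  [11, 15, 15],
})
published = joblib.load("published_model.pkl")
predictions = published.predict(validation_df[["x1", "x2"]])  # independent variables
print("R^2 on the new dataset:", r2_score(validation_df["y"], predictions))  # dependent variable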
For information on publishing a model, refer to Publishing a Model.
To validate the published model on a different dataset, follow the steps given below.
1. In the right pane, click Validate.
2. Select the Dependent Variable and the Independent Variables for the dataset.
3. Click Run.
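After running validation for each version of a published model, you can compare the results and keep the version that performs better for the selected dataset. The fragment below is a generic sketch of that comparison, again using scikit-learn rather than Rubiscape's own tooling: the two "versions" are simply two different regressors, and the datasets and metric are made up for illustration.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Illustrative training and validation datasets (assumed, not from Rubiscape).
train_df = pd.DataFrame({"x": range(20), "y": [2 * v + 1 for v in range(20)]})
valid_df = pd.DataFrame({"x": range(20, 30), "y": [2 * v + 1 for v in range(20, 30)]})

# Two hypothetical "versions" of the model.
versions = {
    "v1": LinearRegression(),
    "v2": RandomForestRegressor(n_estimators=50, random_state=0),
}

# Validate each version on the same dataset and record its error.
scores = {}
for name, model in versions.items():
    model.fit(train_df[["x"]], train_df["y"])
    preds = model.predict(valid_df[["x"]])
    scores[name] = mean_absolute_error(valid_df["y"], preds)

# Keep the version with the lower validation error.
best = min(scores, key=scores.get)
print(scores, "-> better version:", best)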