Decision Tree is located under Machine Learning, in the Classification section of the task pane on the left. Drag and drop the algorithm onto the canvas to use it. Click the algorithm to view and select different properties for analysis.
Refer to Properties of Decision Tree.
The available properties of Decision Tree are as shown in the figure given below.
The table given below describes the different fields present in the properties of Decision Tree.
Field | Description | Remark |
---|---|---|
Run | It allows you to run the node. | - |
Explore | It allows you to explore the successfully executed node. | - |
Vertical Ellipses | It displays the available options for the node. | - |
Task Name | It displays the name of the selected task. | You can click the text field to edit or modify the name of the task as required. |
Dependent Variable | It allows you to select the dependent variable. | Any one of the available categorical variables can be selected. |
Independent Variable | It allows you to select the experimental or predictor variable(s). | - |
Criterion | It allows you to select the criterion used to measure the quality of a split. | Values are: gini (Gini impurity) and entropy (information gain). |
Maximum Features | It allows you to select the number of features to consider when looking for the best split. | Values are: Auto, sqrt, log2, and None. |
Splitter | It allows you to select the strategy used to choose the split at each node. | Values are: Best - It chooses the best split among the candidate features. Random - It chooses the best random split. |
Maximum Depth | It allows you to set the maximum depth of the decision tree. | A deeper tree can fit the training data more closely, but it takes more time and computation power and is more likely to overfit. So, it is advisable to choose an optimum depth. |
Random State | It allows you to set the seed for the random number generator used while building the tree. | Setting a fixed value makes the results reproducible across runs. |
Dimensionality Reduction | It is the process of reducing the number of input variables before training the model. | Any one of the available options can be selected. |
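The properties above mirror the standard hyperparameters of a decision tree classifier. As a point of reference only, the sketch below shows how a comparable configuration could be expressed with scikit-learn's DecisionTreeClassifier; the Iris dataset, the PCA step used for dimensionality reduction, and all parameter values are illustrative assumptions, not the platform's documented implementation.

```python
# A minimal sketch, assuming the node behaves like scikit-learn's DecisionTreeClassifier.
# The Iris dataset, the PCA step, and all parameter values are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

# Independent variables (X) and the categorical dependent variable (y).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = Pipeline([
    ("reduce", PCA(n_components=2)),      # Dimensionality Reduction (optional step)
    ("tree", DecisionTreeClassifier(
        criterion="gini",                 # Criterion: "gini" or "entropy"
        splitter="best",                  # Splitter: "best" or "random"
        max_depth=4,                      # Maximum Depth: cap depth to limit overfitting
        max_features=None,                # Maximum Features: "sqrt", "log2", or None
        random_state=42,                  # Random State: seed for reproducible results
    )),
])

model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))
```

Running the sketch prints the accuracy on the held-out test split; lowering Maximum Depth typically reduces overfitting at the cost of training accuracy.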