Learning Algorithm
Each tree is constructed using the following algorithm:
- Let the number of training cases be N, and the number of variables in the classifier be M.
- A number m of input variables is specified in advance and used to determine the decision at each node of the tree; m should be much less than M.
- Choose a training set for this tree by sampling N times with replacement from all N available training cases (i.e. take a bootstrap sample). Use the remaining (out-of-bag) cases to estimate the error of the tree, by predicting their classes.
- For each node of the tree, randomly choose m variables on which to base the decision at that node. Calculate the best split based on these m variables in the training set.
- Each tree is fully grown and not pruned (as may be done in constructing a normal tree classifier).
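The steps above can be sketched in pure Python. This is an illustrative toy, not a production implementation: the helper names (`build_tree`, `best_split`, `train_forest`), the use of Gini impurity to score the "best split", and the tuple encoding of tree nodes are all assumptions made for this sketch, not part of the text.

```python
# Toy random-forest training sketch following the steps listed above.
# Assumptions: numeric features, hashable class labels, Gini impurity
# as the split criterion. All function names here are hypothetical.
import random
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(X, y, feature_idx):
    """Best (feature, threshold) among the m randomly chosen features."""
    best = None  # (weighted impurity, feature, threshold)
    for f in feature_idx:
        for t in sorted({row[f] for row in X}):
            left = [lab for row, lab in zip(X, y) if row[f] <= t]
            right = [lab for row, lab in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

def build_tree(X, y, m):
    """Grow the tree fully, with no pruning, choosing m features per node."""
    if len(set(y)) == 1:
        return y[0]                      # pure leaf: a single class label
    M = len(X[0])
    split = best_split(X, y, random.sample(range(M), m))
    if split is None:                    # no informative split available
        return Counter(y).most_common(1)[0][0]
    _, f, t = split
    li = [i for i in range(len(X)) if X[i][f] <= t]
    ri = [i for i in range(len(X)) if X[i][f] > t]
    return (f, t,
            build_tree([X[i] for i in li], [y[i] for i in li], m),
            build_tree([X[i] for i in ri], [y[i] for i in ri], m))

def train_forest(X, y, n_trees, m):
    """One bootstrap sample of size N per tree, then grow each tree fully."""
    N = len(X)
    forest = []
    for _ in range(n_trees):
        idx = [random.randrange(N) for _ in range(N)]   # sample N times with replacement
        forest.append(build_tree([X[i] for i in idx], [y[i] for i in idx], m))
    return forest
```

Each internal node is encoded here as a `(feature, threshold, left, right)` tuple and each leaf as a bare class label; a real implementation would also record the out-of-bag cases per tree to estimate its error, as the text describes.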
For prediction, a new sample is pushed down each tree and assigned the class label of the terminal node it ends up in. This procedure is repeated over all trees in the ensemble, and the majority vote across trees is reported as the random forest prediction.
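The prediction step can be sketched as follows, assuming (hypothetically, for this sketch) that each trained tree is encoded as nested `(feature, threshold, left, right)` tuples whose leaves are class labels:

```python
# Toy random-forest prediction: push the sample down each tree, then
# take the majority vote across trees. The tuple encoding of nodes is
# an assumption of this sketch, not specified by the text.
from collections import Counter

def predict_tree(tree, x):
    while isinstance(tree, tuple):       # internal node: (feature, threshold, left, right)
        f, t, left, right = tree
        tree = left if x[f] <= t else right
    return tree                          # leaf: the class label of that terminal node

def predict_forest(forest, x):
    votes = Counter(predict_tree(tree, x) for tree in forest)
    return votes.most_common(1)[0][0]    # majority vote across all trees
```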