#### Classification strength $s$

$Q(\mathbf{x},j)$ is the out-of-bag estimate of the probability $P_{\Theta}(h(\mathbf{x},\Theta)=j)$, so we can use $Q(\mathbf{x},y)$ and $Q(\mathbf{x},j)$ as estimates of $P_{\Theta}(h(\mathbf{x},\Theta)=y)$ and $P_{\Theta}(h(\mathbf{x},\Theta)=j)$ respectively.
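A minimal numpy sketch of how $Q(\mathbf{x},j)$ is accumulated from oob votes, and how the plug-in strength estimate $\hat{s} = \frac{1}{N}\sum_i \bigl[Q(\mathbf{x}_i,y_i) - \max_{j\neq y_i} Q(\mathbf{x}_i,j)\bigr]$ follows from it. This is not Breiman's code: the one-split stump stands in for a full decision tree, and all names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = (X[:, 0] > 0).astype(int)            # toy two-class problem
n, n_trees, n_classes = len(X), 100, 2

def stump_fit(Xb, yb):
    """One-split 'tree' (a stand-in for a full decision tree,
    used only to keep the sketch dependency-free)."""
    best = None
    for f in range(Xb.shape[1]):
        t = np.median(Xb[:, f])
        for sign in (1, -1):
            err = np.mean(((sign * (Xb[:, f] - t) > 0).astype(int)) != yb)
            if best is None or err < best[0]:
                best = (err, f, t, sign)
    _, f, t, sign = best
    return lambda Xq: (sign * (Xq[:, f] - t) > 0).astype(int)

votes = np.zeros((n, n_classes))          # oob votes per sample and class
oob_counts = np.zeros(n)                  # trees for which x was oob
for _ in range(n_trees):
    boot = rng.integers(0, n, size=n)     # bootstrap indices for this tree
    oob = np.setdiff1d(np.arange(n), boot)
    h = stump_fit(X[boot], y[boot])
    votes[oob, h(X[oob])] += 1            # count only oob votes
    oob_counts[oob] += 1

# Q[i, j] estimates P_Theta(h(x_i, Theta) = j)
Q = votes / np.maximum(oob_counts, 1)[:, None]

# plug-in strength estimate: mean of Q(x, y) - max_{j != y} Q(x, j)
other = np.where(np.arange(n_classes) == y[:, None], -np.inf, Q).max(axis=1)
s_hat = np.mean(Q[np.arange(n), y] - other)
```

Each row of $Q$ sums to 1 for any sample that was oob at least once, since the votes for that sample total `oob_counts`.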

#### Correlation $\overline{\rho}$

The raw margin $rmg(\Theta,\mathbf{x},y) = I\bigl(h(\mathbf{x},\Theta)=y\bigr) - I\bigl(h(\mathbf{x},\Theta)=\hat{j}(\mathbf{x},y)\bigr)$, where $\hat{j}(\mathbf{x},y)=\arg\max_{j\neq y} Q(\mathbf{x},j)$, has the following distribution law:

$$
rmg(\Theta,\mathbf{x},y)=
\begin{cases}
1, & h(\mathbf{x},\Theta)=y,\\
-1, & h(\mathbf{x},\Theta)=\hat{j}(\mathbf{x},y),\\
0, & \text{otherwise},
\end{cases}
$$

i.e. it takes the value $1$ with probability $P_{\Theta}(h(\mathbf{x},\Theta)=y)$, the value $-1$ with probability $P_{\Theta}(h(\mathbf{x},\Theta)=\hat{j}(\mathbf{x},y))$, and $0$ otherwise.
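Since $rmg$ takes only the values $1$, $-1$, and $0$, its first two moments follow directly from this distribution law; writing $p_1 = P_{\Theta}(h(\mathbf{x},\Theta)=y)$ and $p_2 = P_{\Theta}(h(\mathbf{x},\Theta)=\hat{j}(\mathbf{x},y))$:

$$
E_{\Theta}[rmg] = p_1 - p_2, \qquad E_{\Theta}[rmg^2] = p_1 + p_2,
$$

$$
\operatorname{Var}_{\Theta}(rmg) = p_1 + p_2 - (p_1 - p_2)^2 .
$$

This variance term is what enters the estimate of $\overline{\rho}$, with $p_1$ and $p_2$ replaced by their oob estimates $Q(\mathbf{x},y)$ and $Q(\mathbf{x},\hat{j}(\mathbf{x},y))$.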

#### OOB error

The oob estimate is computed as follows:

• For each sample, collect the classifications produced by the trees for which it was an oob sample (about 1/3 of the trees);
• take a simple majority vote over those trees as the sample's classification;
• the fraction of misclassified samples over the total number of samples is the random forest's oob error rate.
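The three steps above can be sketched as follows. This is a toy sketch, not Breiman's implementation: the one-split stump stands in for a full decision tree, and the data and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = (X[:, 0] > 0).astype(int)            # toy two-class problem
n, n_trees, n_classes = len(X), 100, 2

def stump_fit(Xb, yb):
    # one-split stand-in for a full tree, to keep the sketch self-contained
    best = None
    for f in range(Xb.shape[1]):
        t = np.median(Xb[:, f])
        for sign in (1, -1):
            err = np.mean(((sign * (Xb[:, f] - t) > 0).astype(int)) != yb)
            if best is None or err < best[0]:
                best = (err, f, t, sign)
    _, f, t, sign = best
    return lambda Xq: (sign * (Xq[:, f] - t) > 0).astype(int)

votes = np.zeros((n, n_classes))
for _ in range(n_trees):
    boot = rng.integers(0, n, size=n)           # bootstrap sample for tree k
    oob = np.setdiff1d(np.arange(n), boot)      # cases left out of tree k
    h = stump_fit(X[boot], y[boot])
    votes[oob, h(X[oob])] += 1                  # step 1: oob classifications

covered = votes.sum(axis=1) > 0                 # samples oob at least once
majority = votes.argmax(axis=1)                 # step 2: majority vote
oob_error = np.mean(majority[covered] != y[covered])  # step 3: error rate
```

Only trees that did not see a sample in their bootstrap set vote on it, which is what makes the resulting error rate behave like a held-out test estimate.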

Put each case left out in the construction of the kth tree down the kth tree to get a classification. In this way, a test set classification is obtained for each case in about one-third of the trees. At the end of the run, take j to be the class that got most of the votes every time case n was oob. The proportion of times that j is not equal to the true class of n averaged over all cases is the oob error estimate. This has proven to be unbiased in many tests.