An extra-trees regressor.
This class implements a meta estimator that fits a number of randomized decision trees (a.k.a. extra-trees) on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting.
Parameters:
  n_estimators : integer, optional (default=10)
  criterion : string, optional (default="mse")
  max_features : int, float, string or None, optional (default="auto")
  max_depth : integer or None, optional (default=None)
  min_samples_split : integer, optional (default=2)
  min_samples_leaf : integer, optional (default=1)
  min_weight_fraction_leaf : float, optional (default=0.)
  max_leaf_nodes : int or None, optional (default=None)
  bootstrap : boolean, optional (default=False)
  oob_score : bool
  n_jobs : integer, optional (default=1)
  random_state : int, RandomState instance or None, optional (default=None)
  verbose : int, optional (default=0)
  warm_start : bool, optional (default=False)
  output_transformer : scikit-learn transformer or None, optional (default=None)
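A minimal usage sketch, assuming the stock `sklearn.ensemble.ExtraTreesRegressor` (note that `output_transformer` above is not part of standard scikit-learn releases); the toy data below is made up for illustration:

```python
# Sketch: fit an extra-trees regressor on toy data and inspect it.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.RandomState(0)
X = rng.uniform(size=(100, 4))              # 100 samples, 4 features
y = X[:, 0] + 0.1 * rng.normal(size=100)    # target driven mostly by feature 0

est = ExtraTreesRegressor(n_estimators=10, random_state=0)
est.fit(X, y)

pred = est.predict(X[:3])                   # predictions for the first 3 samples
top = est.feature_importances_.argmax()     # feature 0 should dominate
```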
References
  [R4] P. Geurts, D. Ernst, and L. Wehenkel, "Extremely randomized trees", Machine Learning, 63(1), 3-42, 2006.
Attributes
  feature_importances_ : array, shape = [n_features]
    The feature importances (the higher, the more important the feature).
  estimators_ : list of DecisionTreeRegressor
    The collection of fitted sub-estimators.
  oob_score_ : float
    Score of the training dataset obtained using an out-of-bag estimate.
  oob_prediction_ : array of shape = [n_samples]
    Prediction computed with out-of-bag estimate on the training set.
Methods
  apply(X)
    Apply trees in the forest to X, return leaf indices.
  fit(X, y[, sample_weight])
    Build a forest of trees from the training set (X, y).
  fit_transform(X[, y])
    Fit to data, then transform it.
  get_params([deep])
    Get parameters for this estimator.
  predict(X)
    Predict regression target for X.
  score(X, y[, sample_weight])
    Returns the coefficient of determination R^2 of the prediction.
  set_params(**params)
    Set the parameters of this estimator.
  transform(*args, **kwargs)
    DEPRECATED: Support to use estimators as feature selectors will be removed in version 0.19.
apply(X)
  Apply trees in the forest to X, return leaf indices.
  Parameters:
    X : array-like or sparse matrix, shape = [n_samples, n_features]
  Returns:
    X_leaves : array-like, shape = [n_samples, n_estimators]
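A short sketch of `apply`, assuming a fitted standard scikit-learn estimator: each column of the returned array holds the leaf index each sample lands in for one tree.

```python
# Sketch: leaf indices have shape (n_samples, n_estimators).
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.RandomState(0)
X = rng.uniform(size=(50, 3))
y = rng.uniform(size=50)

est = ExtraTreesRegressor(n_estimators=5, random_state=0).fit(X, y)
leaves = est.apply(X)       # one leaf index per (sample, tree) pair
print(leaves.shape)         # (50, 5)
```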
feature_importances_
  Return the feature importances (the higher, the more important the feature).
  Returns:
    feature_importances_ : array, shape = [n_features]
fit(X, y[, sample_weight])
  Build a forest of trees from the training set (X, y).
  Parameters:
    X : array-like or sparse matrix of shape = [n_samples, n_features]
    y : array-like, shape = [n_samples] or [n_samples, n_outputs]
    sample_weight : array-like, shape = [n_samples] or None
  Returns:
    self : object
fit_transform(X[, y])
  Fit to data, then transform it.
  Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.
  Parameters:
    X : numpy array of shape [n_samples, n_features]
    y : numpy array of shape [n_samples]
  Returns:
    X_new : numpy array of shape [n_samples, n_features_new]
get_params([deep])
  Get parameters for this estimator.
  Parameters:
    deep : boolean, optional
  Returns:
    params : mapping of string to any
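A quick sketch of `get_params` together with its counterpart `set_params`, assuming standard scikit-learn behavior (`get_params` returns the constructor arguments, `set_params` returns self so calls can be chained):

```python
# Sketch: reading and updating estimator parameters.
from sklearn.ensemble import ExtraTreesRegressor

est = ExtraTreesRegressor(n_estimators=10)
params = est.get_params()                # mapping of string to any
print(params["n_estimators"])            # 10

est.set_params(n_estimators=25)          # returns self
print(est.get_params()["n_estimators"])  # 25
```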
predict(X)
  Predict regression target for X.
  The predicted regression target of an input sample is computed as the mean predicted regression targets of the trees in the forest.
  Parameters:
    X : array-like or sparse matrix of shape = [n_samples, n_features]
  Returns:
    y : array of shape = [n_samples] or [n_samples, n_outputs]
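The averaging described above can be checked by hand, assuming a standard scikit-learn forest whose fitted trees are exposed via `estimators_`:

```python
# Sketch: the forest's prediction equals the mean of per-tree predictions.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.RandomState(0)
X = rng.uniform(size=(60, 3))
y = rng.uniform(size=60)

est = ExtraTreesRegressor(n_estimators=8, random_state=0).fit(X, y)

# Average the per-tree predictions manually...
manual = np.mean([tree.predict(X) for tree in est.estimators_], axis=0)
# ...and compare with the forest's own prediction.
assert np.allclose(manual, est.predict(X))
```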
score(X, y[, sample_weight])
  Returns the coefficient of determination R^2 of the prediction.
  The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0.
  Parameters:
    X : array-like, shape = (n_samples, n_features)
    y : array-like, shape = (n_samples) or (n_samples, n_outputs)
    sample_weight : array-like, shape = [n_samples], optional
  Returns:
    score : float
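The R^2 definition above can be written out directly in numpy (a sketch, with made-up example values; `r2_score_manual` is an illustrative helper, not part of the API):

```python
# Sketch: R^2 = 1 - u/v computed by hand.
import numpy as np

def r2_score_manual(y_true, y_pred):
    """u is the residual sum of squares, v the total sum of squares."""
    u = ((y_true - y_pred) ** 2).sum()
    v = ((y_true - y_true.mean()) ** 2).sum()
    return 1.0 - u / v

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])
print(r2_score_manual(y_true, y_pred))  # ~0.9486

# A constant model predicting mean(y_true) scores exactly 0.0,
# since u equals v in that case.
print(r2_score_manual(y_true, np.full_like(y_true, y_true.mean())))
```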
set_params(**params)
  Set the parameters of this estimator.
  The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
  Returns:
    self
transform(*args, **kwargs)
  DEPRECATED: Support to use estimators as feature selectors will be removed in version 0.19. Use SelectFromModel instead.
  Reduce X to its most important features.
  Uses coef_ or feature_importances_ to determine the most important features. For models with a coef_ for each class, the absolute sum over the classes is used.
  Parameters:
    X : array or scipy sparse matrix of shape [n_samples, n_features]
  Returns:
    X_r : array of shape [n_samples, n_selected_features]
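The suggested replacement, `sklearn.feature_selection.SelectFromModel`, wraps a fitted estimator and keeps only features whose importance exceeds a threshold. A sketch with made-up data, where only the first feature actually drives the target:

```python
# Sketch: SelectFromModel as the replacement for the deprecated transform().
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.feature_selection import SelectFromModel

rng = np.random.RandomState(0)
X = rng.uniform(size=(100, 5))
y = 2.0 * X[:, 0] + 0.01 * rng.normal(size=100)  # only feature 0 matters

est = ExtraTreesRegressor(n_estimators=20, random_state=0).fit(X, y)
selector = SelectFromModel(est, prefit=True)  # uses feature_importances_
X_r = selector.transform(X)
print(X_r.shape)  # fewer columns than X, since low-importance features drop
```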