Fuses a base learner with a filter method. Creates a learner object, which can be used like any other learner object. Internally uses filterFeatures before every model fit.

After training, the selected features can be retrieved with getFilteredFeatures.

Note that observation weights do not influence the filtering and are simply passed down to the next learner.
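A minimal sketch of the basic workflow (assuming mlr is attached; the built-in "anova.test" filter is used here as an example instead of the default, which requires the randomForestSRC package):

```r
library(mlr)

# Fuse a classification tree with a filter: before each model fit,
# only the fw.abs = 2 top-scoring features are kept.
lrn = makeFilterWrapper("classif.rpart", fw.method = "anova.test", fw.abs = 2)
mod = train(lrn, iris.task)

# Which features survived the filter for this fit:
getFilteredFeatures(mod)
```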

Usage

makeFilterWrapper(learner, fw.method = "randomForestSRC.rfsrc",
  fw.perc = NULL, fw.abs = NULL, fw.threshold = NULL,
  fw.mandatory.feat = NULL, ...)



Arguments

learner
(Learner | character(1))
The learner. If you pass a string the learner will be created via makeLearner.


fw.method
(character(1))
Filter method. See listFilterMethods. Default is “randomForestSRC.rfsrc”.


fw.perc
(numeric(1))
If set, select the fw.perc*100% top-scoring features. Mutually exclusive with arguments fw.abs and fw.threshold.


fw.abs
(numeric(1))
If set, select the fw.abs top-scoring features. Mutually exclusive with arguments fw.perc and fw.threshold.


fw.threshold
(numeric(1))
If set, select only features whose score exceeds fw.threshold. Mutually exclusive with arguments fw.perc and fw.abs.


fw.mandatory.feat
(character)
Mandatory features which are always included regardless of their scores.


...
(any)
Additional parameters passed down to the filter.
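To illustrate the mutually exclusive selection arguments, here is a sketch of three equivalent-looking wrappers that differ only in how features are chosen ("anova.test" is just one method from listFilterMethods; the thresholds are arbitrary):

```r
library(mlr)

# Keep the top 50% of features by filter score:
lrn.perc = makeFilterWrapper("classif.lda", fw.method = "anova.test",
  fw.perc = 0.5)

# Keep exactly the 2 top-scoring features:
lrn.abs = makeFilterWrapper("classif.lda", fw.method = "anova.test",
  fw.abs = 2)

# Keep every feature whose score exceeds 10, but always retain
# Sepal.Width regardless of its score:
lrn.thres = makeFilterWrapper("classif.lda", fw.method = "anova.test",
  fw.threshold = 10, fw.mandatory.feat = "Sepal.Width")
```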



See also

filterFeatures, getFilteredFeatures, listFilterMethods


Examples

task = makeClassifTask(data = iris, target = "Species")
lrn = makeLearner("classif.lda")
inner = makeResampleDesc("Holdout")
outer = makeResampleDesc("CV", iters = 2)
lrn = makeFilterWrapper(lrn, fw.perc = 0.5)
mod = train(lrn, task)
#> Error: Please use column names for `x`
getFilteredFeatures(mod)
#> Error in getFilteredFeatures(mod): object 'mod' not found
# now nested resampling, where we extract the features that the filter method selected
r = resample(lrn, task, outer, extract = function(model) {
  getFilteredFeatures(model)
})
#> Resampling: cross-validation
#> Measures: mmce
#> [Resample] iter 1: 0.0400000
#> [Resample] iter 2: 0.0533333
#> Aggregated Result: mmce.test.mean=0.0466667
r$extract
#> [[1]]
#> [1] "Petal.Length" "Petal.Width"
#>
#> [[2]]
#> [1] "Petal.Length" "Petal.Width"
#>
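Because the wrapper is an ordinary Learner, the fw.* settings become hyperparameters of the fused learner and can themselves be tuned. A sketch using tuneParams (the "anova.test" filter and the grid values are arbitrary choices for illustration):

```r
library(mlr)

lrn = makeFilterWrapper("classif.lda", fw.method = "anova.test")

# Tune the percentage of retained features by grid search
# with 3-fold cross-validation:
ps = makeParamSet(makeDiscreteParam("fw.perc", values = c(0.25, 0.5, 0.75)))
res = tuneParams(lrn, iris.task, makeResampleDesc("CV", iters = 3),
  par.set = ps, control = makeTuneControlGrid())
res$x  # best fw.perc found
```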