synthetic.rfsrc.Rd
Grows a synthetic random forest (RF) using RF machines as synthetic features. Applies only to regression and classification settings.
Model to be fit. Must be specified unless object is given.
Data frame containing the y-outcome and x-variables in the model. Must be specified unless object is given.
An object of class (rfsrc, synthetic). Not required when formula and data are supplied.
Test data used for prediction (optional).
Number of trees.
mtry value for over-arching synthetic forest.
Nodesize value for over-arching synthetic forest.
nsplit-randomized splitting, used to significantly increase speed.
Sequence of mtry values used for fitting the collection of RF machines. If NULL, the default is the number of variables divided by 3, rounded up.
Sequence of nodesize values used for fitting the collection of RF machines.
Minimum forest-averaged number of nodes an RF machine must exceed in order to be used as a synthetic feature.
Use fast random forests, rfsrc.fast, in place of rfsrc? Improves speed but may be less accurate.
In addition to synthetic features, should the original features be used when fitting synthetic forests?
Missing value action. The default na.omit removes the entire record if even one of its entries is NA. The action na.impute pre-imputes the data using fast imputation via impute.rfsrc.
Preserve "out-of-bagness" so that error rates and VIMP are honest? Default is yes (oob=TRUE).
Set to TRUE for verbose output.
Further arguments to be passed to the rfsrc function used for fitting the synthetic forest.
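For orientation, the following minimal sketch (the simulated data and settings are illustrative only) shows the two calling modes described above: a grow call supplying formula and data, and a predict-style call supplying object and newdata.

library(randomForestSRC)
## illustrative simulated regression data
dta <- data.frame(x = matrix(runif(300 * 10, -1, 1), ncol = 10))
dta$y <- dta$x.1 + dta$x.2^2 + rnorm(300, sd = 0.25)
trn <- 1:200
## grow call: formula + data
o <- synthetic(y ~ ., dta[trn, ], ntree = 250, verbose = FALSE)
## predict call: object + newdata
p <- synthetic(object = o, newdata = dta[-trn, ])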
A collection of random forests is fit using different nodesize values; these forests are referred to as RF machines. The predicted values from the RF machines are then used as synthetic features to fit a synthetic random forest (the original features are also used in constructing the synthetic forest). Currently this is implemented only for regression and classification settings (univariate and multivariate).
Synthetic features are calculated using out-of-bag (OOB) data to avoid over-using training data. However, to guarantee that performance values such as error rates and VIMP are honest, bootstrap draws are fixed across all trees used in the construction of the synthetic forest and its synthetic features. The option oob=TRUE ensures that this happens. Change this option at your own peril.
If values for mtrySeq are given, RF machines are constructed for each combination of the nodesize and mtry values specified by nodesizeSeq and mtrySeq.
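A small sketch of this grid idea, using the built-in mtcars data purely for illustration:

library(randomForestSRC)
## illustrative only: with both sequences supplied, one RF machine is grown
## for each (nodesize, mtry) pair (a 3 x 2 grid here); machines failing the
## min.node requirement are dropped
o <- synthetic(mpg ~ ., mtcars,
               nodesizeSeq = c(1, 5, 10), mtrySeq = c(2, 5),
               verbose = FALSE)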
A list with the following components:
RF machines used to construct the synthetic features.
The (grow) synthetic RF built over training data.
The predict synthetic RF built over test data (if available).
List containing the synthetic features.
Optimal machine: RF machine with smallest OOB error rate.
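For example, the test-set fit can be inspected through the predict synthetic RF component (rfSynPred, as used in the examples below); the data and settings here are illustrative only:

library(randomForestSRC)
## illustrative only: a small train/test split of mtcars
trn <- 1:22
o <- synthetic(mpg ~ ., mtcars[trn, ], newdata = mtcars[-trn, ],
               nodesizeSeq = c(1, 3, 5), verbose = FALSE)
## predict synthetic RF built over the test data
print(o$rfSynPred)
## last entry of the error rate gives the test-set error of the synthetic forest
tail(o$rfSynPred$err.rate, 1)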
Ishwaran H. and Malley J.D. (2014). Synthetic learning machines. BioData Mining, 7:28.
# \donttest{
## ------------------------------------------------------------
## compare synthetic forests to regular forest (classification)
## ------------------------------------------------------------
## rfsrc and synthetic calls
if (library("mlbench", logical.return = TRUE)) {
  ## simulate the data
  ring <- data.frame(mlbench.ringnorm(250, 20))
  ## classification forests
  ringRF <- rfsrc(classes ~ ., ring)
  ## synthetic forests
  ## 1 = nodesize varied
  ## 2 = nodesize/mtry varied
  ringSyn1 <- synthetic(classes ~ ., ring)
  ringSyn2 <- synthetic(classes ~ ., ring, mtrySeq = c(1, 10, 20))
  ## test-set performance
  ring.test <- data.frame(mlbench.ringnorm(500, 20))
  pred.ringRF <- predict(ringRF, newdata = ring.test)
  pred.ringSyn1 <- synthetic(object = ringSyn1, newdata = ring.test)$rfSynPred
  pred.ringSyn2 <- synthetic(object = ringSyn2, newdata = ring.test)$rfSynPred
  print(pred.ringRF)
  print(pred.ringSyn1)
  print(pred.ringSyn2)
}
## ------------------------------------------------------------
## compare synthetic forest to regular forest (regression)
## ------------------------------------------------------------
## simulate the data
n <- 250
ntest <- 1000
N <- n + ntest
d <- 50
std <- 0.1
x <- matrix(runif(N * d, -1, 1), ncol = d)
y <- 1 * (x[,1] + x[,4]^3 + x[,9] + sin(x[,12]*x[,18]) + rnorm(N, sd = std) > .38)
dat <- data.frame(x = x, y = y)
test <- (n+1):N
## regression forests
regF <- rfsrc(y ~ ., dat[-test, ])
pred.regF <- predict(regF, dat[test, ])
## synthetic forests using fast rfsrc
synF1 <- synthetic(y ~ ., dat[-test, ], newdata = dat[test, ])
synF2 <- synthetic(y ~ ., dat[-test, ],
                   newdata = dat[test, ], mtrySeq = c(1, 10, 20, 30, 40, 50))
## standardized MSE performance
mse <- c(tail(pred.regF$err.rate, 1),
         tail(synF1$rfSynPred$err.rate, 1),
         tail(synF2$rfSynPred$err.rate, 1)) / var(y[-test])
names(mse) <- c("forest", "synthetic1", "synthetic2")
print(mse)
## ------------------------------------------------------------
## multivariate synthetic forests
## ------------------------------------------------------------
mtcars.new <- mtcars
mtcars.new$cyl <- factor(mtcars.new$cyl)
mtcars.new$carb <- factor(mtcars.new$carb, ordered = TRUE)
trn <- sample(1:nrow(mtcars.new), nrow(mtcars.new)/2)
mvSyn <- synthetic(cbind(carb, mpg, cyl) ~ ., mtcars.new[trn, ])
mvSyn.pred <- synthetic(object = mvSyn, newdata = mtcars.new[-trn, ])
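## (illustrative addition) inspect the predict synthetic RF for the
## multivariate test data via the rfSynPred component
print(mvSyn.pred$rfSynPred)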
# }