
I ran a partial least squares (PLS) regression in R and I want to extract the variables it creates so that I can use them in a decision tree, a random forest, or some other type of model.

I tried pls1$coefficients

# Split the data into two parts: PLS training (75%) and validation (25%)
set.seed(1)
samp <- sample(nrow(newdata), floor(nrow(newdata) * 0.75))
analogous.train <- newdata[samp, ]
analogous.valid <- newdata[-samp, ]

library(pls)

# First use cross-validation to find the optimal number of components
pls.model <- plsr(meanlog ~ ., data = analogous.train, validation = "CV")

# Find the number of components with the lowest cross-validation error.
# The first entry along the components axis of cv$val is the intercept-only
# model (0 components), so subtract 1 to convert the index to a component count
cv <- RMSEP(pls.model)
best.dims <- which.min(cv$val["adjCV", , ]) - 1
best.dims

# This told me that 8 components was the best

# Now fit a model with 8 components and include leave-one-out
# cross-validated predictions
pls1 <- plsr(meanlog ~ ., ncomp = best.dims, data = analogous.train,
             validation = "LOO")

# A fitted model is often used to predict the response values of new
# observations
predict(pls1, ncomp = best.dims, newdata = analogous.valid)

I want the actual variables themselves that it creates. For example, PCA creates PC1, PC2, etc. I assumed (maybe incorrectly) that PLS does the same.


1 Answer


It's in $scores. If you do

pls1$scores

you will get a matrix with a row for each observation and a column of scores for each of the 8 latent variables.
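
For example, here is a minimal sketch of the workflow you describe, feeding those scores into a random forest. It assumes the randomForest package and that meanlog is the response column in analogous.train; neither is part of the pls package itself:

library(pls)
library(randomForest)

# Training features: one column of scores per latent variable
train.scores <- as.data.frame(pls1$scores[, 1:8])
names(train.scores) <- paste0("LV", 1:8)  # tidy names shared by train/valid

# Fit a random forest on the PLS scores instead of the raw predictors
rf <- randomForest(x = train.scores, y = analogous.train$meanlog)

# Project the validation data into the same score space with
# predict(..., type = "scores"), then predict from the forest
valid.scores <- as.data.frame(predict(pls1, newdata = analogous.valid,
                                      type = "scores"))
names(valid.scores) <- paste0("LV", 1:8)
rf.preds <- predict(rf, newdata = valid.scores)

These score columns play the same role as PC1, PC2, etc. in PCA; the important part is projecting new data with the already-fitted model rather than refitting PLS on the validation set.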
