[Diagram: predictors X1-X4 each weighted into linear composites ("LinComp") that feed a new composite outcome]
Composite outcome: Y = XB + E, where B = WQ'
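The composite-outcome model Y = XB + E, with coefficients B built from the component weights W and the Y-loadings Q', can be sketched for a single component in numpy (synthetic data; all names and values here are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))                 # four predictor ratings (synthetic)
y = X @ np.array([0.5, -0.2, 0.3, 0.1]) + 0.1 * rng.normal(size=100)

Xc = X - X.mean(axis=0)                       # PLS works on centered data
yc = y - y.mean()

# One PLS component: weight vector w proportional to cov(X, y)
w = Xc.T @ yc
w /= np.linalg.norm(w)
t = Xc @ w                                    # component scores (the linear composite)
q = (t @ yc) / (t @ t)                        # Y-loading for this component

B = w * q                                     # regression coefficients: B = W Q'
fitted = Xc @ B + y.mean()                    # Y is approximated as XB + E
```

With one component, the fitted values already track the response closely because y here is driven by a single linear combination of the predictors.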
Partial Least Squares
Partial Least Squares is just like PC Regression except
in how the component scores are computed
PC regression = weights are calculated from the
covariance matrix of the predictors
PLS = weights reflect the covariance structure between
predictors and response
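That contrast can be made concrete: when the largest-variance direction in the predictors is unrelated to the response, the first PCR weight vector and the first PLS weight vector point at different predictors. A sketch in Python (synthetic data; everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))
X[:, 0] *= 3                                  # X1 has much larger variance, unrelated to y
y = 5 * X[:, 3] + 0.5 * rng.normal(size=n)    # y is driven by X4

Xc = X - X.mean(axis=0)
yc = y - y.mean()

# PCR: weights come from the predictor covariance matrix alone
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
w_pcr = eigvecs[:, -1]                        # leading eigenvector, dominated by X1

# PLS: weights come from the predictor-response covariance, w proportional to X'y
w_pls = Xc.T @ yc
w_pls /= np.linalg.norm(w_pls)                # dominated by X4, the predictor tied to y
```

The first principal component chases variance (X1), while the first PLS component chases the response (X4), which is exactly the difference the slide describes.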
While conceptually not too much of a stretch, it requires a more
complicated iterative algorithm
The NIPALS and SIMPLS algorithms are probably the most common
As in regression, the goal is to maximize the covariance
between the response(s) and the component scores
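A minimal sketch of the NIPALS iteration for one component, assuming centered X and Y; this is an illustrative implementation, not the pls package's actual code:

```python
import numpy as np

def nipals_component(X, Y, tol=1e-10, max_iter=500):
    """Extract one PLS component with NIPALS (X and Y assumed centered)."""
    u = Y[:, [0]]                        # initialize Y-scores from a response column
    for _ in range(max_iter):
        w = X.T @ u                      # X-weights: covariance with the Y-scores
        w /= np.linalg.norm(w)
        t = X @ w                        # X component scores
        q = Y.T @ t                      # Y-weights
        q /= np.linalg.norm(q)
        u_new = Y @ q                    # updated Y component scores
        if np.linalg.norm(u_new - u) < tol * np.linalg.norm(u_new):
            u = u_new
            break                        # converged
        u = u_new
    p = X.T @ t / (t ** 2).sum()         # X-loadings, used to deflate X
    return w, t, p, q

# Illustrative run on synthetic centered data
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5)); X -= X.mean(axis=0)
Y = rng.normal(size=(80, 2)); Y -= Y.mean(axis=0)
w, t, p, q = nipals_component(X, Y)
```

Subsequent components come from repeating this after deflating X (X minus t p'), which is what makes the algorithm iterative in a way PC regression is not.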
Example
Download the PCA R code again
Requires the pls package
Do consumer ratings of various beer
aspects associate with their SES?
Multiple regression
All predictors are statistically significant correlates (selected R output shown):

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.534025   0.134748   3.963 0.000101 ***
ALCOHOL     -0.004055   0.001648  -2.460 0.014686 *
AROMA        0.036402   0.001988  18.310  < 2e-16 ***
PC regression
[Partial component loadings shown: ALCOHOL -0.534 -0.110; REPUTAT 0.246 -0.221 -0.890; TASTE 0.236 0.410 0.146]
Coefficient comparison (MR vs. PC regression vs. PLS):

               MR      PCA     PLS
(Intercept)  0.534   2.500   0.964
COST        -0.002  -0.022  -0.002
SIZE        -0.044  -0.017  -0.034
ALCOHOL     -0.004  -0.018  -0.017
REPUTAT      0.014  -0.000   0.012
AROMA        0.036   0.023   0.027
COLOR        0.008   0.024   0.019
TASTE        0.036   0.022   0.031
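The three sets of coefficients above can be reproduced in spirit on synthetic data: OLS uses all predictors directly, PCR regresses the response on the top principal-component scores, and PLS regresses it on response-driven component scores. A SIMPLS-style sketch (the data and the 2-component choice are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n, p, k = 300, 5, 2                    # k = number of components retained
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)
Xc, yc = X - X.mean(axis=0), y - y.mean()

# Multiple regression: ordinary least squares on all predictors
b_mr = np.linalg.lstsq(Xc, yc, rcond=None)[0]

# PC regression: regress y on the top-k principal component scores,
# then map the coefficients back to the predictor scale
_, S, Vt = np.linalg.svd(Xc, full_matrices=False)
V = Vt[:k].T                           # top-k PCA weight vectors
b_pcr = V @ np.linalg.lstsq(Xc @ V, yc, rcond=None)[0]

# PLS: weights from the cross-product s = X'y, deflated between components
W = np.zeros((p, k))
s = Xc.T @ yc
for j in range(k):
    w = s / np.linalg.norm(s)
    t = Xc @ w
    pj = Xc.T @ t / (t @ t)            # loading for this component
    s = s - pj * (pj @ s) / (pj @ pj)  # remove the explained part of X'y
    W[:, j] = w
b_pls = W @ np.linalg.lstsq(Xc @ W, yc, rcond=None)[0]
```

As in the table, the three methods give similar but not identical coefficients; OLS always fits the training data at least as well, while PCR and PLS trade a little in-sample fit for lower-dimensional components.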
Why PLS?
PLS extends to multiple outcomes and allows for
dimension reduction
Less restrictive in terms of assumptions than MR
Distribution free
No assumption of uncorrelated (non-collinear) predictors
Independence of observations not required
Unlike PCR it creates components with an eye to the
predictor-DV relationship
Unlike Canonical Correlation, it maintains the predictive
nature of the model
While similar interpretations are possible, depending on
your research situation and goals, any of these may be a
viable analysis