
Introduction to Structural Equation Modeling for Survey Research
T. Ramayah
School of Management
Universiti Sains Malaysia
http://www.ramayah.com
Publish or Perish


Structural Equation Modeling
Structural Equation Modeling . . . is a family of
statistical models that seek to explain the relationships
among multiple variables.
It examines the structure of interrelationships
expressed in a series of equations, similar to a series
of multiple regression equations.
These equations depict all of the relationships among
constructs (the dependent and independent variables)
involved in the analysis.
Constructs are unobservable or latent factors that are
represented by multiple variables.
Called 2nd Generation Techniques

1st vs 2nd Generation Technique
Structural Equation Modeling
Distinguishing Features of SEM
Compared to 1st Generation Techniques
It takes a confirmatory rather than an
exploratory approach
Traditional methods are incapable of either
assessing or correcting for measurement
error
Traditional methods use observed
variables; SEM can use both unobserved
(latent) and observed variables
Testing in one complete model
Components of Error
An observed score comprises 3
components (Churchill, 1979)
True score
Random error (e.g., error caused by the order of
items in the questionnaire or respondent
fatigue) (Heeler & Ray, 1972)
Systematic error such as method variance (e.g.,
variance attributable to the measurement
method rather than the construct of interest)
(Bagozzi et al., 1991)
SEM
SEM, as a second-generation technique, allows the
simultaneous modeling of relationships among
multiple independent and dependent constructs
(Gefen, Straub, & Boudreau, 2000). Therefore, one
no longer differentiates between dependent and
independent variables but
distinguishes between the exogenous and endogenous
latent variables, the former being variables which are not
explained by the postulated model (i.e. act always as
independent variables) and the latter being variables that are
explained by the relationships contained in the model.
(Diamantopoulos, 1994, p. 108)
Exogenous constructs are the latent, multi-item
equivalent of independent variables. They use a
variate (linear combination) of measures to represent
the construct, which acts as an independent variable in
the model.
Multiple measured variables (x) represent the exogenous
constructs.
Endogenous constructs are the latent, multi-item
equivalent of dependent variables. These constructs
are theoretically determined by factors within the model.
Multiple measured variables (y) represent the endogenous
constructs.
Structural Equation Modeling Defined
SEM - Variations
CB-SEM (Covariance-based SEM): the
objective is to reproduce the theoretical
covariance matrix, without focusing on
explained variance.

PLS-SEM (Partial Least Squares
SEM): the objective is to maximize the
explained variance of the endogenous
latent constructs (dependent variables).


Two approaches to SEM
Covariance-based
EQS, http://www.mvsoft.com/
AMOS, http://www-01.ibm.com
SEPATH, http://www.statsoft.com
LISREL, http://www.ssicentral.com/
MPLUS, http://www.statmodel.com/
lavaan, http://lavaan.ugent.be/
Ωnyx, http://onyx.brandmaier.de/
Two approaches to SEM
Variance-based SEM
SmartPLS, http://www.smartpls.de/forum/
PLS Graph, http://www.plsgraph.com/
WarpPLS, http://www.scriptwarp.com/warppls/
Visual PLS, http://fs.mis.kuas.edu.tw/~fred/vpls/start.htm
PLS-GUI, http://www.rotman-baycrest.on.ca/index.php?section=84
SPAD-PLS, http://spadsoft.com/content/blogcategory/15/34/
GeSCA, http://www.sem-gesca.org/

Reasons for using PLS
Researchers' arguments for choosing PLS as the
statistical means for testing structural equation
models (Urbach & Ahlemann, 2010) are as follows:
PLS makes fewer demands regarding sample size than other
methods.
PLS does not require normally distributed input data.
PLS can be applied to complex structural equation models
with a large number of constructs.
PLS is able to handle both reflective and formative
constructs.
PLS is better suited for theory development than for theory
testing.
PLS is especially useful for prediction.
Choice
Overall, PLS can be an adequate alternative to CB-SEM if the
problem has the following characteristics (Chin, 1998b; Chin &
Newsted, 1999):
The phenomenon to be investigated is relatively new and
measurement models need to be newly developed,
The structural equation model is complex with a large number of
LVs and indicator variables,
Relationships between the indicators and LVs have to be
modelled in different modes (i.e., formative and reflective
measurement models),
The conditions relating to sample size, independence, or normal
distribution are not met, and/or
Prediction is more important than parameter estimation.
Choice
Comparison
[Figure: the latent construct Satisfaction measured by X1-X4 and the latent construct Intention measured by Y1-Y4]
Two Latent Constructs and the
Measured Variables
Loadings represent the relationships from constructs to
variables as in factor analysis.
Path estimates represent the relationships between
constructs as does B in regression analysis.
A Measurement Model can
be represented with Type A,
B and D
relationships.
The Structural Model
includes all types of
relationships.
Distinguishing the Types of
Relationships Involved in SEM
A. Relationship between a construct (exogenous or endogenous) and a measured variable (X or Y)
B. Relationship between a construct (exogenous) and multiple measured variables (X1, X2, X3)
C. Dependence relationship between two constructs (structural relationship): Construct 1 → Construct 2
D. Correlational relationship between constructs: Construct 1 ↔ Construct 2
Causal Inference
Hypothesizes a cause-and-
effect relationship.
Establishing Causation in Causal Modeling
1. Covariation
2. Sequence
3. Nonspurious Covariance
4. Theoretical Support
1. Covariation
Causality means a change in a cause brings a
corresponding change in an effect
Systematic covariance (correlation) between cause
and effect is necessary but not sufficient to establish
causality
In regression we test the statistical significance of
the coefficients
In SEM, statistically significant paths in the structural
model provide evidence for covariation
Structural relationships are paths for which causal
inferences are hypothesized
2. Sequence
There must be temporal sequence
An increase in advertising leads to more
sales
Advertising must come first; if sales
increase before the advertising, there
is no sequence
The domino effect

3. Nonspurious covariation
A relationship is considered spurious when
another event not included in the analysis
actually explains both the cause and effect
There is a significant relationship between
ice cream consumption and drowning
Testing Spurious Effect
[Figure: Original model: Supervisor → Job Satisfaction, path 0.50. Alternative model: Working Conditions added as a cause of both (paths 0.50 and 0.30), and the Supervisor → Job Satisfaction path drops to 0.01]
4. Theoretical Support
Simply testing a SEM model cannot establish
causality
Theory must be used to establish a causal
ordering and rationale for the observed
covariance
The justification: as employees spend more time with
their supervisors, they become familiar with their
supervision approach, which increases understanding,
and based on these experiences they become
more satisfied.

Basics of SEM Estimation
SEM explains the observed covariance among a set
of measured variables:
It does so by estimating the observed covariance matrix with
an estimated covariance matrix constructed based on the
estimated relationships among variables.
Observed Covariance Matrix (S) compared with the Estimated Covariance Matrix (Σk)
The closer these are, the better the fit. When they are equal, the fit is perfect.
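The matrix comparison above can be sketched numerically: fit reflects how close the model-implied covariance matrix comes to the observed one. A minimal illustration; both matrices below are made-up numbers, not output from any real model:

```python
import numpy as np

# Observed sample covariance matrix S (hypothetical numbers)
S = np.array([[1.00, 0.45, 0.40],
              [0.45, 1.00, 0.50],
              [0.40, 0.50, 1.00]])

# Model-implied (estimated) covariance matrix Sigma_k (hypothetical)
Sigma_k = np.array([[1.00, 0.44, 0.42],
                    [0.44, 1.00, 0.48],
                    [0.42, 0.48, 1.00]])

# Residual matrix: the closer its entries are to zero, the better the fit
residuals = S - Sigma_k

# Root mean square residual over the unique (lower-triangle) elements
rows, cols = np.tril_indices_from(residuals)
rmr = np.sqrt(np.mean(residuals[rows, cols] ** 2))
print(round(rmr, 4))  # a small value, so the two matrices are close
```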

Structural Equation Modeling Stages
Stage 1: Defining Individual Constructs
Stage 2: Developing the Overall Measurement Model
Stage 3: Designing a Study to Produce Empirical
Results
Stage 4: Assessing the Measurement Model Validity
Stage 5: Specifying the Structural Model
Stage 6: Assessing Structural Model Validity
Structural Equation Modeling
No model should be developed for use
with SEM without some underlying theory.
Theory is needed to develop both the . . .

o Measurement model specification.
o Structural model specification.
Structural Equation Modeling
Models can be represented visually with a
path diagram.
o Dependence relationships are represented
with single headed directional arrows.
o Correlational (covariance) relationships are
represented with two-headed arrows.
General Notation
Latent Variable
Observed/Measured Variable
Measurement error
Residual error
Path coefficient: measured → latent
Path coefficient: latent → measured
Stage 1: Defining Individual
Constructs
Operationalizing the Constructs
Scales from Prior Research
New Scale Development
Pretesting (See Hunt et al. 1982)
What items?
What method?
Who should do it?
Who are the subjects?
How large a sample?
Pre-testing
Pretesting (See Hunt et al. 1982)
What items?
Length, layout, format, number of lines for
replies, sequencing
Individual questions where respondents hesitate
Dummy tables and analysis (dry run)
What method?
Personal interviews, phone, and mail
Debriefing (after) or protocol (during)?


Pre-testing
Who should do it?
Best interviewers
Who are the subjects?
Respondents who are as similar as possible to the target population
Representative vs convenience
How large a sample?
Varies from 12, 20, or 30 up to 100
Other Issues
Non-Response
Common Method Variance (CMV)
Social Desirability
Missing Value Imputation
Common Method Variance
Social Desirability Measure
Fischer and Fick (1993) shortened version (X1) of
Crowne and Marlowe (1960) Social Desirability Scale
I like to gossip at times
There have been occasions where I took advantage of someone
I'm always willing to admit it when I make a mistake
I sometimes try to get even rather than forgive and forget
At times I have really insisted on having things my own way
I have never been irked when people expressed ideas very
different from my own
I have never deliberately said something that hurt someone's
feelings
Testing Common Method Variance
Harman's single-factor test


Using Social Desirability

Explanation CMV Example
We performed two tests to examine common
method bias. First, we performed an exploratory factor
analysis by entering all measurement items; the results
showed that the largest variance explained by an
individual factor was 36.14%.

Podsakoff and Organ (1986) claimed that if the
variables all load on one factor or one factor explains
the majority of the variance, common method variance
may be a problem. The results show that neither a
single factor nor a general factor accounts for the
majority of the covariance in the measures.

Explanation
Second, we performed a confirmatory factor
analysis by modelling all items as the
indicators of a single factor, and the results
showed a poor fit. Method biases are
assumed to be substantial if the hypothesized
single-factor model fits the data (Malhotra, Kim, & Patil,
2006). Thus, the results of both tests indicate
that common method bias is not a significant
problem for the current study.
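The Harman's single-factor logic described above can be sketched as follows. This is a simplified stand-in that uses principal components of the item correlation matrix rather than a full EFA; the survey data are simulated, and the 50% threshold encodes the "majority of the variance" rule of thumb:

```python
import numpy as np

def harman_single_factor_share(X):
    """Share of total variance captured by the first unrotated factor.

    X: (n_respondents, n_items) array holding ALL measurement items.
    Principal components of the correlation matrix serve as a simple
    stand-in for an unrotated factor solution.
    """
    R = np.corrcoef(X, rowvar=False)           # item correlation matrix
    eigenvalues = np.linalg.eigvalsh(R)[::-1]  # sorted, largest first
    return eigenvalues[0] / eigenvalues.sum()

# Simulated survey answers (random data, so no dominant factor expected)
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 12))

share = harman_single_factor_share(X)
# CMV is flagged as a concern when a single factor explains the
# majority (> 50%) of the total variance
cmv_concern = share > 0.50
print(round(share, 3), cmv_concern)
```

In a real study, X would be the respondents-by-items matrix of all questionnaire items entered together, as in the example explanation above.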

Missing Value Imputation
Traditional
No replacement
Mid point of the scale
Random number
Mean value of the other respondents
Mean value of the other responses
Current
FIML (full information maximum likelihood)
EM (expectation maximization)
MI (multiple imputation)
Can the validity and unidimensionality of
the constructs be supported?
How many indicators for each construct?
Is the measurement model reflective or
formative?
Stage 2: Developing the Overall
Measurement Model
Under-identified Model: 2 items
[Figure: one latent variable (VAR) with indicators X1, X2 and error terms e1, e2]
S      X1        X2
X1  var(1)    cov(1,2)
X2  cov(1,2)  var(2)
Bits of Information = [p(p + 1)] / 2
Where p = number of measured items
Just-identified Model: 3 items
[Figure: one latent variable (VAR) with indicators X1, X2, X3 and error terms e1, e2, e3]
S      X1        X2        X3
X1  var(1)    cov(1,2)  cov(1,3)
X2  cov(1,2)  var(2)    cov(2,3)
X3  cov(1,3)  cov(2,3)  var(3)
Over-identified Model: 4 items
[Figure: one latent variable (VAR) with indicators X1, X2, X3, X4 and error terms e1, e2, e3, e4]
S      X1        X2        X3        X4
X1  var(1)    cov(1,2)  cov(1,3)  cov(1,4)
X2  cov(1,2)  var(2)    cov(2,3)  cov(2,4)
X3  cov(1,3)  cov(2,3)  var(3)    cov(3,4)
X4  cov(1,4)  cov(2,4)  cov(3,4)  var(4)
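The counting logic behind these three examples can be sketched directly. The number of free parameters per model is an assumption made here for illustration (p loadings plus p error variances, with the factor variance fixed for scaling):

```python
def unique_covariances(p):
    """Unique variances/covariances among p measured items: p(p + 1)/2."""
    return p * (p + 1) // 2

def identification_status(p, free_params):
    """Classify a measurement model by its degrees of freedom."""
    df = unique_covariances(p) - free_params
    if df < 0:
        return "under-identified"
    if df == 0:
        return "just-identified"
    return "over-identified"

# One-factor models with p indicators: assume p loadings + p error
# variances are estimated, with the factor variance fixed for scaling.
# 2 items -> under-identified, 3 -> just-identified, 4 -> over-identified
for p in (2, 3, 4):
    free = 2 * p
    print(p, unique_covariances(p), identification_status(p, free))
```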
Indicators
Formative




X1 = Job loss
X2 = Divorce
X3 = Recent accident
Indicators can have +, - or
0 correlation (Hulland,
1999)
Reflective




X1 = Accommodate last minute request
X2 = Punctuality in meeting deadlines
X3 = Speed of returning phone calls
Indicators must be highly
correlated (Hulland,
1999)



[Figure: formative construct LIFE STRESS with indicators X1-X3 pointing into the construct; reflective construct TIMELINESS with arrows pointing out to its indicators X1-X3]
Reflective (Scale) Versus Formative
(Index) Operationalization of Constructs
A central research question in social science research, particularly
marketing and MIS, focuses on the operationalization of complex constructs:

Are indicators causing or being caused by
the latent variable/construct measured by them?
Construct

Indicator 1

Indicator 2 Indicator 3

Construct

Indicator 1

Indicator 2 Indicator 3

?
Changes in the latent variable
directly cause changes in the
assigned indicators
Changes in one or more of the
indicators causes changes in
the latent variable
Example: Measuring SES
[Figure: two alternative measurements of SES: one with indicators Occupation, Education, Housing, and Income; another with indicators Poverty, Crime Rate, Inflation, and Cost of Living]
Example Measuring SATISFACTION
Example: Reflective vs. Formative
World View
Drunkenness
Can't walk a straight line
Smells of alcohol
Slurred speech
Drunkenness
Consumption of beer
Consumption of wine
Consumption of hard
liquor
Example: Reflective vs. Formative
World View
View of Formative Measures
1. Composite (formative) constructs: indicators
completely determine the latent construct. They share similarities
because they define a composite variable but may or may not have
conceptual unity. In assessing validity, indicators are not
interchangeable and should not be eliminated, because removing an
indicator will likely change the nature of the latent construct.

2. Causal constructs: indicators have conceptual unity in that all
variables should correspond to the definition of the concept. In
assessing validity some of the indicators may be interchangeable, and
also can be eliminated.

Bollen, K.A. (2011), "Evaluating Effect, Composite, and Causal Indicators in
Structural Equation Models," MIS Quarterly, 35(2), 359-372.

How to Decide
Reflective Measurement Models
Direction of causality is from
construct to measure
Indicators expected to be correlated
Dropping an indicator from the
measurement model does not alter
the meaning of the construct
Takes measurement error into
account at the item level
Similar to factor analysis
Typical for management and social
science researches
Formative Measurement Models
Direction of causality is from
measure to construct
Indicators are not expected to be
correlated
Dropping an indicator from the
measurement model may alter
the meaning of the construct
No such thing as internal consistency
reliability
Based on multiple regression
Need to take care of multicollinearity
Typical for success factor research
(Diamantopoulos & Winklhofer,
2001)
Comparison between Reflective and Formative
Problems in Specification
Make constructs from measured
variables.
Draw a path diagram for the
measurement model.
Stage 2: Developing the Overall
Measurement Model
Assess the adequacy of the
sample size.
Select the estimation method
and missing data approach.
Stage 3: Designing a Study to
Produce Empirical Results
Missing Data
Options:
Complete Case or
List-Wise Deletion.
All-Available or
Pair-Wise Deletion.
Model-Based
Methods.
Guidelines in Sample Size and Missing
Data
When a model has scales borrowed
from various sources reporting other
research, a pretest using respondents
similar to those from the population to
be studied is recommended to screen
items for appropriateness.
Guidelines in Sample Size and Missing
Data
Pair-wise deletion of missing cases (all available
approach) is a good alternative for handling
missing data when the amount of missing data is
less than 10 percent and the sample size is
around 250 or more.
o As sample sizes become small or when missing data
exceeds 10 percent, one of the imputation methods for
missing data becomes a good alternative for handling
missing data.
o When the amount of missing data becomes very high
(15 percent or more), SEM may not be appropriate.
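The rules of thumb above can be collapsed into a small decision helper. This is only a sketch of the guidelines as stated on this slide; the thresholds (10%, 15%, a sample around 250) are these guidelines' rules of thumb, not universal standards:

```python
def missing_data_strategy(pct_missing, n):
    """Suggest a handling approach per the rule-of-thumb guidelines."""
    if pct_missing >= 15:
        return "reconsider SEM"          # very high missing data
    if pct_missing < 10 and n >= 250:
        return "pairwise deletion"       # all-available approach
    return "imputation (e.g., EM, FIML, MI)"

print(missing_data_strategy(5, 300))    # pairwise deletion
print(missing_data_strategy(12, 180))   # imputation (e.g., EM, FIML, MI)
print(missing_data_strategy(16, 500))   # reconsider SEM
```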
Sample Size
The minimum sample size for a particular SEM
model depends on several factors including the
model complexity and the communalities
(average variance extracted among items) in
each factor. The following guidelines are
offered:
o SEM models containing five or fewer constructs,
each with more than three items (observed
variables), and with high Item communalities (.6
or higher), can be adequately estimated with
samples as small as 100-150.
Sample Size
o When the number of factors is larger than six,
some of which have fewer than three
measured items as indicators, and multiple
low communalities are present, sample size
requirements may exceed 500.
The sample size must be sufficient to allow
the model to run, but more important, it
must adequately represent the population.
Generally 100 to 400 is acceptable
Sample Size Issues
Five considerations affecting sample size for SEM include:
1. Multivariate distribution of the data (a 15:1 ratio of respondents to parameters)
2. Estimation technique (ML estimation can be valid with samples even as small as 50)
3. Model complexity
a) More constructs, more parameters estimated
b) Constructs with fewer than 3 measured/indicator variables
c) Multigroup analysis
4. Amount of missing data, and
5. Amount of AVE among the reflective indicators.
Stage 4: Assessing Measurement
Model Validity
What is Goodness of Fit (GOF)?




Types of GOF:
o Absolute Fit Measures.
o Incremental Fit Measures.
o Parsimonious Fit Measures.
Evaluate construct validity of the measurement model.
χ² = (N - 1)(S - Σk)
where N = sample size, S = observed covariance matrix, and Σk = estimated covariance matrix

df = ½[(p)(p + 1)] - k
where p = number of measured variables and k = number of estimated parameters
Guidelines for Establishing Acceptable
and Unacceptable Fit
Use multiple indices of differing types.
Adjust the index cutoff values based on
model characteristics.
Use indices to compare models.
The pursuit of better fit at the expense
of testing a true model is not a good
trade-off.
Is the Measurement Model Valid?
No refine measures and
design a new study.
Yes proceed to test the
structural model with stages 5
and 6.
Stage 5: Specifying the Structural
Model
Stage five involves specifying the structural
model by assigning relationships from one
construct to another based on the proposed
theoretical model. That is, the dependence
relationships that exist among the constructs
representing each of the hypotheses are
specified.

The end result is to convert the measurement
model to a structural model.
Stage 5: Specifying the Structural
Model
Stage 6: Assessing the Structural
Model Validity
Assess the goodness of fit (GOF)
of the structural model.

Evaluate the significance,
direction, and size of the structural
parameter estimates.
As models become more complex, the likelihood of
alternative models with equivalent fit increases.

Multiple fit indices should be used to assess a model's
goodness of fit. They should include:
o The χ² value and the associated df
o One absolute fit index (like the GFI, RMSEA or
SRMR)
o One incremental fit index (like the CFI or TLI)
o One goodness of fit index (GFI, CFI, TLI, . . . )
o One badness of fit index (RMSEA, SRMR, . . . )
Assessing Predictive Accuracy
No single magic value for the fit indices that
separates good from poor models.

It is not practical to apply a single set of cut-off
rules that apply for all measurement models and
for that matter to all SEM models of any type.

The quality of fit depends heavily on model
characteristics including sample size and model
complexity.
Assessing Predictive Accuracy
The quality of fit depends heavily on model
characteristics, including sample size and
complexity . . .
Simple models with small samples should be held to
very strict fit standards. Even an insignificant p-value
for a very simple model may not be very meaningful.
More complex models with larger samples should not
be held to the same strict standards. Thus, when
samples are large and the model contains a large
number of measured variables and parameter
estimates, cut-off values of .95 on key GOF
measures are unrealistic.
Assessing Predictive Accuracy
Types of SEM GOF Measures
Absolute (overall) = measures overall goodness-
of-fit for both the structural and measurement
models collectively. This type of measure
does not make any comparison to a
specified null model (incremental fit measure)
or adjust for the number of parameters in
the estimated model (parsimonious fit
measure).
Types of SEM GOF Measures
Incremental (comparative) = measures
goodness-of-fit that compares the current
model to a specified null (independence)
model to determine the degree of improvement
over the null model.
Types of SEM GOF Measures
Parsimonious = measures goodness-of-fit
representing the degree of model fit per estimated
coefficient. This measure attempts to correct for
any overfitting of the model and evaluates the
parsimony of the model compared to the goodness-
of-fit.
Selecting a rigid cutoff for the fit indices is like
selecting a minimum R² for a regression
equation. Almost any value can be challenged.
Awareness of the factors affecting the values and
good judgment are the best guides to evaluating the
size of the GOF indices.
Fit Indices                                        Acceptable Values   Source
Goodness-of-Fit Index (GFI)                        ≥ 0.90              Chau & Hu (2001)
Root Mean Square Error of Approximation (RMSEA)    ≤ 0.08              Browne and Cudeck (1993)
Root Mean Square Residual (RMR)                    ≤ 0.08              Browne and Cudeck (1993)
Standardized Root Mean Residual (SRMR)             ≤ 0.08              Hu and Bentler (1999)
χ²/df                                              ≤ 3.0               Bagozzi & Yi (1988)
Absolute Fit Measures
Fit Indices                        Acceptable Values   Source
Normed Fit Index (NFI)             ≥ 0.90              Bentler and Bonett (1980)
Non-Normed Fit Index (NNFI) (TLI)  ≥ 0.90              Bentler and Bonett (1980)
Comparative Fit Index (CFI)        ≥ 0.90              Bagozzi & Yi (1988)
Relative Fit Index (RFI)           ≥ 0.90              Anderson and Gerbing (1988)
Incremental Fit Indices
Fit Indices                              Acceptable Values   Source
Adjusted Goodness-of-Fit Index (AGFI)    ≥ 0.80              Chau & Hu (2001)
Parsimony Normed Fit Index (PNFI)        ≥ 0.80
Parsimony Fit Indices
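The cutoff tables above can be encoded as a simple screening helper. The cutoffs and their directions follow the tables; as the earlier slides stress, they are conventions to be judged against model characteristics, not absolute standards:

```python
# Conventional cutoffs from the tables above; "min" means values at or
# above the cutoff are acceptable, "max" means at or below
CUTOFFS = {
    "GFI":     (0.90, "min"),
    "RMSEA":   (0.08, "max"),
    "RMR":     (0.08, "max"),
    "SRMR":    (0.08, "max"),
    "chi2/df": (3.00, "max"),
    "NFI":     (0.90, "min"),
    "TLI":     (0.90, "min"),
    "CFI":     (0.90, "min"),
    "RFI":     (0.90, "min"),
    "AGFI":    (0.80, "min"),
    "PNFI":    (0.80, "min"),
}

def screen_fit(indices):
    """Map each reported fit index to True (acceptable) or False."""
    results = {}
    for name, value in indices.items():
        cutoff, direction = CUTOFFS[name]
        results[name] = value >= cutoff if direction == "min" else value <= cutoff
    return results

# Hypothetical fit results for one model: one index of each type,
# as the multiple-indices guideline recommends
fit = {"chi2/df": 2.1, "CFI": 0.94, "RMSEA": 0.06, "SRMR": 0.05}
print(screen_fit(fit))  # every index is acceptable here
```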
Is the Structural Model Valid?
No refine model and test with new data.
Yes draw substantive conclusions and
recommendations.
Modeling Strategy
Confirmatory Modeling Strategy
Focus is on assessing the fit
Competing Models Strategy
Focus on comparing the estimated model with other
alternatives
Model Development Strategy
Basic framework is provided
Improve the framework through modifications
Re-specification
Poor Practices
Pursuit of fit
Reducing number of items per construct
Parceling of items
Separate analysis for each construct
Reducing sample size
Representativeness
Generalizability
Statistical convergence
Less accurate parameters
Lower statistical power

SEM:
Confirmatory
Factor Analysis
Confirmatory Factor Analysis
Defined
Confirmatory Factor Analysis . . . is similar
to EFA in some respects, but philosophically
it is quite different. With CFA, the
researcher must specify both the number of
factors that exist within a set of variables
and which factor each variable will load
highly on before results can be computed.

Confirmatory Factor Analysis
Defined
So the technique does not assign variables
to factors. Instead the researcher must be
able to make this assignment before any
results can be obtained.
SEM is then applied to test the extent to
which a researcher's a priori pattern of
factor loadings represents the actual data.

Review of and Contrast with Exploratory
Factor Analysis
EFA (exploratory factor analysis) explores the
data and provides the researcher with
information about how many factors are needed
to best represent the data.

With EFA, all measured variables are related to
every factor by a factor loading estimate.
Simple structure results when each measured
variable loads highly on only one factor and has
smaller loadings on other factors (i.e., loadings
< .40).

Review of and Contrast with Exploratory
Factor Analysis
The distinctive feature of EFA is that the
factors are derived from statistical results,
not from theory, and so they can only be
named after the factor analysis is performed.

EFA can be conducted without knowing
how many factors really exist or which
variables belong with which constructs.
In this respect, CFA and EFA are not the
same.

Measurement Model and Construct Validity
One of the biggest advantages of
CFA/SEM is its ability to assess the
construct validity of a proposed
measurement theory.
Construct validity . . . is the extent to
which a set of measured items actually
reflect the theoretical latent construct they
are designed to measure.

Measurement Model and Construct Validity
Construct validity is made up of four
important components:
1. Convergent validity, assessed via three approaches:
Factor loadings.
Variance extracted.
Reliability.
2. Discriminant validity
3. Nomological validity.
4. Face validity.


Assessing Measurement Model
Elements of the model are separately
evaluated based on certain quality
criteria:
Reflective measurement models
Formative measurement models
Structural model
Validation of the measurement models is a
requirement for assessing the structural model
Measurement Model
Reliability
Validity
Structural Model
Assessment of effects
Assessment of prediction quality
Effect of Errors
What is the effect of error terms on
measurement?

Xm = value as measured
Xt = true value
ε = error term
εr = random error term
εs = systematic error term

Xm = Xt + ε
Xm = Xt + εr + εs
Consequences
Assessment of Reflective Models
Internal consistency reliability
Composite reliability
Cronbach's alpha
Indicator reliability
Squared loadings
Convergent validity
Average Variance Extracted (AVE)
Discriminant Validity
Fornell-Larcker Criterion
Average Variance Extracted (AVE)
AVE should exceed 0.5 to suggest adequate
convergent validity (Bagozzi & Yi, 1988).

AVE = Σλi² / (Σλi² + Σ var(εi))
λi² = squared loading of indicator i of a latent variable
var(εi) = squared measurement error of indicator i

Composite Reliability (CR)
Composite reliability should be 0.7 or higher
to indicate adequate convergence or internal
consistency (Gefen et al., 2000).

CR = (Σλi)² / ((Σλi)² + Σ var(εi))
λi = loading of indicator i of a latent variable
var(εi) = measurement error of indicator i
i = index running across all indicators of the reflective measurement model
Cronbach's Alpha (α)
Cronbach's alpha values should be 0.7 or
higher to indicate adequate convergence or
internal consistency (Nunnally, 1978).
α = [N / (N - 1)] × (1 - Σσi² / σt²)
N = number of indicators assigned to the factor
σi² = variance of indicator i
σt² = variance of the sum of all assigned indicators' scores
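The three reliability and convergence measures above can be implemented directly from their formulas. The loadings below are hypothetical; for standardized indicators the error variance is taken as 1 − λ², an assumption made here for illustration:

```python
import numpy as np

def ave(loadings, error_vars):
    """Average Variance Extracted: sum(l^2) / (sum(l^2) + sum(var(e)))."""
    lam2 = np.square(loadings)
    return lam2.sum() / (lam2.sum() + np.sum(error_vars))

def composite_reliability(loadings, error_vars):
    """CR: (sum l)^2 / ((sum l)^2 + sum var(e))."""
    num = np.sum(loadings) ** 2
    return num / (num + np.sum(error_vars))

def cronbach_alpha(items):
    """alpha = N/(N-1) * (1 - sum of item variances / variance of total)."""
    items = np.asarray(items, dtype=float)
    n_items = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance per indicator
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed score
    return n_items / (n_items - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical standardized loadings for a 4-item reflective construct;
# with standardized indicators, var(e) = 1 - lambda^2
lam = np.array([0.80, 0.75, 0.82, 0.70])
err = 1 - lam ** 2
print(round(ave(lam, err), 3))                    # 0.591, above the 0.5 rule
print(round(composite_reliability(lam, err), 3))  # 0.852, above the 0.7 rule
```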
Discriminant Validity
Fornell & Larcker (1981) criterion
A latent variable should explain the variance
of its own indicators better than the variance of other latent
variables
The AVE of a latent variable should be higher than
the squared correlations between the latent variable
and all other variables. (Chin, 2010; Chin 1998b;
Fornell & Larcker, 1981).

Discriminant Validity
The square root of the Average Variance Extracted
(AVE) should exceed the intercorrelations of the construct
with the other constructs in the model to ensure
discriminant validity (Chin, 2010; Chin, 1998b; Fornell &
Larcker, 1981).
Example:
Discriminant Validity
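A Fornell-Larcker check can be sketched as a comparison of each construct's √AVE against its correlations with the other constructs. The AVE values and correlation matrix below are hypothetical:

```python
import numpy as np

def fornell_larcker(ave_values, corr):
    """True when every construct's sqrt(AVE) exceeds its correlations
    with all other constructs in the model."""
    sqrt_ave = np.sqrt(np.asarray(ave_values, dtype=float))
    corr = np.asarray(corr, dtype=float)
    n = len(sqrt_ave)
    for i in range(n):
        for j in range(n):
            if i != j and sqrt_ave[i] <= abs(corr[i, j]):
                return False
    return True

# Hypothetical AVEs and construct correlation matrix for three constructs
ave_vals = [0.62, 0.58, 0.55]
corr = np.array([[1.00, 0.41, 0.33],
                 [0.41, 1.00, 0.46],
                 [0.33, 0.46, 1.00]])
print(fornell_larcker(ave_vals, corr))  # discriminant validity supported
```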
Assessment of Formative Measurement
Models
Expert validity (Anderson
& Gerbing, 1991)



Sa = substantive agreement
Sv = substantive validity
Indicator relevance
Indicator significance
Multicollinearity
External Validity
[Figure: redundancy analysis: a strong and significant path from the formative construct to a reflective measure of the same construct]
Does the measure of a construct correlate highly with a second, different
measure of the construct?
Does a construct behave within a nomological net as stated by theory?
Higher order constructs
A number of recent papers have presented second
order construct models.
Higher level of abstraction?
Higher Order Constructs
When using higher order constructs, several
issues need to be considered.
The most important is the purpose of the
model:
Often the initial answer is that a good model
demonstrates that a general, more global factor
exists that explains the first order factors.
Is this second order factor expected to
mediate fully the relationship of the first order
factors when applied in a theoretical model?
There are 4 possible types of second-order constructs.
Confirmatory Factor Analysis Stages
Stage 1: Defining Individual Constructs
Stage 2: Developing the Overall Measurement Model
Stage 3: Designing a Study to Produce Empirical Results
Stage 4: Assessing the Measurement Model Validity
Stage 5: Specifying the Structural Model
Stage 6: Assessing Structural Model Validity

Note: CFA involves stages 1-4 above. SEM is stages 5 and 6.
Stage 1: Defining Individual
Constructs
List constructs that will comprise the
measurement model.

Determine if existing scales/constructs
are available or can be modified to test
your measurement model.

If existing scales/constructs are not
available, then develop new scales.
Defining Individual Constructs
All constructs must display adequate construct validity,
whether they are new scales or scales taken from
previous research. Even previously established scales
should be carefully checked for content validity.

Content validity should be of primary importance and
judged both qualitatively (e.g., experts' opinions) and
empirically (e.g., unidimensionality and convergent
validity).

A pre-test should be used to purify measures prior to
confirmatory testing.
Stage 2: Developing the Overall
Measurement Model
Key Issues . . .
Unidimensionality: no cross loadings
Congeneric measurement model: no
covariance between or within construct error
variances
Items per construct: identification
Reflective vs. formative measurement models
Stage 2: A Congeneric Measurement
Model

[Figure: congeneric measurement model: Compensation measured by X1-X4 (loadings Lx1-Lx4, error terms e1-e4) and Teamwork measured by X5-X8 (loadings Lx5-Lx8, error terms e5-e8)]
Each measured variable is related to exactly one construct.
Developing the Overall Measurement
Model
In standard CFA applications testing a
measurement theory, within and between error
covariance terms should be fixed at zero and
not estimated.

In standard CFA applications testing a
measurement theory, all measured variables
should be free to load only on one construct.
Developing the Overall Measurement
Model
Latent constructs should be indicated by at least
three measured variables, preferably four or
more. In other words, latent factors should be
statistically identified.

Formative factors are not latent and are not
validated as are conventional reflective factors.
As such, they present greater difficulties with
statistical identification and should be used
cautiously.
Key Issues . . .

Measurement scales in CFA
SEM/CFA and sampling
Specifying the model:
o Which indicators belong to each
construct?
o Setting the scale to 1 for one indicator
on each construct
Stage 3: Designing a Study to
Produce Empirical Results
Stage 4: Assessing Measurement
Model Validity
Key Issues . . .

Assessing fit GOF and path estimates
(significance and size)
Construct validity
Diagnosing problems
o Standardized residuals
o Modification indices
o Specification searches
Assessing Measurement Model Validity
Loading estimates can be statistically significant
but still be too low to qualify as a good item
(standardized loadings below |.5|). In CFA,
items with low loadings become candidates
for deletion.

Completely standardized loadings above +1.0
or below -1.0 are out of the feasible range and
can be an important indicator of some problem
with the data.
Assessing Measurement Model Validity
Typically, standardized residuals less than
|2.5| do not suggest a problem.
o Standardized residuals greater than |4.0|
suggest a potentially unacceptable degree of
error that may call for the deletion of an
offending item.
o Standardized residuals between |2.5| and
|4.0| deserve some attention, but may not
suggest any changes to the model if no other
problems are associated with those items.
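The three residual bands above translate directly into a small triage helper; the item-pair residual values below are hypothetical:

```python
def triage_standardized_residuals(residuals):
    """Sort item-pair standardized residuals into the three bands above."""
    ok, attention, problem = [], [], []
    for pair, r in residuals.items():
        if abs(r) > 4.0:
            problem.append(pair)    # candidate for item deletion
        elif abs(r) > 2.5:
            attention.append(pair)  # deserves attention
        else:
            ok.append(pair)         # does not suggest a problem
    return ok, attention, problem

# Hypothetical standardized residuals for a few item pairs
res = {("x1", "x2"): 0.8, ("x1", "x3"): -2.9, ("x2", "x4"): 4.3}
ok, attention, problem = triage_standardized_residuals(res)
print(attention)  # [('x1', 'x3')]
print(problem)    # [('x2', 'x4')]
```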
Assessing Measurement Model
Validity
The researcher should use the modification indices only
as a guideline for model improvements of those
relationships that can theoretically be justified.

Specification searches based on purely empirical
grounds are discouraged because they are inconsistent
with the theoretical basis of CFA and SEM.

CFA results suggesting more than minor modification
should be re-evaluated with a new data set (e.g., if more
than 20% of the measured variables are deleted, then
the modifications cannot be considered minor).
Measurement Theory Model (CFA)
[Path diagram: five latent constructs with their indicators — Job Satisfaction (JS1–JS5), Organizational Commitment (OC1–OC4), Attitudes toward Coworkers (AC1–AC4), Staying Intentions (SI1–SI4), and Environmental Perceptions (EP1–EP4).]
Note: Measured variables are shown as a box with labels corresponding to those shown in the HBAT questionnaire. Latent constructs are shown as ovals. Each measured variable has an error term, but the error terms are not shown. Two-headed connectors indicate covariance between constructs. One-headed connectors indicate a causal path from a construct to an indicator (measured) variable. In CFA, all connectors between constructs are two-headed covariances/correlations.
Structural Model
[Path diagram: structural model relating JS, OC, SI, EP, and AC.]
Hypotheses:
H1: EP → JS (+)
H2: EP → OC (+)
H3: AC → JS (+)
H4: AC → OC (+)
H5: JS → OC (+)
H6: JS → SI (+)
H7: OC → SI (+)
Note: observable indicator variables are not shown to simplify the model.
Moderator Variable
A moderator specifies the conditions
under which a given effect occurs, as
well as the conditions under which the
direction or strength of an effect varies.
Baron and Kenny (1986, pp. 1174, 1178)
describe a moderator variable as the
following:

Moderator Variable
A qualitative (e.g., sex, race, class) or quantitative
variable . . . that affects the direction and/or strength
of a relation between an independent or predictor
variable and a dependent or criterion variable . . . a
basic moderator effect can be presented as an
interaction between a focal independent variable and a
factor (the moderator) that specifies the appropriate
conditions for its operation . . .Moderator variables are
typically introduced when there is an unexpectedly
weak or inconsistent relation between a predictor and
a criterion variable.

Moderator Variable
A moderator variable is one that affects the
relationship between two variables, so that the
nature of the impact of the predictor on the
criterion varies according to the level or value
of the moderator (Holmbeck, 1997).

A moderator interacts with the predictor
variable in such a way as to have an impact on
the level of the dependent variable.

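Statistically, a moderator is usually modeled as an interaction (product) term. The following sketch simulates data in which the X → Y slope depends on the level of M, then recovers the interaction coefficient with ordinary least squares; all variable names and effect sizes are invented for illustration:

```python
import numpy as np

# Moderation as an interaction term: Y = b0 + b1*X + b2*M + b3*(X*M) + e.
# A non-zero b3 means the effect of X on Y varies with the level of M.
rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                 # focal predictor
m = rng.normal(size=n)                 # moderator
y = 0.4 * x + 0.2 * m + 0.5 * x * m + rng.normal(scale=0.1, size=n)

# OLS with an explicit product column for the interaction.
design = np.column_stack([np.ones(n), x, m, x * m])
coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
print("intercept, b_x, b_m, b_interaction:", np.round(coefs, 2))
```

The estimated interaction coefficient comes back close to the simulated 0.5, and the implied slope of X on Y at a given moderator value is b1 + b3 * M, which is exactly the "effect varies with the level of the moderator" idea in the text.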
Mediator Variable
A mediator specifies how (or the
mechanism by which) a given effect
occurs (Baron & Kenny, 1986; James &
Brett, 1984). Baron and Kenny (1986,
pp. 1173, 1178) describe a mediator
variable as the following:
Mediator Variable
The generative mechanism through
which the focal independent variable is
able to influence the dependent variable
of interest . . . (and) Mediation . . . is best
done in the case of a strong relation
between the predictor and criterion
variable.

Mediator Variable
Shadish and Sweeney (1991) stated that
the independent variable causes the
mediator which then causes the
outcome. Also critical is the prerequisite
that there be a significant association
between the independent variable and
the dependent variable before testing for
a mediated effect.

Mediator Effect
According to MacKinnon et al. (1995),
mediation is generally present when:

1. the IV significantly affects the mediator,
2. the IV significantly affects the DV in the absence
of the mediator,
3. the mediator has a significant unique effect on the
DV, and
4. the effect of the IV on the DV shrinks upon the
addition of the mediator to the model.

Mediator Variable
Baron and Kenny (1986) formulated the steps and
conditions for ascertaining whether full or partial
mediating effects are present in a model.
[Path diagram: Independent → Mediator → Dependent, with Path a (Independent → Mediator), Path b (Mediator → Dependent), and Path c (Independent → Dependent).]
Mediator Analysis
According to Judd and Kenny (1981), a series of
regression models should be estimated. To test for
mediation, one should estimate the three
following regression equations:
1. regressing the mediator on the independent
variable;
2. regressing the dependent variable on the
independent variable;
3. regressing the dependent variable on both the
independent variable and on the mediator.
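The three regressions above can be sketched on simulated data. Here X affects Y entirely through the mediator M (full mediation); variable names and effect sizes are invented, and `np.linalg.lstsq` stands in for a proper regression routine:

```python
import numpy as np

# Judd & Kenny's three regressions on simulated full-mediation data:
# X -> M (path a = 0.6) and M -> Y (path b = 0.7), no direct X -> Y.
rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(scale=0.5, size=n)
y = 0.7 * m + rng.normal(scale=0.5, size=n)

def ols(dv, *predictors):
    """Return OLS coefficients [intercept, slopes...] for dv."""
    X = np.column_stack([np.ones(len(dv)), *predictors])
    return np.linalg.lstsq(X, dv, rcond=None)[0]

a = ols(m, x)[1]               # 1) mediator on IV
c_total = ols(y, x)[1]         # 2) DV on IV (total effect)
c_direct, b = ols(y, x, m)[1:] # 3) DV on IV and mediator
print(f"a={a:.2f}  c_total={c_total:.2f}  b={b:.2f}  c'={c_direct:.2f}")
```

The total effect (around 0.6 × 0.7 = 0.42) shrinks to roughly zero once the mediator enters the third equation, which is the pattern described as full mediation in the slides that follow.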
Mediator Analysis
1) variations in levels of the independent variable
significantly account for variations in the presumed
mediator (i.e., Path a),

2) variations in the mediator significantly account for
variations in the dependent variable (i.e., Path b), and

3) when Paths a and b are controlled, a previously
significant relation between the independent and
dependent variables is no longer significant, with the
strongest demonstration of mediation occurring when
Path c is zero.
Mediator Analysis
Separate coefficients for each equation
should be estimated and tested.

There is no need for hierarchical or
stepwise regression or the computation
of any partial or semipartial correlations.
