
Title page

Principal ideas discussed

Most important information:

Non-Experimental Methods: Factor Analysis & Structural Validity

1. Consider the purpose of a factor analysis and the importance of structural validity
2. Understand the steps in the principal components approach to extracting factors
3. Consider the emphasis in component analysis versus that of common factor analysis
4. Understand one's decision regarding the number of factors that should be extracted
5. Note the purpose of various approaches to rotating factors
6. Consider how to obtain factor scores

Principal ideas discussed

Pew Data Example - How many dimensions?

Factor Analyses

Generally executed to explain the relationships among a large number of variables in terms of a much smaller number of constructs.

Recall that the essential nature of executing any statistical procedure is to summarize a large amount of information so that we can interpret complexity in simple terms.

Principal ideas discussed

Start: Sum the column correlations to obtain a simple sum for each column.

Correlations:

          Q.1       Q.2a      Q.2b      Q.2c      Q.11b     Q.11c     Q.11d
Q.1      1         .363**    .385**    .284**    .103**    .049*     .024
Q.2a     .363**    1         .281**    .284**    .110**    .088**    .028
Q.2b     .385**    .281**    1         .433**    .033      .009     -.034
Q.2c     .284**    .284**    .433**    1         .036     -.033     -.068**
Q.11b    .103**    .110**    .033      .036      1         .340**    .303**
Q.11c    .049*     .088**    .009     -.033      .340**    1         .430**
Q.11d    .024      .028     -.034     -.068**    .303**    .430**    1
Total    2.21      2.15      2.11      1.94      1.93      1.88      1.68

(** and * mark correlations flagged as statistically significant in the original output.)

Q1. Generally, how would you say things are these days in your life? Would you say that you are very happy, pretty happy, or not too happy?
Q.2a -- Your family life -- Are you satisfied or dissatisfied? Would you say you are VERY (dis)satisfied or SOMEWHAT (dis)satisfied?
Q.2b -- Your personal financial situation -- Are you satisfied or dissatisfied? Would you say you are VERY (dis)satisfied or SOMEWHAT (dis)satisfied?
Q.2c -- Your present housing situation -- Are you satisfied or dissatisfied? Would you say you are VERY (dis)satisfied or SOMEWHAT (dis)satisfied?
Q.11b -- Being able to live comfortably in retirement -- Is this extremely important for you, very important, somewhat important, or not too important for you?
Q.11c -- Being able to pay for your children's college education -- Is this extremely important for you, very important, somewhat important, or not too important for you?
Q.11d -- Being able to leave an inheritance for your children -- Is this extremely important for you, very important, somewhat important, or not too important for you?

Principal ideas discussed

The Total row gives each column's simple sum of correlations:

Q.1 = 2.21, Q.2a = 2.15, Q.2b = 2.11, Q.2c = 1.94, Q.11b = 1.93, Q.11c = 1.88, Q.11d = 1.68

Next: Normalize the column simple sums by dividing each by the square root of their sum of squares.

Square each column's simple sum and add those squares: 4.88 + 4.62 + 4.45 + 3.76 + 3.72 + 3.53 + 2.82 = 27.78. The square root of this sum of squares is 5.27.

Divide each simple sum by that square root (2.21 / 5.27 = .4189, and so on) to obtain the first trial characteristic vector:

.4189   .4086   .3997   .3673   .3652   .3572   .3193

Next: multiply the column elements of the original correlation matrix by this previous characteristic vector and sum the columns; this produces a trial characteristic root and another trial vector (iterative comparison).
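The iteration just described can be stated compactly in standard notation (my summary of the slides' arithmetic, not the slides' own wording; R is the 7 x 7 correlation matrix, 1 a vector of ones, v_k the k-th trial characteristic vector, lambda_1 the first eigenvalue, and a_1 the first component's loadings):

$$
v_0 = \frac{R\mathbf{1}}{\lVert R\mathbf{1}\rVert},
\qquad
v_{k+1} = \frac{R\,v_k}{\lVert R\,v_k\rVert},
\qquad
\lambda_1 = \lVert R\,v_k\rVert \ \text{at convergence},
\qquad
a_1 = \sqrt{\lambda_1}\;v_{\text{converged}} .
$$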

Principal ideas discussed

Matrix result of the 1st trial vector multiplied across the original matrix -- column sums and normalization:

Column sums:       0.888201  0.857836  0.839244  0.755027  0.697902  0.666626  0.566480
Squared sums:      0.788901  0.735882  0.704331  0.570066  0.487067  0.444391  0.320899
Sum of squares = 4.051537; square root = 2.012843 (trial characteristic root)
2nd trial vector:  0.4412    0.4262    0.4169    0.3751    0.3467    0.3312    0.2814

Next: jumping to the 13th iteration to examine for convergence of the characteristic vector -- convergence not yet acceptable:

12th trial vector: 0.4903    0.4620    0.4864    0.4457    0.2357    0.1926    0.1352
Column sums:       1.008808  0.949898  1.002890  0.919700  0.475567  0.385224  0.267293
Squared sums:      1.017694  0.902305  1.005789  0.845848  0.226164  0.148398  0.071446
Sum of squares = 4.217644; square root = 2.053690
13th trial vector: 0.4912    0.4625    0.4883    0.4478    0.2316    0.1876    0.1302

Next: jumping to the 27th iteration to examine for convergence of the characteristic vector -- convergence acceptable:

26th trial vector: 0.4950    0.4645    0.4965    0.4572    0.2124    0.1646    0.1070
Column sums:       1.017121  0.954391  1.020420  0.939621  0.435709  0.337338  0.219051
Squared sums:      1.034536  0.910862  1.041256  0.882888  0.189842  0.113797  0.047983
Sum of squares = 4.221165; square root = 2.054547 (the 1st component's eigenvalue)
27th trial vector: 0.495059  0.464526  0.496664  0.457337  0.212070  0.164191  0.106617

Each element of the converged characteristic vector is then multiplied by the square root of the eigenvalue (sqrt(2.054547) = 1.43) to obtain the 1st principal component:

1st principal component loadings:  0.71   0.666   0.712   0.656   0.304   0.235   0.153
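The extraction of the first component can be reproduced with a short NumPy sketch of the iteration described above (a minimal illustration rather than the original spreadsheet; the variable names are mine):

```python
import numpy as np

# Pew example correlation matrix: Q.1, Q.2a, Q.2b, Q.2c, Q.11b, Q.11c, Q.11d
R = np.array([
    [1.000, 0.363, 0.385, 0.284, 0.103, 0.049, 0.024],
    [0.363, 1.000, 0.281, 0.284, 0.110, 0.088, 0.028],
    [0.385, 0.281, 1.000, 0.433, 0.033, 0.009, -0.034],
    [0.284, 0.284, 0.433, 1.000, 0.036, -0.033, -0.068],
    [0.103, 0.110, 0.033, 0.036, 1.000, 0.340, 0.303],
    [0.049, 0.088, 0.009, -0.033, 0.340, 1.000, 0.430],
    [0.024, 0.028, -0.034, -0.068, 0.303, 0.430, 1.000],
])

# First trial vector: the column simple sums (2.21, 2.15, ...) divided by 5.27
v = R.sum(axis=0)
v /= np.linalg.norm(v)

# Iterate: multiply by R, renormalize, compare successive trial vectors
# (the slides reach acceptable convergence around the 27th iteration)
for _ in range(200):
    w = R @ v
    root = np.linalg.norm(w)        # trial characteristic root; the eigenvalue at convergence
    w /= root
    if np.allclose(w, v, atol=1e-7):
        break
    v = w

eigenvalue = root                    # ~2.05 for these data, matching the slides up to rounding
loadings = np.sqrt(eigenvalue) * v   # ~[.71, .67, .71, .66, .30, .24, .15]
print(eigenvalue, loadings.round(3))
```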

Principal ideas discussed

Next: compute the cross products of the first principal component loadings (0.71, 0.666, 0.712, 0.656, 0.304, 0.235, 0.153):

         q1       q2       q3       q4       q5       q6       q7
q1     0.5041   0.4729   0.5055   0.4658   0.2158   0.1669   0.1086
q2     0.4729   0.4436   0.4742   0.4369   0.2025   0.1565   0.1019
q3     0.5055   0.4742   0.5069   0.4671   0.2164   0.1673   0.1089
q4     0.4658   0.4369   0.4671   0.4303   0.1994   0.1542   0.1004
q5     0.2158   0.2025   0.2164   0.1994   0.0924   0.0714   0.0465
q6     0.1669   0.1565   0.1673   0.1542   0.0714   0.0552   0.0360
q7     0.1086   0.1019   0.1089   0.1004   0.0465   0.0360   0.0234

Next: Subtract the 1st principal component's cross-products of loadings from the original correlation matrix. In this example, the difference is the residual matrix below:

         q1        q2        q3        q4        q5        q6        q7
q1     0.4959   -0.10986  -0.12052  -0.18176  -0.11284  -0.11785  -0.08463
q2    -0.10986   0.556444 -0.19319  -0.1529   -0.09246  -0.06851  -0.0739
q3    -0.12052  -0.19319   0.493056 -0.03407  -0.18345  -0.15832  -0.14294
q4    -0.18176  -0.1529   -0.03407   0.569664 -0.16342  -0.18716  -0.16837
q5    -0.11284  -0.09246  -0.18345  -0.16342   0.907584  0.26856   0.256488
q6    -0.11785  -0.06851  -0.15832  -0.18716   0.26856   0.944775  0.394045
q7    -0.08463  -0.0739   -0.14294  -0.16837   0.256488  0.394045  0.976591

Subtract from the original matrix and start over.
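In matrix terms, the cross-product table is the outer product of the loading vector with itself, and the subtraction is the deflation step. A small sketch, continuing the earlier NumPy example (it assumes the `R` matrix defined there):

```python
import numpy as np

# 1st principal component loadings from the slides (rounded)
loadings = np.array([0.71, 0.666, 0.712, 0.656, 0.304, 0.235, 0.153])

# Cross-product matrix: outer product of the loadings with themselves,
# e.g. 0.71 * 0.71 = 0.5041, 0.71 * 0.666 = 0.4729, ...
cross_products = np.outer(loadings, loadings)

# Residual matrix: what the 1st principal component leaves unexplained.
# Its diagonal becomes 1 - loading**2, e.g. 1 - 0.71**2 = 0.4959
residual = R - cross_products
```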

Principal ideas discussed

Next: The matrix is REFLECTED. After identifying a split in the matrix, the correlations that cross the split are reflected (their signs are reversed).

In this example, the results of the reflection are shown below:

         q1        q2        q3        q4        q5        q6        q7
q1     0.4959   -0.10986  -0.12052  -0.18176   0.11284   0.11785   0.08463
q2    -0.10986   0.556444 -0.19319  -0.1529    0.09246   0.06851   0.0739
q3    -0.12052  -0.19319   0.493056 -0.03407   0.18345   0.15832   0.14294
q4    -0.18176  -0.1529   -0.03407   0.569664  0.16342   0.18716   0.16837
q5     0.11284   0.09246   0.18345   0.16342   0.907584  0.26856   0.256488
q6     0.11785   0.06851   0.15832   0.18716   0.26856   0.944775  0.394045
q7     0.08463   0.0739    0.14294   0.16837   0.256488  0.394045  0.976591

This is now the "original" matrix for the 2nd component.

Utilize the characteristic vector to iteratively seek characteristic vector convergence.
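One way to express the reflection is as a sign flip of the variables on one side of the split (a hedged sketch continuing the earlier code; here the split places q5-q7 on the reflected side, matching the table above, and `residual` comes from the previous snippet):

```python
import numpy as np

# +1 for q1-q4, -1 for q5-q7: flips the sign of every correlation that crosses the split,
# while the diagonal and the within-block values are unchanged
signs = np.array([1, 1, 1, 1, -1, -1, -1])
reflected = residual * np.outer(signs, signs)
```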

Principal ideas discussed

Begin again with the 2nd component: the same procedure is followed as with the 1st component, now treating the reflected residual matrix as the original matrix.

Summing its columns and normalizing by the square root of the sum of squared column sums gives the first trial characteristic vector for the 2nd component:

0.1063   0.0893   0.1676   0.1917   0.5286   0.5697   0.5585

Multiply the reflected matrix by this vector, sum the columns, normalize, and iterate until the characteristic vector converges.

1. Sum the columns to establish a simple sum of inter-item correlations in each column (i.e., with respect to each item)

2. Square each column's simple sum and then sum those squares to obtain a Sum of (Column-Sum) Squares

3. Compute the square root of the Sum of (Column-Sum) Squares and divide each column's simple sum of inter-item correlations by that computed square root (this square root is known as the characteristic root)

4. The results of these divisions are identified as the elements in the 2nd component's first characteristic vector (the first trial characteristic vector for the 2nd component)

5. Multiply the new "original" correlation matrix by the characteristic vector, and derive a simple sum for each of the resulting columns

6. Square each column's simple sum and then sum those squares to obtain a Sum of (Column-Sum) Squares

7. Compute the square root of the Sum of (Column-Sum) Squares and divide each column's simple sum (of inter-item correlations) by that computed square root (this square root is known as the characteristic root)

8. The results of these divisions are identified as the elements in the NEW characteristic vector (the new trial characteristic vector)

9. Return to step 5 and continue until the vector converges (a compact sketch of these steps follows the list)
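Steps 1-9 can be wrapped as a reusable function (my own illustrative wrapper, not the slides' spreadsheet; it skips the hand-calculation reflection/re-reflection because the iteration handles signs directly, so loadings may differ in sign, but not in magnitude, from the slides):

```python
import numpy as np

def extract_component(R, tol=1e-8, max_iter=500):
    """One component by the iterative procedure of steps 1-9."""
    v = R.sum(axis=0)                  # step 1: column simple sums
    v /= np.linalg.norm(v)             # steps 2-4: first trial characteristic vector
    for _ in range(max_iter):
        w = R @ v                      # step 5: matrix times trial vector, summed by column
        root = np.linalg.norm(w)       # steps 6-7: characteristic root
        w /= root                      # step 8: new trial characteristic vector
        if np.allclose(w, v, atol=tol):
            break                      # step 9: converged
        v = w
    return np.sqrt(root) * w, root     # loadings and eigenvalue

# Extract two components, deflating the matrix between extractions
# (R is the original correlation matrix from the earlier sketch)
residual = R.copy()
components = []
for _ in range(2):
    loadings, eigenvalue = extract_component(residual)
    components.append(loadings)
    residual = residual - np.outer(loadings, loadings)
```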

Principal ideas discussed

After establishing a new convergent characteristic vector for the 2nd component:

8th trial 2nd characteristic vector:  0.084738  0.039855  0.194470  0.224246  0.483420  0.577364  0.579726
Column sums:                          0.145067  0.068160  0.333265  0.384293  0.828086  0.989176  0.993286
Squared sums:                         0.021044  0.004646  0.111066  0.147681  0.685727  0.978469  0.986618
Sum of squares = 2.935251; square root = 1.713257 (the 2nd component's eigenvalue)
9th trial 2nd characteristic vector:  0.084673  0.039784  0.194521  0.224306  0.483340  0.577366  0.579765

Multiply the converged characteristic vector by the square root of the eigenvalue (sqrt(1.713257) = 1.308915) to obtain the 2nd component's factor loadings (i.e., before re-reflection).

Conceptualizing Principal Component Analysis

Principal ideas discussed

1. Components are real factors that are derived directly from the investigation's correlation matrix; they are descriptive.

2. The first component is calculated to obtain a mathematical description of all that an investigation's K variables hold in common; conceptually, this is like trying to identify a single regression line that predicts K different dimension scores from one predictor.

3. The amount of variance accounted for by the 1st principal component is the amount of variance that is common to all the variables; the amount of variance accounted for in any principal component analysis will always be largest in the first extracted principal component.

4. In subtracting this variance accounted for from the original correlation matrix, only residuals that are not accounted for by that first principal component are left over; thus, the 2nd principal component will be orthogonal to the 1st component.
Principal ideas discussed

Multiplying each element of the converged characteristic vector by the square root of the eigenvalue (1.308915) gives the 2nd component's loadings, which are then re-reflected:

Loadings before re-reflection:   0.111   0.052   0.255   0.294   0.633   0.756   0.759
2nd component (re-reflected):   -0.111  -0.052  -0.255  -0.294   0.633   0.756   0.759

The component loadings need to be re-reflected to account for the earlier reflection of the residual matrix.

This iterative procedure can continue until all variance is accounted for by the components; the number of components will never exceed the number of variables.

Conceptualizing Principal Component Analysis

Principal ideas discussed

5. The loadings of a factor indicate the correlation between the item and the factor; thus the square of a loading indicates the amount of variance in the item that is accounted for by the factor.

6. In a factor matrix, with respect to each item's row of factor loadings, the sum of the squared loadings is equivalent to the amount of variance in that item accounted for by all the extracted factors; this is sometimes reported as h2 (the communality).

7. The eigenvalue of an extracted factor, divided by the number of items in the matrix, gives the proportion of variance in all the variables that is accounted for by that factor.

8. The sum of the eigenvalues of all the extracted factors, divided by the number of items, gives the proportion of variance in the full matrix that is accounted for by all the extracted factors (a small worked sketch follows this list).
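For example, using the two components extracted above for the Pew items (a small worked sketch; the rounded loadings are taken from the slides):

```python
import numpy as np

# Loadings on the 1st and 2nd principal components (rounded values from the slides)
L = np.array([
    [ 0.710, -0.111],
    [ 0.666, -0.052],
    [ 0.712, -0.255],
    [ 0.656, -0.294],
    [ 0.304,  0.633],
    [ 0.235,  0.756],
    [ 0.153,  0.759],
])

h2 = (L ** 2).sum(axis=1)                     # communality of each item (point 6)
eigenvalues = (L ** 2).sum(axis=0)            # ~[2.05, 1.71]: sum of squared loadings per component
prop_per_factor = eigenvalues / L.shape[0]    # point 7: eigenvalue / # of items (~0.29 and ~0.24)
prop_total = eigenvalues.sum() / L.shape[0]   # point 8: ~0.54 of the total variance for both together
```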

Principal ideas discussed

Deciding how many factors to extract:

1. Eigenvalue > 1 (e.g., the SPSS default criterion)
   a. recall that the variance accounted for by a factor = eigenvalue / # of items
2. Substantive expectations about the # of factors
3. Cattell's scree plot (where does the mountain meet the plain?) -- see the sketch below
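A quick way to look at criteria 1 and 3 together (a minimal matplotlib sketch; `R` is the correlation matrix from the earlier sketches):

```python
import numpy as np
import matplotlib.pyplot as plt

eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]   # all eigenvalues, largest first

plt.plot(np.arange(1, len(eigvals) + 1), eigvals, marker='o')
plt.axhline(1.0, linestyle='--')                 # the eigenvalue > 1 line (criterion 1)
plt.xlabel('Component number')
plt.ylabel('Eigenvalue')
plt.title('Scree plot')
plt.show()
```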

Why Rotate the Factors?

As was noted in computing the principal components:
1. The first principal component seeks to explain that which all the items hold in common (a vector of communality) and, as such, accounts for the most variance of any of the factors
2. The other components are bipolar composites that are each orthogonal to all the other vectors
3. These conditions make interpretability no better than simply trying to explain the results without reducing the number of dimensions in the model
4. If all one cares about is knowing how many factors are required to account for most of the variance, principal component analysis does the work fine -- but what about interpretability?

Principal ideas discussed

Rotation of Factors

Rotation of factors provides a procedure that maximizes the interpretability of the factors.

If one considers different variables as occupying vectors in psychometric space (a Cartesian/Euclidean perspective), then one begins to see that the axes in such space can be ROTATED in an infinite number of ways around the variable vectors while the vectors remain fixed in location.

[Figure: conceptual axes rotated around fixed variable vectors]

Principal ideas discussed

Rotation of Factors

Orthogonal rotation (e.g., Varimax rotation): uncorrelated, independent dimensions; perpendicular factors, so the conceptual axes are NOT related.

Oblique rotation (e.g., Oblimin rotation): correlated, dependent dimensions; non-perpendicular factors, so the conceptual axes ARE related.

[Figure: conceptual axes rotated around fixed variable vectors]
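As one concrete possibility, an orthogonal Varimax rotation can be sketched directly in NumPy. This is a standard textbook formulation of the Kaiser varimax criterion written here for illustration, not the slides' own code; in practice one would typically use a statistics package's rotation routine (such as the Varimax option in SPSS):

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonally rotate a loading matrix toward the varimax criterion."""
    L = np.asarray(loadings, dtype=float)
    n_items, n_factors = L.shape
    T = np.eye(n_factors)                      # accumulated rotation matrix
    objective = 0.0
    for _ in range(max_iter):
        LR = L @ T
        # target matrix for the orthogonal Procrustes step of the varimax criterion
        B = L.T @ (LR ** 3 - LR @ np.diag((LR ** 2).sum(axis=0)) / n_items)
        U, s, Vt = np.linalg.svd(B)
        T = U @ Vt
        new_objective = s.sum()
        if new_objective < objective * (1 + tol):
            break                              # criterion no longer improving
        objective = new_objective
    return L @ T

# Example: rotate the two unrotated Pew components from the earlier sketches
unrotated = np.array([
    [0.710, -0.111], [0.666, -0.052], [0.712, -0.255], [0.656, -0.294],
    [0.304,  0.633], [0.235,  0.756], [0.153,  0.759],
])
rotated = varimax(unrotated)   # loadings concentrate on one factor per item (simple structure)
```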

Principal ideas discussed

Recall the Pew data example (question wordings listed earlier): Q1 (general happiness), Q.2a (satisfaction with family life), Q.2b (satisfaction with personal finances), Q.2c (satisfaction with present housing), Q.11b (importance of living comfortably in retirement), Q.11c (importance of paying for children's college education), Q.11d (importance of leaving an inheritance).

Principal ideas discussed

Rotation to Simple Structure: Principal Component with Varimax rotation

PEW Data Example (rotated loadings):

                               PresHap   FutHop
Happy at present                0.724     0.000
Satisfaction with family        0.657     0.000
Satisfaction with finance       0.750     0.000
Satisfaction with housing       0.713     0.000
Retire comfort important        0.000     0.693
Child education important       0.000     0.786
Child inheritance important     0.000     0.772

Simple structure = replicable, interpretive parsimony:

1. Each row should contain at least one zero loading
2. The minimum # of zero loadings on each factor = the # of factors
3. Every factor pair contains both zero and significant loadings
4. A large # of zero loadings on each factor
5. Only a few simultaneous significant loadings across factors

Factor Techniques (Inter- vs. Intra-Individual)

Principal ideas discussed

Layout 1: rows = Participant 1, Participant 2, Participant 3, Participant 4, ..., Participant i; columns = Var 1, Var 2, Var 3, Var 4, ..., Var k (factoring variables across different individuals).

Factor Techniques (Inter- vs. Intra-Individual)

Principal ideas discussed

Layout 2: rows = Participant 1 at occasions t1, t2, t3, t4, ..., ti; columns = Var 1, Var 2, Var 3, Var 4, ..., Var k (one individual measured repeatedly on the k variables).

Layout 3: rows = Variable 1, Variable 2, Variable 3, Variable 4, ..., Variable k; columns = Prt 1, Prt 2, Prt 3, Prt 4, ..., Prt i (the transpose of Layout 1).

Layout 4: rows = Variable 1, Variable 2, Variable 3, Variable 4, ..., Variable k; columns = Prt 1.t1, Prt 1.t2, Prt 1.t3, Prt 1.t4, ..., Prt 1.ti (the transpose of Layout 2).