Assignments

1. Discuss the situations under which Factor Analysis can be used. List the steps involved in using Factor Analysis.

When someone says casually that a set of variables seems to reflect "just one factor", there are several things they might mean that have nothing to do with factor analysis. If we word statements more carefully, it turns out that the phrase "just one factor differentiates these variables" can mean several different things, none of which corresponds to the factor-analytic conclusion that "just one factor underlies these variables".

One possible meaning of the phrase about "differentiating" is that a set of variables all correlate highly with each other but differ in their means.

A rather similar meaning can arise in a different case. Consider several tests A, B, C, D which test the same broadly conceived mental ability, but which increase in difficulty in the order listed. Then the highest correlations among the tests may be between adjacent items in this list (rAB, rBC and rCD), while the lowest correlation is between the items at opposite ends of the list (rAD). Someone who observed this pattern in the correlations might well say the tests "can be put in a simple order" or "differ in just one factor", but that conclusion has nothing to do with factor analysis. This set of tests would not contain just one common factor.

A third case of this sort may arise if variable A affects B, which affects C, which affects D, and those are the only effects linking these variables. Once again the highest correlations would be rAB, rBC and rCD, while the lowest correlation would be rAD. Someone might use the same phrases just quoted to describe this pattern of correlations; again it has nothing to do with factor analysis.

A fourth case is in a way a special case of all the previous ones: a perfect Guttman scale. A set of dichotomous items fits a Guttman scale if the items can be arranged so that a negative response to any item implies a negative response to all subsequent items, while a positive response to any item implies a positive response to all previous items. For a trivial example, consider the items

Are you above 5 feet 2 inches in height?
Are you above 5 feet 4 inches in height?
Are you above 5 feet 6 inches in height?
Etc.

To be consistent, a person answering negatively to any of these items must answer negatively to all later items, and a positive answer implies that all previous answers must be positive. For a nontrivial example consider the following questionnaire items:

Should our nation lower tariff barriers with nation B?
Should our two central banks issue a single currency?
Should our armies become one?
Should we fuse with nation B, becoming one nation?
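Whether a given set of responses actually fits a perfect Guttman scale can be checked mechanically. Below is a minimal sketch in Python, assuming a small hypothetical 0/1 response matrix whose columns are ordered from the easiest item to the hardest:

    import numpy as np

    # Hypothetical 0/1 responses: rows are respondents, columns are items
    # ordered from easiest (most often endorsed) to hardest.
    responses = np.array([
        [1, 1, 1, 0],
        [1, 1, 0, 0],
        [1, 0, 0, 0],
        [1, 1, 1, 1],
        [0, 0, 0, 0],
    ])

    def is_perfect_guttman(r):
        # A perfect Guttman pattern means every row is non-increasing:
        # once an item is answered negatively, all harder items are too.
        return bool(np.all(np.diff(r, axis=1) <= 0))

    print(is_perfect_guttman(responses))  # True for the data above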

If it turned out that these items formed a perfect Guttman scale, it would be easier to describe people's attitudes about "nation B" than if they didn't. Interestingly, when a set of items does form a Guttman scale, that does not imply that factor analysis would discover a single common factor. A Guttman scale implies that one factor differentiates a set of items (e.g., "favorableness toward cooperation with nation B"), not that one factor underlies those items.

Applying multidimensional scaling to a correlation matrix could discover all of these simple patterns of differences among variables. Thus multidimensional scaling seeks factors which differentiate variables, while factor analysis looks for factors which underlie the variables. Scaling may sometimes find simplicity where factor analysis finds none, and factor analysis may find simplicity where scaling finds none.

The steps involved in using Factor Analysis:

1. Define the problem and assemble data on the set of variables of interest.
2. Compute the correlation matrix and check that the data are suitable for factoring.
3. Choose an extraction method and extract the initial factors.
4. Decide how many factors to retain (for example, eigenvalues greater than one or a scree plot).
5. Rotate the factors (e.g., varimax) to obtain a simpler, more interpretable structure.
6. Interpret and name the factors from the pattern of loadings and, if needed, compute factor scores for further analysis.

Exploratory factor analysis (EFA) is used to uncover the underlying structure of a relatively large set of variables. The researcher's a priori assumption is that any indicator may be associated with any factor. This is the most common form of factor analysis: there is no prior theory, and one uses the factor loadings to intuit the factor structure of the data.
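As a concrete illustration of these steps, here is a minimal exploratory sketch in Python; the data are random placeholders, and scikit-learn's FactorAnalysis is just one of several tools that could be used:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)

    # Placeholder data: two latent factors generating six observed variables.
    latent = rng.normal(size=(500, 2))
    loadings_true = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0],
                              [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])
    X = latent @ loadings_true.T + 0.3 * rng.normal(size=(500, 6))

    # Step: inspect eigenvalues of the correlation matrix to decide how
    # many factors to retain (Kaiser criterion: eigenvalues > 1).
    eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    n_factors = int(np.sum(eigvals > 1.0))

    # Step: extract and rotate the factors, then inspect the loadings.
    fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
    fa.fit(X)
    print("retained factors:", n_factors)
    print("rotated loadings:\n", np.round(fa.components_.T, 2))

With real data one would also run suitability checks such as Bartlett's test of sphericity or the KMO measure before extraction.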

Confirmatory factor analysis (CFA) seeks to determine whether the number of factors and the loadings of the measured (indicator) variables on them conform to what is expected on the basis of pre-established theory. Indicator variables are selected on the basis of prior theory, and factor analysis is used to see if they load as predicted on the expected number of factors. The researcher's a priori assumption is that each factor (the number and labels of which may be specified in advance) is associated with a specified subset of indicator variables. A minimum requirement of confirmatory factor analysis is that one hypothesizes the number of factors in the model beforehand, but usually the researcher will also posit expectations about which variables will load on which factors. The researcher seeks to determine, for instance, whether measures created to represent a latent variable really belong together.
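Full confirmatory factor analysis is normally done with structural-equation-modelling software. As a much cruder stand-in, the sketch below (NumPy plus scikit-learn, hypothetical data) merely checks whether each variable loads highest on the factor a hypothesized pattern assigns it to; it is a plausibility check, not a substitute for fitting a true CFA model:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)

    # Hypothesis: variables 0-2 load on factor 1, variables 3-5 on factor 2.
    hypothesis = np.array([[1, 0], [1, 0], [1, 0],
                           [0, 1], [0, 1], [0, 1]], dtype=bool)

    # Placeholder data generated to match that structure.
    latent = rng.normal(size=(500, 2))
    X = latent @ np.where(hypothesis, 0.8, 0.0).T
    X = X + 0.3 * rng.normal(size=(500, 6))

    fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
    loadings = fa.components_.T    # shape (n_variables, n_factors)

    # Crude check: does each variable load highest (in absolute value) on
    # the factor the hypothesis assigns it to?  Factor order is arbitrary,
    # so with two factors we also accept the completely swapped labelling.
    best = np.abs(loadings).argmax(axis=1)
    target = hypothesis.argmax(axis=1)
    agreement = np.mean(best == target)
    print("pattern agreement:", max(agreement, 1.0 - agreement))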

2. Write short notes on any three of the following:

(a) Conjoint Analysis.

Conjoint analysis is a statistical technique used in market research to determine how people value the different features that make up an individual product or service. The objective of conjoint analysis is to determine what combination of a limited number of attributes is most influential on respondent choice or decision making. A controlled set of potential products or services is shown to respondents, and by analyzing how they express preferences among these products, the implicit valuation of the individual elements making up the product or service can be determined. These implicit valuations (utilities or part-worths) can be used to create market models that estimate market share, revenue and even the profitability of new designs.

Conjoint analysis originated in mathematical psychology and was developed by marketing professor Paul Green at the University of Pennsylvania. Other prominent conjoint analysis pioneers include professor V. Seenu Srinivasan of Stanford University, who developed a linear programming (LINMAP) procedure for rank-ordered data as well as a self-explicated approach; Richard Johnson (founder of Sawtooth Software), who developed the Adaptive Conjoint Analysis technique in the 1980s; and Jordan Louviere (University of Iowa), who invented and developed choice-based approaches to conjoint analysis and related techniques such as MaxDiff.
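To illustrate how part-worths can be recovered, here is a minimal sketch assuming rating-based conjoint data with dummy-coded attributes; all attribute names and numbers are hypothetical:

    import numpy as np

    # Hypothetical profiles of a product with two attributes:
    # brand (A vs. B) and price (low vs. high), dummy-coded.
    # Columns: intercept, brand_B, price_high.
    design = np.array([
        [1, 0, 0],   # brand A, low price
        [1, 0, 1],   # brand A, high price
        [1, 1, 0],   # brand B, low price
        [1, 1, 1],   # brand B, high price
    ])

    # One respondent's preference ratings for the four profiles.
    ratings = np.array([8.0, 5.0, 6.0, 2.0])

    # Ordinary least squares recovers the part-worths (utilities): each
    # coefficient says how much that attribute level shifts preference.
    part_worths, *_ = np.linalg.lstsq(design, ratings, rcond=None)
    print(dict(zip(["base", "brand_B", "price_high"],
                   np.round(part_worths, 2))))

Summing the relevant part-worths for any candidate design gives its predicted preference, which is how such models are used to estimate the appeal of new products.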

(b) Data Editing.

Data editing was one of the original methods of controlling noise in seismic data: when a seismic trace was dominated by noise, it was simply removed. For prestack data that would only be stacked, removing an offending trace would not significantly affect the stack as long as the fold was high. Even then, only traces completely overwhelmed by high-amplitude noise needed attention, since moderately bad traces generally would not affect the stacked result. In modern data processing, manual editing of traces is no longer practical, since the volume of data is so large that a processor cannot examine all of it in a reasonable time. Automatic editing is especially important for three-dimensional surveys because of the huge data volumes involved, and these large volumes have led to a variety of efforts to edit data automatically.
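One common automatic approach is to flag traces whose energy is anomalously high relative to their neighbours. A minimal sketch, assuming a 2-D array of traces and a simple RMS-amplitude threshold (the threshold factor of 3 is a hypothetical choice, not a standard):

    import numpy as np

    def edit_noisy_traces(gather, threshold=3.0):
        # Zero out traces whose RMS amplitude exceeds `threshold` times
        # the median RMS of the gather (a simple automatic edit).
        # gather: 2-D array, shape (n_traces, n_samples).
        rms = np.sqrt(np.mean(gather ** 2, axis=1))
        bad = rms > threshold * np.median(rms)
        edited = gather.copy()
        edited[bad] = 0.0   # killed traces contribute nothing to the stack
        return edited, bad

    # Hypothetical gather: 10 clean traces plus one dominated by noise.
    rng = np.random.default_rng(0)
    gather = rng.normal(scale=1.0, size=(11, 1000))
    gather[5] *= 50.0       # simulate a high-amplitude noisy trace
    _, flagged = edit_noisy_traces(gather)
    print("traces flagged for editing:", np.flatnonzero(flagged))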