
A Confirmatory Factor Analysis of the Student Adaptation to College Questionnaire


Melinda A. Taylor and Dena A. Pastor
James Madison University

Educational and Psychological Measurement, Volume 67, Number 6, December 2007, pp. 1002-1018. © 2007 Sage Publications. DOI: 10.1177/0013164406299125. Originally published online June 6, 2007. http://epm.sagepub.com, hosted at http://online.sagepub.com

The construct validity of scores on the Student Adaptation to College Questionnaire (SACQ) was examined using confirmatory factor analysis (CFA). The purpose of this study was to test the fit of the SACQ authors' proposed four-factor model using a sample of university students. Results indicated that the hypothesized model did not fit. Additional CFAs specifying one-factor models for each subscale were performed to diagnose areas of misfit, and results also indicated lack of fit. Exploratory factor analyses were then conducted, and a four-factor model, different from the model proposed by the authors, was examined to provide information for future instrument revisions. It was concluded that researchers need to return to the first stage of instrument development, which would entail examining not only the theories behind adjustment to college in greater detail but also how the current conceptualization of the SACQ relates to such theories.

Keywords: adjustment to college; validity; factor analysis; SACQ

For the past 35 years, there have been annual increases in the percentage of students attending college. From 1970 to 2000, college attendance at both 2- and 4-year institutions increased 44% (National Center for Education Statistics, 2003). It is projected that college attendance will continue to grow by 12% between now and 2012 to include 17.6 million people enrolled in college courses (National Center for Education Statistics, 2003). With increased attendance comes an increased proportion of students who might face difficulties adjusting to the college environment. It is imperative that students facing obstacles be identified so they can be provided with support services.

There are a variety of ways in which to go about identifying students who are having trouble adjusting to college. For instance, adjustment may be measured by acquiring students' self-reports of their attachment to a university, participation in campus activities, psychological well-being, and academic standing. Most researchers who study adjustment would advocate that all such indicators be used simultaneously so a more comprehensive picture of a student's adjustment can be obtained (Spady, 1971; Terenzini & Pascarella, 1977; Tinto, 1996). In fact, the Student Adaptation to College Questionnaire (SACQ) is a self-report instrument created with the intention of capturing such a multifaceted view of adjustment (Baker & Siryk, 1999). In the paragraphs that follow, we describe (a) how the original and revised versions of the SACQ were created, (b) the results of internal domain studies used to assess the dimensionality of the SACQ scores, and (c) the purpose of this article, which is to assess the dimensionality of the SACQ item scores using confirmatory factor analytic techniques.

Authors' Note: Results from this study were presented at the May 2005 Annual Forum of the Association of Institutional Research in San Diego, CA. Please address correspondence to Melinda A. Taylor, North Carolina Department of Public Instruction, 301 N. Wilmington St., Raleigh, NC 27601; email: mtaylor@dpi.state.nc.us.

Development of the SACQ


The creation of the SACQ began in 1980 after Baker and Nisenbaum (1979) implemented an unsuccessful 2-year intervention program designed to aid students in the transition from high school to college. Baker and Siryk (1980) attributed the failure of the program to the voluntary participation of students, where students in need of the intervention were precisely the ones who chose not to participate. The authors determined that a better approach would be, first, to identify students in need of intervention or counseling services through the use of a measurement tool and, second, to implement the services only for such students.

To identify students at risk for transition difficulties, the authors created a 52-item self-report measure of adjustment. The only information provided for how this 52-item measure was created is as follows: "To measure success of transition, [a] self-rating scale was devised that consisted of 52 statements related to various aspects of students' adjustment to the college situation" (Baker & Siryk, 1980, p. 439). In 1984, Baker and Siryk provided brief descriptions for the subscales to which each of the 52 items uniquely belonged: Academic Adjustment (18 items), Social Adjustment (14 items), Personal-Emotional Adjustment (10 items), and General Adjustment (10 items).

In 1985, Baker, McNeil, and Siryk used a revised version of the instrument when studying freshmen's adjustment to college. The instrument used in this study was described as an "expanded version" of the original instrument (Baker et al., 1985, p. 95). This expanded version comprised 67 items, with 24, 20, and 15 items belonging to the Academic, Social, and Personal-Emotional subscales, respectively. It is unclear whether any changes to the original items were made (either in wording or in which subscale they were assigned to) or how new items were developed. Another difference between the revised and original versions was the removal of the General Adjustment subscale and the addition of an Institutional Attachment subscale. Unlike the other three subscales, this subscale was created to share eight items with the Social subscale and one with the Academic subscale. In this version, there were also two items that were not associated with any of the four subscales but instead were used along with the other items to create an overall adjustment score. The constructs measured by the four subscales of the 67-item SACQ are described in Table 1. This version represents the final version published for commercialization.

In the manual, created in 1989 and revised in 1999, the authors state that the SACQ can be used for research purposes or as a diagnostic tool to identify students with poor adjustment to college. The extent to which the SACQ is used as a diagnostic tool is unknown, but there has been an increase in the number of authors using the SACQ for research purposes, especially during the past 5 years (e.g., Hook, 2004; Meehan & Negy, 2003). Because of its widespread use and the implications involved with that use, it is necessary to continue providing validity evidence for the SACQ scores.

Based on the information provided by the authors, it appears that little theory was used in the creation and revision of the SACQ. Item creation occurred prior to any investigation of the adjustment literature, and no information was provided regarding how and why new items were developed for the commercially available 67-item version. The lack of a strong theoretical basis for the instrument is perhaps most evidenced by how the authors went about creating the Institutional Attachment subscale (Baker et al., 1985). Instead of defining institutional attachment prior to item development and citing how such a facet was important to the representation of the adjustment construct, the subscale was developed by combining items from the other subscales that were most related to attrition at one particular university. Also, the existence of two items contributing only to the full-scale score and not to any subscale is another example of how theory was not utilized during instrument development, because the authors never state why these two items are necessary to represent the construct of overall adjustment.

Dimensionality of the SACQ


After an instrument has been developed, Benson (1998) suggests pursuing the structural validation of the instrument through the use of internal domain studies. Analyses typical in this stage include computation of reliability coefficients as well as item and factor analysis. There have been several studies of the reliability of the SACQ scores (e.g., Baker et al., 1985; Baker & Siryk, 1984, 1986, 1999; Krotseng, 1992; Mooney, Sherman, & LoPresto, 1991); however, there is only one study that attempts to examine the scale's dimensionality. In the SACQ manual, Baker and Siryk submitted the intercorrelations among the subscales to a principal components analysis (PCA) to determine the legitimacy of the four-subscale structure versus the existence of a single overall adjustment construct.


Table 1 Subscale Descriptions and Means, Standard Deviations, and Internal Consistency (With Condence Intervals) of Subscale Scores
Items 3, 5, 6, 10, 13, 17, 19, 21, 23, 25, 27, 29, 32, 36, 39, 41, 43, 44, 50, 52, 54, 58, 62, 66 24 24-216 147.23 24.33 Total Number of Items M SD Possible Range a (CI) .88 (.87, .89)

Subscale

Description

Academic Adjustment

Social Adjustment 1, 4, 8, 9, 14, 16, 18, 22, 26, 30, 33, 37, 42, 46, 48, 51, 56, 57, 63, 65 20 20-180

136.66

22.51

.89 (.88, .90)

Personal-Emotional Adjustment 2, 7, 11, 12, 20, 24, 28, 31, 35, 38, 40, 45, 49, 55, 64 15

15-135

87.63

19.55

.85 (.84, .86)

Downloaded from http://epm.sagepub.com by Roberto Hernandez Sampieri on October 12, 2009

Institutional Attachment

Students success in coping with various educational demands of college (e.g., motivation to do well academically, academic performance) Students success in coping with interpersonalsocietal demands of college (e.g., social activities, relationships with others) Students intrapsychic state during adjustment to college and the degree to which he or she is experiencing psychological distress and/or any somatic problems Students degree of commitment to educational and institutional goals as well as the degree of attachment to the particular institution he or she is attending 1, 4, 15, 16, 26, 34, 36, 42, 47, 56, 57, 59, 60, 61, 65 15 15-135 107.81 17.47

.88 (.87, .89)

Note: N = 861. Italicized items are shared between the Institutional Attachment and Social Adjustment subscales, and items in bold are shared between the Institutional Attachment and Academic Adjustment subscales. The full-scale score based on all 67 items includes all items shown in the table as well as Items 53 and 67. The 95% condence interval around a was computed in the manner proposed by Fan and Thompson (2001).
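The confidence intervals in Table 1 can be reproduced from the reported alphas alone. Below is a minimal sketch of the F-distribution approach Fan and Thompson (2001) recommend (Feldt's method); the function names are ours, and the final line checks the sketch against the Academic Adjustment row of Table 1.

```python
# Sketch: Cronbach's alpha with an F-based (Feldt-style) confidence interval
# of the kind Fan and Thompson (2001) recommend reporting. Function names
# are ours; `items` is any (n_persons, k_items) score matrix.
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def alpha_ci(alpha_hat: float, n: int, k: int, level: float = 0.95):
    # Feldt's pivot: (1 - alpha_hat) / (1 - alpha) ~ F(n - 1, (n - 1)(k - 1))
    df1, df2 = n - 1, (n - 1) * (k - 1)
    f_lo = stats.f.ppf((1 - level) / 2, df1, df2)
    f_hi = stats.f.ppf(1 - (1 - level) / 2, df1, df2)
    return 1 - (1 - alpha_hat) / f_lo, 1 - (1 - alpha_hat) / f_hi

# Academic Adjustment row of Table 1: alpha = .88, N = 861, k = 24 items
print(alpha_ci(0.88, n=861, k=24))  # approximately (.87, .89)
```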


One disadvantage of this approach is that PCA is a data reduction method attempting to explain as much total variance as possible in a set of observed variables using a smaller number of components. Essentially, components are created that are simply transformations of the observed variables. Principal axis factor analysis (FA) would have been a more appropriate statistical technique because it is suitable in situations in which latent constructs or factors are thought to cause variable responses. Also, FA is advantageous over PCA in that it analyzes only the common variance that a variable shares with other variables. PCA analyzes total variance, which includes not only common variance but also specific variance that is unique to the variable as well as error. If the variables submitted to a PCA include a large amount of measurement error, the results from a PCA will likely look very different from the results using the same data and FA techniques. In fact, the results from a PCA and FA will look similar only when either the observed variables are measured with little measurement error or a large number of observed variables are used as input into the analysis (Benson & Nasser, 1998; Gorsuch, 1983, 1990).

Another major problem with the authors' use of PCA is that only intercorrelations among the subscale scores were submitted to the PCA. It would be more appropriate to submit the intercorrelations among the actual items to this sort of analysis so as to observe the strength of the association between each item and each component. By analyzing the items as opposed to the subscales, one can observe whether the items assigned to a given subscale are related to the same component. Also, one can examine whether an item is related to more than one component, which may be desirable because several of the SACQ items serve as indicators for multiple subscales.

Even if one is content with the authors' use of PCA, there are still problems with their interpretation of the PCA results. The estimation technique used (maximum likelihood) for the PCA gave a significance test for model fit that Baker and Siryk (1999) reported as rejecting a one-component model. Despite this finding, they continued to report pattern coefficients from that solution. Additionally, the authors concluded that the lack of fit for the one-component solution implied evidence in favor of the four-component solution. It could be, however, that a two-component solution or a different four-component solution would best represent the data, but these solutions were not examined by the authors. Also puzzling is the authors' use of exploratory techniques when they clearly have in mind particular structures (one-factor, four-factor) for the dimensionality of the data. Confirmatory factor analysis (CFA) would have provided them with a more rigorous test of the dimensionality of their scale.

According to Benson (1998), external domain studies should be pursued only after extensive study of the internal domain. External domain studies are used to examine if the constructs measured by an instrument relate to external variables in ways anticipated by theory. A number of external domain studies do exist for the SACQ (e.g., Conti, 2000; Hertel, 2002; Mooney et al., 1991; Schwitzer, Robbins, & McGovern, 1993), but perhaps these studies were executed prematurely because internal domain studies, particularly those investigating the dimensionality of the instrument, are lacking.
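To make the PCA-versus-factor-analysis distinction above concrete, the sketch below contrasts the two on the same correlation matrix: PCA leaves the diagonal of R at 1.0 (total variance), whereas iterated principal axis factoring replaces it with communality estimates (common variance). The data here are randomly generated stand-ins, not SACQ responses.

```python
# Sketch: PCA analyzes total variance (diagonal of R stays at 1.0), whereas
# principal axis factoring analyzes common variance (diagonal replaced by
# iteratively updated communalities). Random data stand in for real items.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
X[:, :5] += rng.normal(size=(500, 1))  # induce one common factor

R = np.corrcoef(X, rowvar=False)

def pca_loadings(R, n_comp):
    vals, vecs = np.linalg.eigh(R)
    top = np.argsort(vals)[::-1][:n_comp]
    return vecs[:, top] * np.sqrt(vals[top])

def paf_loadings(R, n_fact, n_iter=100):
    Rh = R.copy()
    h2 = 1 - 1 / np.diag(np.linalg.inv(R))  # start at squared multiple R
    for _ in range(n_iter):
        np.fill_diagonal(Rh, h2)
        vals, vecs = np.linalg.eigh(Rh)
        top = np.argsort(vals)[::-1][:n_fact]
        L = vecs[:, top] * np.sqrt(np.clip(vals[top], 0, None))
        h2 = (L ** 2).sum(axis=1)  # updated communalities
    return L

print(np.abs(pca_loadings(R, 1)[:3].ravel()).round(2))
print(np.abs(paf_loadings(R, 1)[:3].ravel()).round(2))  # excludes unique variance
```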

Purpose of the Present Study


The purpose of this study, therefore, is to examine the fit of Baker and Siryk's (1999) proposed structure of adjustment to college. Specifically, CFA is used to test the fit of the proposed four-factor structure. In addition to examining the fit of the proposed four-factor model, we are also interested in the plausibility of calculating a full-scale score from the SACQ items. Calculation of a full-scale score, in addition to four subscale scores, implies that there may be a higher order factor structure underlying responses, with a single second-order factor giving rise to four first-order factors. The notion of a full-scale score also implies that a more parsimonious, single first-order factor model may be used to describe item responses. Finally, because of the large number of items that are shared by the Social Adjustment and Institutional Attachment subscales, a three-factor model will also be tested that combines the two subscales into a single factor.
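In covariance-structure terms, these competing models can be summarized in standard CFA notation (our formalization, not the authors'). With factor variances fixed to 1 for identification, all of the models share the same implied covariance structure and differ only in the constraints placed on the factor correlation matrix:

```latex
\Sigma \;=\; \Lambda \Phi \Lambda^{\top} + \Theta, \qquad
\Phi \;=\;
\begin{cases}
\text{all } \phi_{jk} \text{ free} & \text{(four-factor)}\\
\phi_{\mathrm{Social,Attach}} = 1 & \text{(three-factor)}\\
\phi_{jk} = 1 \text{ for all } j \neq k & \text{(one-factor)}\\
\gamma \gamma^{\top} + \Psi, \quad \Psi \text{ diagonal} & \text{(higher-order)}
\end{cases}
```

Because each alternative is obtained by constraining the four-factor model's Φ, the models are nested, which is what licenses the χ² difference tests described under Method below.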

Method
Participants and Procedure
The sample used in this study consisted of 878 sophomores at a midsized southeastern university. The SACQ was administered to these students during a large-scale assessment performed annually in February to examine student learning outcomes for students with 45 to 70 credit hours. Because the sample used in this study was randomly drawn from the sophomore class, its demographic makeup mirrors that of the university population, which is 61% female and 85% Caucasian, with all other ethnicities representing less than 5% of the student population.

Materials
Participants responded to each of the 67 items using a 9-point scale ranging from 1 (applies very closely to me) to 9 (doesn't apply to me at all). See Table 1 for information regarding item assignment to subscales. To allow comparisons of nested models, we used only the 65 items that contribute to the subscale scores.
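Because several items are shared across subscales, the scoring bookkeeping is easy to get wrong. The sketch below sums item responses into subscale scores using the Table 1 assignments; the data frame and its q1 to q67 column names are hypothetical stand-ins for however the responses are actually stored.

```python
# Sketch: summing SACQ responses into subscale scores using the Table 1
# assignments. The data frame and its q1..q67 column names are hypothetical;
# Items 53 and 67 contribute only to the full-scale score.
import pandas as pd

SUBSCALES = {
    "academic": [3, 5, 6, 10, 13, 17, 19, 21, 23, 25, 27, 29, 32, 36,
                 39, 41, 43, 44, 50, 52, 54, 58, 62, 66],
    "social": [1, 4, 8, 9, 14, 16, 18, 22, 26, 30, 33, 37, 42, 46,
               48, 51, 56, 57, 63, 65],
    "personal_emotional": [2, 7, 11, 12, 20, 24, 28, 31, 35, 38, 40,
                           45, 49, 55, 64],
    # shares eight items with Social and one (Item 36) with Academic
    "institutional_attachment": [1, 4, 15, 16, 26, 34, 36, 42, 47, 56,
                                 57, 59, 60, 61, 65],
}

def score(df: pd.DataFrame) -> pd.DataFrame:
    out = pd.DataFrame(index=df.index)
    for name, items in SUBSCALES.items():
        out[name] = df[[f"q{i}" for i in items]].sum(axis=1)
    # the full scale uses all 67 items, including the two unassigned ones
    out["full_scale"] = df[[f"q{i}" for i in range(1, 68)]].sum(axis=1)
    return out
```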

Model Comparisons
Model comparisons for nested models were used to determine the best fitting and most parsimonious models. The order of models in the present study from most to least complex is the four-factor model, higher order factor model, three-factor model, and one-factor model. The latter three models are all nested within the four-factor model. It should be noted that the higher order model should be tested only if acceptable fit is found for the four-factor model and the intercorrelations among the factors imply the existence of a second-order factor.

Models that are nested can be compared using a χ² difference test, which gives information regarding whether the simpler model fits statistically significantly worse than the more complex model. It is important to note, however, that the effects of sample size on the χ² test of model fit are the same for the χ² difference test for nested models (Cheung & Rensvold, 2002; Kelloway, 1995). Specifically, large sample sizes tend to result in more Type I errors for the χ² and χ² difference tests; therefore, additional fit indices were examined for all models tested.
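The χ² difference test itself is a one-liner. A sketch, illustrated with the one-factor versus four-factor values that appear later in Table 2:

```python
# Sketch: chi-square difference test for nested models. The example values
# are the one-factor versus four-factor results reported in Table 2.
from scipy import stats

def chisq_diff(chi2_simple, df_simple, chi2_complex, df_complex):
    d_chi2 = chi2_simple - chi2_complex  # the simpler model fits worse
    d_df = df_simple - df_complex
    return d_chi2, d_df, stats.chi2.sf(d_chi2, d_df)

print(chisq_diff(14927.83, 2015, 10932.53, 2000))
# (3995.30, 15, p << .001): the one-factor model fits significantly worse
```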

Assessing Model Fit


The models tested in this study were estimated using maximum likelihood estimation (MLE). Several authors advocate the use of MLE over other methods because of its sensitivity to model misspecification; less sensitive methods can lead to higher Type II error rates (Olsson, Foss, Troye, & Howell, 2000; Olsson, Troye, & Howell, 1999).

Model fit was assessed by a number of indices. First, model fit was determined using the minimum fit function χ². As this index is extremely sensitive to sample size (Hu & Bentler, 1995), it was supplemented with additional fit indices. Two absolute fit indices are reported here: the standardized root mean square residual (SRMR) and the root mean square error of approximation (RMSEA). The SRMR is recommended by Hu and Bentler (1998) as an index to report because of its sensitivity to simple model misspecification, with acceptable model fit indicated by values less than .08. The RMSEA is particularly useful as an absolute fit index in detecting complex model misspecification, which is a likely source of misspecification in this study due to the large number of items that are assigned to multiple factors. Hu and Bentler (1998) recommend that this value not exceed .06. Finally, the comparative fit index (CFI), an incremental fit index, was also consulted. This index compares the fit of the proposed model to that of an independence model and is particularly sensitive to complex model misspecification. Hu and Bentler (1998, 1999) suggest that CFI values indicating adequate model fit should exceed .95. Currently, there are disagreements about the strict application of cutoffs (see Marsh, Hau, & Grayson, 2005; Marsh, Hau, & Wen, 2004), but in order to aid in decision making about model fit, we chose to use cutoffs in conjunction with examination of standardized residuals.
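For reference, the sketch below computes these indices from their common textbook definitions; LISREL's exact variants (e.g., which χ² it uses, N versus N − 1) differ in detail, so exact agreement with the values reported later is not guaranteed.

```python
# Sketch: common textbook definitions of the fit indices consulted here.
# S and Sigma are the observed and model-implied covariance matrices.
import numpy as np

def rmsea(chi2, df, n):
    return np.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

def cfi(chi2, df, chi2_base, df_base):
    d_model = max(chi2 - df, 0)
    d_base = max(chi2_base - df_base, 0)
    return 1 - d_model / max(d_base, d_model, 1e-12)

def srmr(S, Sigma):
    D = np.sqrt(np.outer(np.diag(S), np.diag(S)))  # standardize residuals
    resid = (S - Sigma) / D
    tri = np.tril_indices_from(resid)  # lower triangle incl. diagonal
    return np.sqrt(np.mean(resid[tri] ** 2))
```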


Results
Data Screening
The data were examined for out-of-range responses (i.e., responses greater than 9), and none were detected. Following recommendations by the authors of the instrument, any student with three or more missing responses per subscale was excluded from the data set. This exclusion, along with the deletion of a record due to the presence of a response set, resulted in a valid sample of 865 students. Missing data for the remaining students were handled using Baker and Siryk's (1999) recommendation to impute values for students with two or fewer missing responses using the mean response for that student on the subscale for which the response was missing.

The data were screened with respect to multicollinearity and outliers. Bivariate correlations, tolerance, and variance inflation values (Tabachnick & Fidell, 2001) indicated that neither bivariate nor multivariate multicollinearity was present. Although no univariate outliers were identified after inspection of the items' boxplots and histograms, four multivariate outliers were detected using Mahalanobis distance and were excluded from further analyses, resulting in a final valid sample of 861. Means and standard deviations for each subscale and measures of internal consistency (Cronbach's coefficient α) using the final sample of 861 students are presented in Table 1. Means and standard deviations are somewhat higher for this sample than those presented in the SACQ manual. Values for the internal consistency of the scores for this sample are similar to those reported in the manual.

Because MLE assumes multivariate normality of the observed variables, the data were examined with respect to univariate and multivariate normality. No items showed skew or kurtosis exceeding the cutoffs of |3| or |8| (Kline, 1998), respectively, indicating no problems with univariate nonnormality. Finally, multivariate normality was examined using Mardia's normalized multivariate kurtosis value. Bentler and Wu (1993) suggest that this value should not exceed three. The Mardia's value for this study was greater than 100, suggesting extreme multivariate kurtosis. In situations like this, it is appropriate to use the Satorra-Bentler scaled χ² and robust standard error corrections (West, Finch, & Curran, 1995). Unfortunately, the number of observed variables in this study was so large that an asymptotic covariance matrix (needed for the Satorra-Bentler corrections) could not be produced with the sample size available. Thus, MLE without the Satorra-Bentler corrections was used.
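A sketch of these screening steps follows, assuming a hypothetical data frame df with columns q1 to q67 and reusing the SUBSCALES mapping sketched earlier:

```python
# Sketch of the screening steps: the manual's subscale-mean imputation rule,
# Mahalanobis-distance outlier detection (Tabachnick & Fidell style cutoff),
# and Mardia's normalized multivariate kurtosis. `df` is hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

def impute_subscale_mean(df: pd.DataFrame, subscales: dict) -> pd.DataFrame:
    """Drop cases with 3+ missing per subscale; mean-impute 1-2 missing."""
    df = df.copy()
    for items in subscales.values():
        cols = [f"q{i}" for i in items]
        df = df.loc[df[cols].isna().sum(axis=1) < 3].copy()
        df[cols] = df[cols].apply(lambda row: row.fillna(row.mean()), axis=1)
    return df

def mahalanobis_outliers(X: np.ndarray, alpha: float = 0.001) -> np.ndarray:
    """Flag cases whose squared distance exceeds the chi-square cutoff."""
    diff = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
    return d2 > stats.chi2.ppf(1 - alpha, df=X.shape[1])

def mardia_normalized_kurtosis(X: np.ndarray) -> float:
    n, p = X.shape
    diff = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
    b2p = np.mean(d2 ** 2)  # expected value is p(p + 2) under normality
    return (b2p - p * (p + 2)) / np.sqrt(8 * p * (p + 2) / n)
```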

Model Fit
The covariance matrices submitted for analysis were produced using PRELIS, and the various models were tested using LISREL 8.54 (Jöreskog & Sörbom, 2003). All models were identified by fixing the variance of the latent variables to one. Table 2 shows results for the various models tested. The results indicate that the four-factor model did not fit the data.

Table 2
Model Fit of Originally Hypothesized Models

Model          χ² (df)            p        SRMR    RMSEA (90% CI)       CFI
Four-factor    10,932.53 (2000)   < .001   .085    .089 (.087, .090)    .91
Three-factor   12,010.36 (2011)   < .001   .087    .094 (.093, .096)    .90
One-factor     14,927.83 (2015)   < .001   .094    .120 (.120, .120)    .87

Note: SRMR = standardized root mean square residual; RMSEA = root mean square error of approximation; CI = confidence interval; CFI = comparative fit index.

The fit of the alternative models, although shown in Table 2, is not of interest because the four-factor model, a more complex model, did not fit. Also, the higher order factor model was not fit to the data because fit of the first-order factor structure was lacking. Table 2 shows that the RMSEA and CFI, which are both particularly sensitive to complex model misspecification, were the most indicative of model misfit for the four-factor model.

When models do not fit, the focus shifts from interpreting the factor solution to diagnosing model misfit. Because the most complex model, the four-factor model, did not fit, it is this model that we used to investigate areas of misfit. Standardized residuals can be examined to diagnose misfit, with values exceeding |3| indicative of model misfit (Byrne, 1998). Out of 1,007 possible standardized residuals, 642 were greater than |3|, meaning there are large differences between the observed covariance matrix and the model-implied covariance matrix. Often, a pattern of large standardized residuals can be discerned for a set of items. In this case, however, the large number of residuals exceeding the cutoff prohibited our ability to uncover any pattern that would aid in diagnosing misfit.
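LISREL's standardized residuals divide each raw residual by an estimate of its standard error. The sketch below pairs the CFA-implied covariance matrix with a common normal-theory approximation to that standard error; it approximates, but is not guaranteed to match, LISREL's exact computation.

```python
# Sketch: CFA-implied covariance matrix and normal-theory standardized
# residuals. The standard-error approximation is a common textbook one and
# only approximates what LISREL reports.
import numpy as np

def implied_cov(Lambda, Phi, Theta):
    return Lambda @ Phi @ Lambda.T + Theta

def standardized_residuals(S, Sigma, n):
    # Var(s_ij) under normality ~ (sigma_ii * sigma_jj + sigma_ij^2) / n
    se = np.sqrt((np.outer(np.diag(Sigma), np.diag(Sigma)) + Sigma ** 2) / n)
    return (S - Sigma) / se

def count_large_residuals(S, Sigma, n, cutoff=3.0):
    z = standardized_residuals(S, Sigma, n)
    tri = np.tril_indices_from(z)  # unique elements only
    return int(np.sum(np.abs(z[tri]) > cutoff))
```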

Additional Analyses
One-factor models for each subscale. Because results for the four-factor model were not useful in determining the areas of misfit, we decided to test four one-factor models corresponding to each subscale separately. The belief was that analyzing the one-factor models separately would reveal whether the problems with model fit arose from cross-loadings, highly related items, and/or certain problematic subscales.

Only the one-factor model for Personal-Emotional Adjustment showed tolerable fit (SRMR = .049), although the RMSEA and the lower bound of the 90% confidence interval for the RMSEA exceeded the recommended .06 cutoff (Hu & Bentler, 1998). Because this index is sensitive to complex model misspecification, it likely exceeds the cutoff because of the sizable percentage (17%) of standardized residuals exceeding |3|. Because the residuals are associated with a large number of items, it was concluded that the one-factor model did not do a satisfactory job of explaining the relationships among the items on the Personal-Emotional Adjustment subscale. For the remaining three one-factor models, the standardized residuals were examined in an effort to determine sources of misfit. However, the large number of standardized residuals exceeding |3| made it difficult to simply describe the results. For instance, 31%, 45%, and 42% of the standardized residuals for the Social Adjustment, Institutional Attachment, and Academic Adjustment subscales, respectively, exceeded a value of |3|.

Exploratory factor analyses (EFAs). Because examination of the individual one-factor CFA results did not adequately reveal the causes of misfit for the various models, an EFA was also used to determine if there were plausible models that could explain the relationships among the items. The data were submitted to an EFA using SPSS 12.0 (SPSS, 2003). The eigenvalues (and percentages of total variance explained) associated with the first five factors prior to rotation were 14.77 (22.73%), 4.79 (7.37%), 3.62 (5.56%), 2.74 (4.22%), and 1.85 (2.84%). Although the sizable drop in the percentage of total variance from the first to the second factor favors retention of a one-factor solution, inspection of the scree plot favors a four-factor solution, supporting the original number of factors proposed by Baker and Siryk (1999). Parallel analysis, an additional method to determine the number of factors to retain (Henson & Roberts, 2006; Thompson & Daniel, 1996), indicated retention of a seven-factor solution; a sketch of the procedure appears below. Using the results from the scree plot, percentage of variance explained, and parallel analysis, we decided to rotate and interpret models specifying between one and seven factors. Solutions specifying two or more factors were rotated for interpretability using direct oblimin rotation (δ = 0). Because these analyses were exploratory in nature, we felt that the factors should be allowed to correlate. Results of the analysis supported this judgment, with interfactor correlations ranging from −.35 to .26 for the four factors. Because simple structure was not achieved with the six- and seven-factor solutions, we focused on interpreting the more parsimonious solutions specifying one to five factors.
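Parallel analysis is simple enough to sketch in full: factors are retained while the observed eigenvalues exceed those obtained from random data of the same dimensions. The matrix X below is a stand-in for the 861-by-65 item matrix.

```python
# Sketch of Horn's parallel analysis: retain factors whose observed
# eigenvalues exceed the chosen quantile of eigenvalues from random normal
# data of the same dimensions.
import numpy as np

def parallel_analysis(X, n_sims=100, quantile=0.95, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    sim = np.empty((n_sims, p))
    for s in range(n_sims):
        Z = rng.normal(size=(n, p))
        sim[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
    threshold = np.quantile(sim, quantile, axis=0)
    keep = 0
    while keep < p and obs[keep] > threshold[keep]:
        keep += 1  # count factors from the top while they beat random data
    return keep, obs, threshold
```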


Even though the four-factor solution explained less than half of the total variance, it emerged as being the most informative and is interpreted below. When looking at the four-factor solution, our goal was to determine if the sets of items proposed by the authors to belong to a given subscale actually loaded on the same factor. Contrary to what the authors proposed, three factors contained items from more than one subscale (see Table 3). Each of these factors did, however, contain a large number of items from a particular subscale. Factors I, II, and IV were defined by a large number of Social Adjustment, Personal-Emotional Adjustment, and Institutional Attachment items, respectively. Only Factor III consisted of items from a single subscale, Academic Adjustment. Not all items from the Academic Adjustment subscale were represented by this factor; the remaining Academic Adjustment items were dispersed throughout Factors II and IV.

Table 3
Exploratory Factor Analysis Pattern and (Structure) Coefficients and Communalities for the Four-Factor Model

Item No.   I           II          III         IV           Communality
1          .63 (.73)   .05 (.24)   .02 (.17)   .26 (−.49)   .65
4          .75 (.75)   .10 (.23)   .03 (.21)   .07 (.23)    .62
8          .55 (.54)   .04 (.07)   .06 (.17)   .04 (.15)    .58
9          .52 (.66)   .09 (.31)   .04 (.24)   .35 (.57)    .65
18         .62 (.67)   .01 (.16)   .05 (.11)   .18 (.39)    .55
26         .39 (.44)   .09 (.19)   .06 (.18)   .04 (.22)    .27
30         .49 (.55)   .02 (.17)   .12 (.25)   .09 (.29)    .41
37         .57 (.64)   .09 (.24)   .01 (.18)   .14 (.37)    .53
46         .61 (.62)   .08 (.19)   .06 (.20)   .06 (.19)    .62
63         .44 (.54)   .06 (.22)   .04 (.12)   .28 (.45)    .50
65         .72 (.75)   .20 (.33)   .07 (.14)   .04 (.35)    .70
2          .07 (.15)   .63 (.60)   .15 (.03)   .04 (.17)    .51
7          .14 (.26)   .70 (.70)   .11 (.11)   .03 (.28)    .66
10         .08 (.07)   .46 (.52)   .23 (.34)   .04 (.20)    .46
11         .06 (.03)   .54 (.52)   .08 (.19)   .09 (.07)    .40
12         .03 (.13)   .42 (.49)   .08 (.21)   .17 (.31)    .37
20         .10 (.21)   .70 (.70)   .09 (.12)   .01 (.25)    .56
21         .06 (.12)   .41 (.51)   .21 (.34)   .19 (.33)    .52
22         .14 (.22)   .49 (.52)   .18 (.01)   .17 (.34)    .61
28         .05 (.17)   .46 (.50)   .03 (.12)   .11 (.27)    .38
31         .04 (.18)   .41 (.48)   .07 (.09)   .23 (.36)    .34
38         .07 (.19)   .62 (.64)   .04 (.21)   .00 (.23)    .46
39         .08 (.05)   .52 (.56)   .39 (.48)   .14 (.07)    .58
40         .09 (.18)   .52 (.54)   .05 (.20)   .03 (.18)    .40
41         .03 (.12)   .48 (.53)   .10 (.24)   .10 (.26)    .53
42         .36 (.48)   .48 (.57)   .09 (.14)   .16 (.42)    .57
45         .04 (.16)   .59 (.62)   .15 (.30)   .07 (.16)    .47
48         .14 (.24)   .36 (.42)   .00 (.14)   .10 (.26)    .31
51         .34 (.43)   .63 (.66)   .18 (.07)   .06 (.35)    .65
56         .23 (.37)   .47 (.55)   .11 (.10)   .22 (.44)    .54
64         .11 (.25)   .61 (.65)   .04 (.23)   .03 (.27)    .50
3          .06 (.22)   .03 (.22)   .59 (.62)   .07 (.20)    .48
13         .12 (.25)   .12 (.28)   .49 (.55)   .01 (.17)    .72
17         .23 (.06)   .18 (.30)   .56 (.56)   .04 (.11)    .52
19         .24 (.40)   .03 (.19)   .49 (.56)   .16 (.32)    .49
25         .17 (.02)   .28 (.37)   .53 (.56)   .05 (.07)    .61
29         .15 (.01)   .32 (.41)   .49 (.54)   .02 (.12)    .58
36         .23 (.37)   .03 (.16)   .33 (.41)   .21 (.34)    .55
43         .26 (.40)   .07 (.14)   .35 (.42)   .23 (.36)    .65
50         .13 (.27)   .00 (.19)   .53 (.57)   .07 (.21)    .44
52         .08 (.09)   .37 (.49)   .50 (.58)   .02 (.16)    .58
54         .15 (.31)   .03 (.17)   .45 (.51)   .18 (.30)    .57
62         .17 (.29)   .05 (.21)   .36 (.43)   .09 (.23)    .39
66         .22 (.37)   .12 (.31)   .49 (.58)   .07 (.26)    .63
15         .23 (.47)   .10 (.19)   .15 (.29)   .64 (.72)    .73
16         .32 (.50)   .10 (.15)   .05 (.19)   .55 (.64)    .73
23         .05 (.27)   .12 (.13)   .19 (.27)   .59 (.60)    .57
32         .07 (.22)   .22 (.44)   .14 (.28)   .62 (.69)    .62
34         .13 (.34)   .13 (.32)   .06 (.10)   .57 (.65)    .70
57         .22 (.38)   .26 (.38)   .19 (.00)   .43 (.55)    .61
59         .02 (.26)   .11 (.30)   .09 (.06)   .66 (.69)    .73
60         .12 (.18)   .07 (.31)   .13 (.25)   .72 (.73)    .69
61         .11 (.18)   .09 (.33)   .12 (.24)   .70 (.71)    .69
5          .15 (.30)   .01 (.17)   .32 (.38)   .22 (.33)    .37
6          .16 (.10)   .38 (.35)   .06 (.12)   .06 (.02)    .38
14         .24 (.25)   .14 (.04)   .31 (.31)   .09 (.00)    .17
24         .24 (.31)   .07 (.17)   .14 (.22)   .07 (.20)    .31
27         .20 (.16)   .06 (.04)   .17 (.16)   .19 (.10)    .21
33         .30 (.33)   .15 (.21)   .01 (.09)   .02 (.17)    .24
35         .01 (.09)   .36 (.39)   .07 (.17)   .00 (.13)    .27
44         .01 (.15)   .04 (.12)   .29 (.33)   .26 (.30)    .26
47         .10 (.21)   .05 (.08)   .01 (.06)   .34 (.36)    .19
49         .01 (.06)   .26 (.27)   .10 (.17)   .03 (.07)    .11
55         .31 (.36)   .18 (.27)   .18 (.29)   .05 (.15)    .41
58         .10 (.10)   .19 (.34)   .30 (.38)   .26 (.34)    .36

Note: Pattern coefficients are shown with structure coefficients in parentheses. In the published table, italicized item numbers marked items with structure coefficients that did not reach the specified criteria to be designated as loading on a certain factor, and structure coefficients in bold met the specified criteria for an item to be designated as loading on a certain factor.

We attempted to interpret the substantive meaning of the items associated with each of the four factors that emerged from the EFA. Any pattern that we could discern, based on our empirical results and our reading of the items, revealed such complex interpretations that we felt it might be more useful instead to suggest areas of improvement for the instrument. There were 13 items that loaded on at least two factors, suggesting that these items might be the first to receive attention during instrument revision. Items with low structure coefficients should also be considered for revision or deletion. There were 12 items that yielded low structure coefficients (less than |.40|) consistently across the five solutions, including the four-factor solution (Items 5, 6, 14, 24, 27, 33, 35, 44, 47, 49, 55, and 58). If the four-factor model is the desired structure, perhaps these 25 items should be the initial focus for those wanting to pursue instrument revisions. It should be noted, however, that results from the current EFA will likely change with any revision to the instrument. Therefore, data obtained using a revised instrument should be submitted to its own set of analyses to examine the structure of the revised version.

Discussion
The primary purpose of this study was to use confirmatory factor analytic techniques to explore the fit of the four-factor model proposed by the authors of the SACQ. Using a large sample of college sophomores, we did not find evidence supporting the fit of the four-factor model. Although alternative models were proposed, these models were more parsimonious and, as such, would not exceed the already inadequate fit of the four-factor model. We examined the standardized residuals of the four-factor solution in an attempt to identify misfit, with disappointing results: the large quantity of standardized residuals greater than |3| hindered our ability to detect sources of misfit easily.

Four separate one-factor models, one for each SACQ subscale, were then fit to the data in an attempt to identify problematic subscales or items. Only moderate support was found for the fit of the Personal-Emotional subscale. However, after inspection of the standardized residuals, we concluded that the one-factor model did not reproduce the relationships among the items on this subscale satisfactorily.

We then used principal axis FA in an attempt to reveal other plausible models that might explain the relationships among the items. Although a four-factor model was discussed, it explained less than half of the total variance in the items, indicating that more than half of the total variance among the items was unexplained by the factors. This unexplained variance is due either to random measurement error or, more likely, to systematic error arising from other unmeasured factors. Inspection of the structure coefficients of the four-factor solution led us to conclude that (a) several items should be omitted or revised because, across solutions, they consistently did not load on any factor or loaded on more than one factor and (b) the authors' assignment of items to subscales should be reconsidered. Specifically, each of the four factors consisted of a large proportion of items from a given subscale, with Factors I, II, III, and IV largely represented by Social Adjustment, Personal-Emotional Adjustment, Academic Adjustment, and Institutional Attachment items, respectively. Contrary to what the SACQ authors proposed, items currently shared between the Institutional Attachment and Social Adjustment subscales did not have split loadings between Factor I and Factor IV but instead loaded only on one of those factors or on the Personal-Emotional Adjustment factor. Whereas Factor III consisted only of Academic Adjustment items, other Academic Adjustment items were found on Factors II and IV.


In sum, our attempts to identify a plausible factor structure for the SACQ items were thorough but unsuccessful. Although we were not able to provide a useful model to explain the intercorrelations among the items, we were able to reject the four-factor model proposed by the authors for 65 of the SACQ items. We also were able to reject separate one-factor models for each of the subscales. We do not advocate the four-factor structure found using the EFA but instead provide the results so that readers have some sense of how the items relate to each other and to provide information for those wanting to revise the instrument.

After consideration of the literature on college student adjustment, we concluded that revision of the instrument is necessary. It was not entirely surprising that the four-factor and one-factor models, those models advocated most strongly by the instrument's creators, did not fit the data, given the lack of theory that was used in instrument development. Little information is provided about how the items were developed in 1980 or about the revisions in 1985. Assuming that theory did guide instrument development, the most current version of the instrument was developed 20 years ago and may not reflect the current understanding of a student's adjustment to college. Possible directions for future researchers include further examination of the college adjustment literature to determine if Baker and Siryk's conceptualization is plausible. If Baker and Siryk's conceptualization still holds in today's adjustment literature, another direction would be to use backward translation procedures on the current set of items. This would require experts in the field of college adjustment to examine the items and assign them to the subscale they feel is appropriate.

It is prudent to mention some of the limitations of the current study. First, Baker and Siryk (1999) recommend use of the instrument with first-year students. The sample presented here consisted mostly of second-year students, or students with between 45 and 70 credit hours. This limitation may be of little consequence, as Baker and Siryk pointed out that the instrument was used in a number of studies with non-first-year students and showed similar results. Nevertheless, future research could replicate this study with a sample of first-year students. In addition, the sample used here was extremely homogeneous in terms of race and retention. A more heterogeneous sample would likely produce more generalizable results, and replication of this study at an institution where the retention rate is not so high and where levels of adjustment might be lower would further add to the evidence concerning the construct validity of the scores obtained from the SACQ.

Second, there were issues with nonnormality, particularly multivariate nonnormality. Although univariate examination of skew and kurtosis indicated no problems with normality, examination of Mardia's normalized coefficient indicated that the multivariate normality assumption was most likely violated. These issues could not be addressed using corrections based on the asymptotic covariance matrix (i.e., the Satorra-Bentler corrections). Although parameter estimates are not affected by this problem, standard errors and significance tests are likely to be biased. The violation of the multivariate normality assumption should be kept in mind when interpreting the fit indices found in the present study, yet the large number of standardized residuals greater than |3| for the four-factor model indicates that the model would not fit the data even if the Satorra-Bentler corrections had been utilized.

We applied cutoffs in determining model fit, but given the current debate regarding how fit indices should be used, readers are encouraged to examine the fit indices we have provided in Table 2 along with the information regarding (a) the standardized residuals, (b) the fit of the one-factor models, and (c) the results of the EFA to decide whether the model showed adequate fit. If the fit indices are interpreted less rigidly, or if one takes into consideration the effect of nonnormality on the fit indices, it could be concluded that the four-factor model marginally fits the data. However, the large number of standardized residuals and the additional FAs pursued with the data encourage revision of the instrument, not interpretation of the originally proposed four-factor model. Future research pursued in order to use the Satorra-Bentler corrections would require a much larger sample size (≥ 2,500, to compensate for the large number of observed variables) in order to produce the asymptotic covariance matrix used to apply the corrections.

As mentioned earlier, it is important for those students facing difficulty with the transition from high school to college to make use of whatever resources they have available. But, as Baker and Nisenbaum (1979) found, these students most likely need to be targeted for services, as they will not seek them out on their own. Thus, an instrument for diagnosing maladjustment or identifying students in need of services is a worthy goal. At this point, however, we would not recommend use of the SACQ as such a tool. The results of our study do not lend support to the internal validity of the SACQ full-scale and subscale scores. Our recommendations would be, first, to pursue further theoretical development of the construct and, second, either to revisit the structure of this instrument or to create a new instrument to measure adjustment to college.

References
Baker, R., McNeil, O., & Siryk, B. (1985). Expectation and reality in freshman adjustment to college. Journal of Counseling Psychology, 32, 94-103.
Baker, R., & Nisenbaum, S. (1979). Lessons from an attempt to facilitate freshman transition into college. Journal of the American College Health Association, 28, 79-81.
Baker, R., & Siryk, B. (1980). Alienation and freshman transition into college. Journal of College Student Personnel, 21, 437-442.
Baker, R., & Siryk, B. (1984). Measuring adjustment to college. Journal of Counseling Psychology, 31, 179-189.
Baker, R., & Siryk, B. (1986). Exploratory intervention with a scale measuring adjustment to college. Journal of Counseling Psychology, 33, 31-38.
Baker, R., & Siryk, B. (1999). SACQ: Student Adaptation to College Questionnaire manual. Los Angeles: Western Psychological Services.
Benson, J. (1998). Developing a strong program of construct validation. Educational Measurement: Issues and Practice, 17, 10-17, 22-23.


Benson, J., & Nasser, F. (1998). On the use of factor analysis as a research tool. Journal of Vocational Education Research, 23, 13-33.
Bentler, P. M., & Wu, E. J. C. (1993). EQS/Windows user's guide: Version 4. Los Angeles: BMDP Statistical Software.
Byrne, B. M. (1998). Structural equation modeling with LISREL, PRELIS, and SIMPLIS: Basic concepts, applications, and programming. Mahwah, NJ: Lawrence Erlbaum.
Cheung, G. W., & Rensvold, R. B. (2002). Evaluating goodness-of-fit indexes for testing measurement invariance. Structural Equation Modeling, 9, 233-255.
Conti, R. (2000). College goals: Do self-determined and carefully considered goals predict intrinsic motivation, academic performance, and adjustment during the first semester? Social Psychology of Education, 4, 189-211.
Fan, X., & Thompson, B. (2001). Confidence intervals about score reliability coefficients, please: An EPM guidelines editorial. Educational and Psychological Measurement, 61, 517-531.
Gorsuch, R. L. (1983). Factor analysis (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.
Gorsuch, R. L. (1990). Common factor analysis versus component analysis: Some well and little known facts. Multivariate Behavioral Research, 25, 33-39.
Henson, R. K., & Roberts, J. K. (2006). Use of exploratory factor analysis in published research: Common errors and some comment on improved practice. Educational and Psychological Measurement, 66, 393-416.
Hertel, J. (2002). College student generational status: Similarities, differences, and factors in college adjustment. Psychological Record, 52, 3-18.
Hook, R. J. (2004). Students' anti-intellectual attitudes and adjustment to college. Psychological Reports, 94, 909-914.
Hu, L., & Bentler, P. M. (1995). Evaluating model fit. In R. H. Hoyle (Ed.), Structural equation modeling: Concepts, issues, and applications (pp. 76-99). Thousand Oaks, CA: Sage.
Hu, L., & Bentler, P. M. (1998). Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification. Psychological Methods, 3, 424-453.
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1-55.
Jöreskog, K., & Sörbom, D. (2003). LISREL (Version 8.54) [Computer software]. Chicago: Scientific Software International.
Kelloway, E. K. (1995). Structural equation modeling in perspective. Journal of Organizational Behavior, 16, 215-224.
Kline, R. B. (1998). Principles and practice of structural equation modeling. New York: Guilford.
Krotseng, M. V. (1992). Predicting persistence from the Student Adaptation to College Questionnaire: Early warning or siren song? Research in Higher Education, 33, 99-111.
Marsh, H. W., Hau, K.-T., & Grayson, D. A. (2005). Goodness of fit in structural equation modeling. In A. Maydeu-Olivares & J. McArdle (Eds.), Contemporary psychometrics: A festschrift for Roderick P. McDonald (pp. 275-340). Mahwah, NJ: Lawrence Erlbaum.
Marsh, H. W., Hau, K.-T., & Wen, Z. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler's (1999) findings. Structural Equation Modeling, 11, 320-341.
Meehan, D.-C. M., & Negy, C. (2003). Undergraduate students' adaptation to college: Does being married make a difference? Journal of College Student Development, 44, 670-690.
Mooney, S. P., Sherman, M. F., & LoPresto, C. T. (1991). Academic locus of control, self-esteem, and perceived distance from home as predictors of college adjustment. Journal of Counseling and Development, 69, 445-448.
National Center for Education Statistics. (2003). The condition of education: Section 1, Participation in education. Retrieved January 20, 2004, from http://nces.ed.gov/pubs2003/2003067_1.pdf


Olsson, U. H., Foss, T., Troye, S. V., & Howell, R. D. (2000). The performance of maximum likelihood, generalized least squares, and weighted least squares estimation in structural equation modeling under conditions of misspecification and nonnormality. Structural Equation Modeling, 7, 557-595.
Olsson, U. H., Troye, S. V., & Howell, R. D. (1999). Theoretic fit and empirical fit: The performance of maximum likelihood versus generalized least squares estimation in structural equation models. Multivariate Behavioral Research, 34, 31-58.
Schwitzer, A. M., Robbins, S. B., & McGovern, T. V. (1993). Influences of goal instability and social support on college adjustment. Journal of College Student Development, 34, 21-25.
Spady, W. (1971). Dropouts from higher education: Toward an empirical model. Interchange, 2, 38-62.
SPSS. (2003). Statistical package for the social sciences (Version 12.0) [Computer software]. Chicago: Author.
Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics (4th ed.). Boston: Allyn & Bacon.
Terenzini, P., & Pascarella, E. (1977). Voluntary freshman attrition and patterns of social and academic integration in a university: A test of a conceptual model. Research in Higher Education, 6, 25-43.
Thompson, B., & Daniel, L. G. (1996). Factor analytic evidence for the construct validity of scores: A historical overview and some guidelines. Educational and Psychological Measurement, 56, 197-208.
Tinto, V. (1996). Reconstructing the first year of college. Planning for Higher Education, 25, 1-6.
West, S. G., Finch, J. F., & Curran, P. J. (1995). Structural equation models with nonnormal variables. In R. H. Hoyle (Ed.), Structural equation modeling: Concepts, issues, and applications (pp. 57-75). Thousand Oaks, CA: Sage.
