
Robert T. Blackburn and Mary Jo Clark, "An Assessment of Faculty Performance: Some Correlates Between Administrator, Colleague, Student and Self-Ratings," Sociology of Education, Vol. 48, No. 2 (Spring 1975), pp. 242-256.

An Assessment of Faculty Performance: Some Correlates Between Administrator, Colleague, Student and Self-Ratings
Robert T. Blackburn, Center for the Study of Higher Education, The University of Michigan
Mary Jo Clark, Educational Testing Service, Princeton, New Jersey
This paper addresses the uncertainties surrounding the evaluation of faculty work performance and reviews the conflicting studies of the two principal professorial roles, teaching and research. It then presents the findings from a study of the faculty at a typical liberal arts college. The 45 full-time faculty members (85 percent of the population) were rated by administrators, faculty colleagues, students, and themselves on two performance measures: (1) teaching effectiveness and (2) overall contribution to the college. While reasonably high agreement exists between faculty peers and students in their assessment of professors, and to a lesser (but still statistically significant) extent between administrators and these two role sets, the near-zero correlations between the professor and each role set are cause for genuine concern. Related research supports a psychological explanation of self-misperceptions.

Faculty members complain more about the manner in which their work is judged and rewarded than about any other dimension of their professorial role (Guthrie, 1949; Theophilus, 1967). They worry about tenure, promotion, and merit increases. Often faculty members believe that deserved honors come too late, if at all. The professor's anguish is not surprising. Most academics sincerely believe they are performing at levels higher than those for which they receive institutional and personal recognition. Furthermore, professors can document their frustrations with respect to the assessment of their worth: ignorance on the part of the evaluators (Gustad, 1967: 270). Those who pass judgment seldom witness a performance.

THE PROBLEM AND ITS BACKGROUND


Teaching

While a popular theory says the professor assigns a low value to teaching, studies reveal that just the opposite is the case. Faculty members give their teaching highest priority, as does their college (Cartter, 1967). In addition, teaching is their greatest source of pleasure (Gaff and Wilson, 1971: 195). But how is a professor's pedagogy to be judged when there still is no acceptable definition of "good" teaching (McKeachie, 1967; 1970; Biddle and Ellena, 1964; Rothwell, n.d.)? Furthermore, neither deans, department chairpersons, nor the professor's peers ever see him teach. Even if student evaluation forms are used and are available to administrators, deans will not publicly claim that those who are in the role of apprentices are qualified to judge those who have credentials as masters (Kent, 1967: 316). Some faculty sincerely and vociferously protest student evaluation (Bryant, 1967; Hildebrand, 1972). Objective data are rarely on hand when a judgment on teaching performance is rendered. The little available evidence linking student-judged teaching effectiveness and student learning has been positive, but not strong (McKeachie, 1969). However, M. Rodin and B. Rodin (1972) have reported a high inverse relationship (r = -.75). More recently, Gessner (1973) obtained correlations reaching .77 between student-judged teaching effectiveness and performance on a national examination. The conflicting outcomes increase faculty uncertainties and heighten concerns about the validity and reliability of the evidence being used when they are assessed.

Research

Scholarly output is supposedly a more objective dimension of professorial value. At least publications can be assessed, but the extent to which careful critiques are actually performed remains debatable. Besides, inferring teaching effectiveness from research productivity is difficult. First of all, expert opinion differs widely. On the one side, Jencks and Riesman (1968: 532) believe research is essential for vital teaching, the view many faculty support. On the other hand, Fischer (1968: 10) claims the opposite, an opinion students vent when they say a professor's teaching is poor because the time he might have spent in preparation for class has gone into unrelated scholarship.

The Relationship of Teaching to Research

The studies conducted on the relationship between research and teaching show either no relationship or at best a slight positive association. In research at the University of Washington, Purdue, Kansas State, Carnegie-Mellon, Wayne State, and a midwest liberal arts college, very low correlations were found between student-judged teaching effectiveness and research activity (Voeks, 1962; Feldhusen and McDaniels, 1967; Hoyt, 1970; Hayes, 1971; Blackburn, 1972; Harry and Goldner, 1972; Sherman, 1973). Other contemporary investigations report small, positive correlations at Tufts, Illinois, California-Davis, and Purdue (Bresler, 1968; Stallings and Singhal, 1970; Hildebrand and Wilson, 1970; McDaniels and Feldhusen, 1970). Hence, over the entire spectrum of high to low research-oriented college and university faculty, the relationship between teaching and research is at best slight and most often not significant. Predicting a faculty member's performance in either role from the other is mainly a matter of chance.1

1 Service (committee work, advising, community relations, and expert professional assistance), the last of the troika of faculty roles, remains even more subjective. The value faculty place on it is unknown. Equally uncertain is the way those who reward faculty assess it. The ambiguity increases faculty apprehension regarding those who hold power to sanction. The service role is not measured in this research.

Agreements Between Evaluators

Yet, administrators and faculty members persist in assuming a positive relationship between teaching and research as they make judgments about promotions, tenure, and salary increases (Maslow and Zimmerman, 1956; Luthans, 1967). Even the professor assumes a single conception of academic worth which specifies that if a colleague is doing research, his classes are ipso facto superior (Hammond, Meyer, and Miller, 1969). At the same time, the faculty will differ with administrators on what criteria are critical. For example, Hussain and Leestamper (1968) discovered that four of the ten criteria (out of sixty) that students and faculty agreed were most important for good teaching were not in the set of criteria administrators used on their merit rating form for assessing faculty performance, whereas four of the ten criteria students and faculty believed least important were considered for merit ratings. Attitudes toward students are an example of the first; research and publication fall in the second. In a study by Crawford and Bradshaw (1968), each of ten subgroups (assistant professors and instructors, associate and full professors, department chairmen, deans, and six student groups divided by sex and three levels of ability) differed in a statistically significant way from all other subgroups in the rating given to the most important characteristics of effective university teaching. The thirteen teacher characteristics the groups rated ranged from knowledge of the subject and organization of lectures to sense of humor and punctuality. Birnbaum (1966) found inconsistencies in faculty evaluation at the community-college level.

Promotion and Merit Raises

Finally, two studies underscore the faculty's genuine concern about how they are assessed and rewarded. Luthans (1967) found that while deans, department chairmen, and other administrators believe that teaching is the most important function of the faculty (and the faculty agree), administrators confess that promotion is judged on other criteria, e.g., research. However, Luthans uncovered no relationship between research and promotion. Over half the administrators reported they sometimes approved promotions to full professor in the absence of a significant publication record. Similarly, at Kansas State, Hoyt (1970) found no significant relationships between either rate of promotion or receipt of merit raises and either teaching effectiveness or publication record. Unpublished studies in three departments at the University of Michigan likewise failed to yield a promotion pattern related to either publicly available student-judged teaching effectiveness or publication record. Time-in-rank appears to suppress any relationship. Quite likely a department finds it very difficult to promote an individual with less time-in-rank than a colleague with longer (or even equal) tenure, especially in a pool of able faculty members. The difference in performance would have to be dramatic to alter the order of promotion. Such views are apparent to the outsider but, of course, are seen differently by the individual who believes he deserves a promotion and/or merit increase irrespective of time on the job and/or how well a colleague is performing.

So, a definite basis exists for faculty members' complaints about matters of recognition for their efforts. Whether they are judged well or poorly matters, of course. Equally important, however, is their firm conviction that they are not judged properly. Improving the assessment process is therefore extremely important. Sorting out fact from folklore as well as from unfounded belief is the first step. While several studies of faculty teaching effectiveness have utilized student evaluation, and a few have used peer ratings, this research combines these two sources and introduces both administrative and self evaluations.
This study also expands the notion of faculty work performance to include dimensions other than teaching effectiveness and publications. A global rating on overall contribution to the college serves as an independent measure. The findings provide insights into faculty beliefs. The unexpected results also have serious implications for behavior within colleges and universities.

SETTING AND METHODOLOGY


The study was conducted at "Midwest" College. Its fifteen hundred students encompass a full range of interests and academic qualifications. A moderately well-trained faculty staffs the typical departments. While participation from the approximately twenty-five part-time faculty members was respectable, other factors led to restricting the analyses to the full-time faculty. Forty-five of the fifty-three faculty members (85 percent) in the latter category responded to all measures.

As Midwest grew, it experienced a separation from its founding church both in support and in control. While drawing its student body principally from its own and a contiguous state, Midwest now is attracting youth from the more heavily populated East and Northeast. Student body SAT scores are exactly at the national norm for college-bound youth. Like other liberal arts colleges of both higher and lower selectivity, the two largest groups of students are preparing for teaching and business careers. In these and other ways, Midwest is like many other American colleges and is almost average for the more than eight hundred private and church-related liberal arts colleges. And, since the principal faculty roles are teaching and contributing to the organization, Midwest is not unlike many emerging state colleges and universities except, of course, with respect to size.

Having convinced themselves that self-analysis was necessary for major change, Midwest faculty members willingly participated in a series of self-studies. Among other things, they rated their colleagues and themselves on teaching effectiveness and overall contribution to the college. Midwest's administrators also rated faculty members on both dimensions of performance. Student evaluations of teaching effectiveness were also obtained. A sample of faculty participated in a second set of ratings to provide an estimate of measurement reliability.

Specifically, each faculty member rated every other teacher in his curricular division and himself on a five-point scale of "teaching effectiveness." The faculty member was told to "consider those qualities which are important in the evaluation of the skills and practices and products of a classroom teacher regardless of rank or experience or teaching of the person being rated." In a similar way, each faculty member judged himself and each colleague on a five-point scale concerning his "overall contribution" to Midwest College. Administrators rated faculty in both categories. The rater was told to "take into account the person's total contribution, whether his own work or his stimulation of others, whether scholarly or administrative or in human relations; the person's overall usefulness in helping the college carry out its responsibilities."

Cartter's (1966) methodology was employed. An intensive analysis was conducted to establish the reliability and the validity of the technique. While expert value judgments are on the quality or effectiveness of outcomes, not on specific behaviors, the validity requirement of logical relevance was still met. Also, the validity measure of consistency of ratings across raters with different sources of information and perspectives was satisfied. Reliability was ascertained by the test-retest method as well as by multifactor one-way analysis of variance to obtain estimates of person and error variance. Values ranged from .72 to .86 but were lower on self estimates, a single measure. Also, by using role theory and other data (demographic, selected psychological traits, job satisfaction, professional attitudes, and perceived stress), other required statistical tests were satisfied.

Student evaluations of teaching effectiveness were obtained from a standard 14-item, five-point-scale questionnaire that the college systematically employs to evaluate all courses each semester. (The high correlations that exist between different scales (e.g., see Sherman) made it unnecessary to intervene in the professor's classroom to obtain an additional measure of teaching effectiveness.) Responses to the question "How would you rate your instructor in teaching effectiveness?" were averaged across all courses taught by a faculty member during one semester for an index of his teaching performance as judged by students.
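The computation behind these measures is simple averaging and correlation. The following is a minimal sketch, assuming a hypothetical data layout (ratings keyed by faculty member) and using Python purely for illustration; it is not the authors' actual procedure beyond the averaging and the Pearson coefficients themselves.

    # Minimal sketch (hypothetical data layout, not the study's actual files):
    # average the "teaching effectiveness" item within each course, then across
    # a faculty member's courses, and correlate rater groups pairwise.
    from math import sqrt
    from statistics import mean

    def student_index(item_ratings):
        """item_ratings: {faculty_id: {course_id: [1-5 responses]}} ->
        {faculty_id: semester index of student-judged teaching effectiveness}."""
        return {
            fac: mean(mean(responses) for responses in courses.values())
            for fac, courses in item_ratings.items()
        }

    def pearson_r(group_a, group_b):
        """Pearson correlation over faculty members rated by both groups
        (group_a, group_b: {faculty_id: score})."""
        common = sorted(set(group_a) & set(group_b))
        xs = [group_a[f] for f in common]
        ys = [group_b[f] for f in common]
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

Each coefficient reported in Table 1 below is of this pairwise form, computed over the faculty members rated by both sources (N = 45, or 41 for self-rated contribution).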

FINDINGS
The intercorrelations are collected in Table 1. First, the different ratings on the two performance dimensions demonstrate that each group of raters discriminated between teaching effectiveness and overall contribution to the college. Secondly, teaching effectiveness as judged by the faculty correlated significantly with similar ratings by administrators and students. (The correlation between administrator and student ratings on teaching effectiveness, although lower (.47), is also statistically significant.) However, among the correlations of self-ratings with other role sets on the same trait, only colleague ratings on overall contribution to the college demonstrate a significant relationship (.45). Self-ratings on teaching effectiveness have near-zero correlations with the ratings of each of the other three groups of raters.

TABLE 1
Rated Teaching Effectiveness and Rated Overall Contribution to the College
As Evaluated by Professors, Administrators, Self, and Students in Classes

                               Professors          Administrators          Self                Students
                               Teach.    Contrib.  Teach.(1)  Contrib.(2)  Teach.   Contrib.   Teach.(3)
                               (N=45)    (N=45)    (N=45)     (N=45)       (N=45)   (N=41)     (N=45)
Professors
  Teaching                     (.72)     .49*      .63*       .17          .28      .33*       .62*
  Overall Contribution                   (.86)     .24        .54*         .34*     .45*       .24
Administrators
  Teaching(1)                                      ( )        .43*         .10      .07        .47*
  Overall Contribution(2)                                     ( )          .13      .15       -.04
Self
  Teaching                                                                 ( )      .72*       .19
  Overall Contribution                                                              ( )       -.07
Students
  Teaching(3)                                                                                  ( )

(1) The administrator rating on teaching effectiveness is the mean of ratings by the appropriate division chairman and the academic dean.
(2) The administrator rating on overall contribution to the college is the mean of ratings by the president, academic dean, and assistant dean.
(3) The student rating is a mean of course evaluation responses to the question: "How would you rate your instructor in teaching effectiveness?"
* Correlations are significantly different from zero at or above the 95 percent level of confidence.
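The significance flags can be checked against the sample size. Assuming the usual t-test for a Pearson correlation (the paper does not state which test was used), the smallest coefficient significant at the 95 percent level of confidence with N = 45 is roughly .29, which matches the pattern of asterisked entries above (.33 and larger are flagged; .28 and smaller are not). A short sketch:

    # Sketch of the significance threshold (the t-test for r is an assumption):
    # smallest |r| significantly different from zero at the 95 percent level.
    from math import sqrt
    from scipy import stats  # SciPy assumed available for the t quantile

    def critical_r(n, alpha=0.05):
        """Solve t = r*sqrt(n-2)/sqrt(1-r^2) at the two-tailed critical t."""
        df = n - 2
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        return t_crit / sqrt(df + t_crit ** 2)

    print(round(critical_r(45), 3))  # about 0.294
    print(round(critical_r(41), 3))  # about 0.308 for the N = 41 column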

Professors appear to view their own teaching effectiveness and overall contribution to the college in nearly interchangeable ways (r = .72). Their performance in one role is viewed as being much the same as their performance in the other. At the same time, they make clear distinctions between the two roles when rating their peers. Self-perceptions of teaching effectiveness and judgments on this same dimension by colleagues, students, and administrators (.28, .19, and .10, respectively) have low commonality. However, fair agreement exists among the three independent groups of raters (.63, .62, and .47). Self and colleague ratings agree on overall contribution to the college (.45). Between self and administrator ratings on this dimension, however, the key relationship from the faculty member's point of view, the correlation is very low, .15.

The data of Table 2 support an extension of these findings beyond the uniqueness of Midwest. Intercorrelations on teaching effectiveness from Maslow and Zimmerman (1956), Choy (1969: 59), and Centra (1972) are placed alongside those from Table 1. Maslow and Zimmerman have one overlap with our study, students and professors, and their correlation of .69 compares with our .62. Their population was 86 faculty members at Brooklyn College. Centra similarly has a single intersection, students and self. His .21 coefficient compares with our .19. Centra's figures are for 343 faculty members in five institutions: two state colleges (one primarily black), a highly selective liberal arts college, a multipurpose university, and a community college. Choy has a full set of comparisons in his state university. Thirty-two faculty members in psychology serve as the sample; the administrator rating is by the department head. The very close agreement among the coefficients at both the high and the low values lends considerable weight to our findings, not only with respect to teaching but also to the other principal faculty role. The studies are not replications of a single design and hence are not pure corroborations. They do, however, provide confidence in drawing conclusions, in probing for reasons, and in drawing implications.

CONCLUSIONS
The following conclusions can be drawn from the data:

1. Considerable variation apparently exists in the factors that enter into performance judgments as they are made by colleagues, students, administrators, and self.
2. Self-ratings on the same performance dimension show only slight agreement with ratings made by faculty colleagues and almost no relationship with judgments made by administrators.
3. Colleague judgments about teaching performance are positively related to their judgments about the overall contribution of a faculty member.
4. Professor and student judgments about teaching performance are in substantial agreement.

Additionally, the data support the use of ratings by both professors and students in the evaluation of faculty performance. At the same time, they demonstrate a reason why an individual faculty member often claims that his work is not properly appreciated. Furthermore, this feeling is most likely to arise when decisions about his future are being made primarily by administrators, because his perception of his performance shows the least relationship to their judgments. The professor lives with an erroneous perception of how others perceive and assess him.

DISCUSSION AND IMPLICATIONS


The reasons for the low correlations between self and other role sets are not immediately obvious. The sample size precludes subanalyses other than by visual inspection. The idea that older faculty members are better judges of self is not supported by the data. Neither is the notion that tenured, and hence more secure, faculty judge their performance more accurately than others do. Scatter diagrams indicate neither relationship. These outcomes are consistent with Blackburn's (1972) reported zero correlations between teaching effectiveness and both age and rank.

Faculty members give themselves higher ratings on teaching effectiveness (on the average) than others accord them and than they give their peers. Mann (1973), Clark and Blackburn (1973), and Centra (1972) also find this phenomenon of self over-rating. The shift to the high end of the scale reduces the range and hence slightly suppresses the correlations. However, the inflated ratings are essentially random. That is, there is no systematic displacement of scores by individuals. In fact, some rated high by others greatly underrate themselves.
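The range-restriction point can be made concrete with a small simulation. The sketch below is illustrative only; the sample size, rating model, and bias value are assumptions rather than quantities from the study. When self-ratings on a bounded five-point scale are pushed toward the ceiling, their variance shrinks and the observed correlation with other raters is attenuated even when the underlying agreement is unchanged.

    # Minimal simulation sketch (assumed parameters, not data from the study):
    # inflated self-ratings on a bounded 1-5 scale pile up at the ceiling,
    # restricting their range and attenuating the observed correlation.
    import random
    from math import sqrt
    from statistics import mean

    def pearson_r(xs, ys):
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        return cov / (sqrt(sum((x - mx) ** 2 for x in xs)) *
                      sqrt(sum((y - my) ** 2 for y in ys)))

    def rate(true_score, bias=0.0):
        """Latent quality -> noisy 1-5 rating, with an optional upward bias."""
        raw = true_score + bias + random.gauss(0, 0.5)
        return max(1, min(5, round(raw)))

    random.seed(1)
    true_quality = [random.uniform(2.0, 4.5) for _ in range(45)]  # 45 faculty
    colleague = [rate(t) for t in true_quality]                   # unbiased raters
    self_honest = [rate(t) for t in true_quality]                 # unbiased self
    self_inflated = [rate(t, bias=1.5) for t in true_quality]     # self over-rating

    print(pearson_r(colleague, self_honest))    # agreement without inflation
    print(pearson_r(colleague, self_inflated))  # typically lower: ceiling effect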
TABLE 2
Intercorrelations on Teaching Effectiveness Between Professors, Administrators, Students, and Self From Four Studies(1)

                  Professors               Administrators        Self
Administrators    .63* (.69)*
Self              .28  (.17)               .10  (-.02)
Students          .62* (.78)* {.69}        .47* (.68)*           .19 (.07) [.21]

(1) Unenclosed coefficients are from this study (Table 1). Those in parentheses are from Choy (1969: 59); the coefficient in braces is from Maslow and Zimmerman (1956); and the one in brackets comes from Centra (1972).
* P < .05.

Sorey (1967) found that superior-rated teachers were no more accurate judges of themselves than those judged below average. Between student and self-ratings on the Guilford-Zimmerman Temperament Survey, Sorey (1967: 55) reports differences at the .01 level on seven of the ten factors. Isaacson, McKeachie, and Milholland (1963) found that the only peer rating that held statistically significant over four trials was the colleague rating on an attribute designated "general culture" (artistically sensitive vs. insensitive; intellectual vs. unreflective, narrow; polished, refined vs. crude, boorish; imaginative vs. simple, direct). Peer and self correlations with student-judged teaching effectiveness were never high and were irregular from trial to trial. Again, the individual professor is not a good judge of his own role performance.

Another insight into the low self correlation coefficients can be gained by juxtaposing an earlier study with research currently in progress. Using data from the 1940's, Maslow and Zimmerman (1956) asked: "Is creativeness (research, activity in the field, and writing) in a college teacher positively or negatively correlated with good teaching?" Faculty members at a metropolitan university were asked to rate colleagues in their own department on carefully defined dimensions labeled "teachers," "personalities," and "creativeness in the field." Students were instructed to rate their professors first "as a teacher" and then "as a personality," defining these terms in their own way. Intercorrelations and estimated reliabilities are presented in Table 3.

The pattern of the intercorrelations in this study shows good agreement between colleague ratings "as a teacher" and "for creativeness" and good agreement between student ratings "as a teacher" and "as a personality." Faculty colleagues and students also agree substantially in the evaluation of teaching (r = .69) despite different instructions in their rating tasks. But they seem to have quite different conceptions of personality. This validity coefficient is only .40, lower than the monorater correlations across traits and lower than the correlation of student-rated personality with colleague-rated teaching or creativeness.

TABLE 3
Maslow and Zimmerman Intercorrelations(1)

                                        (Colleague) Professor Ratings         Student Ratings
                                        Teaching  Personality  Creativity     Teaching  Personality
(Colleague) Professor on Teaching       (.86)     .68          .77            .69       .69
(Colleague) Professor on Personality              (.77)        .61            .29       .40
(Colleague) Professor on Creativity                            (.86)          .51(2)    .41
Student on Teaching                                                            (.96)     .76
Student on Personality                                                                   (.94)

(1) Maslow and Zimmerman do not report confidence levels. However, with N = 86, all have p < .06, with only the .29 near the limit.
(2) The .51 correlation between teaching effectiveness and peer-judged creativity is much higher than the correlations between teaching and research reported at the outset of this article. But perceived creativity in another and that person's output are not equivalent (see Sherman, 1973).

Sherman (1973) obtained correlations of .77 and .86 between two independent student measures of teaching effectiveness and four factor-analyzed scales from an Osgood Semantic Differential completed by students, values nearly identical to the .76 in Table 3. What is being correlated here are responses to statements like "are lectures well organized" and "are objectives clear" with "bold-timid," "gracious-crude," "aggressive-unaggressive," "sensitive-insensitive," and so on.2 To the individual professor, the former (clear assignments) must seem manageable, even changeable. Much less so are the personal traits. Also, Osgood characteristics are more likely to be assessed differently by the self. If student-judged teaching contains a high personality loading, then the low correlations between self and others are less surprising.

Maslow (1962) aids us here, too. Within psychological theory with regard to self, many explanations can be advanced. For example, the very high value faculty members place on their teaching work role is sufficient by itself to release normal defense mechanisms when they are asked to assess themselves (Maslow, 1962: 42-64). Clearly more research is needed in this critical area where faculty careers are involved.

Lewis (1964) found no personality interaction among faculty members and students with regard to effectiveness of teaching in departments of agriculture, English, and home economics at Iowa State. He also concluded that personality variables (from the Guilford-Zimmerman Temperament Survey) do not differentiate more effective from less effective teachers. However, his faculty members are self-rated and hence may confound the puzzle under investigation.

Meanwhile, practical actions can be taken in the interim. Social scientists deal with such discrepancies in perception in a variety of well-known ways: information exchange, T-groups, and performance appraisals. Such procedures are our recommendations, too, but we advise caution. Similar efforts in the business world have had minimal success. Other procedures would have students, faculty, and administrators meet together and inspect the data. Apparently communication lines have not been established to detail what each of the subgroups expects of a professor, to say nothing of whether or not any human being can satisfy people who have very diverse, even conflicting, demands. Conversations regarding expectations are the very least that must be done.

Too often colleges and universities assess the faculty member just before an important decision with respect to his and the institution's future must be made. The Personnel Committee meets in December before AAUP deadlines; this can be its first formal meeting on the individual. A negative judgment at that time leaves no alternatives. No corrections can be made; no learning takes place. Not only is such a procedure psychologically harmful to all parties; it also is contrary to the aims of the institution as a human organization. In those few colleges and universities where assessment of faculty is a regular process (as opposed to a final judgment), evaluation devices are not tests and/or final examinations on which all hinges. When improvement of teaching is the aim, then the institution is an educational rather than a punitive one. Faculty members visit other professors' classes and have their own observed. Coffee afterwards allows immediate reaction. Student opinion is sought along the way and openly shared, not gathered in the absence of the professor in sealed envelopes unavailable until grades are in at the close of the term. Certainly such processes can mitigate the uncertainties and frustrations haunting many faculty members when their efforts are assessed. Additional procedures are needed in these times of stress.

References

Biddle, B. J., and W. J. Ellena (eds.). 1964. Contemporary Research on Teacher Effectiveness. New York: Holt, Rinehart and Winston.
Birnbaum, Robert. 1966. "Background and evaluation of faculty in New York: a survey of twenty-seven colleges indicates some inconsistencies in faculty evaluation." Junior College Journal 37 (November): 34-37.
Blackburn, Robert T. 1972. Tenure: Aspects of Job Security on the Changing Campus. Research Monograph No. 19. Atlanta: Southern Regional Education Board (July).
Bresler, Jack B. 1968. "Teaching effectiveness and government awards." Science 160 (April 12): 164-167.
Bryant, Paul T. 1967. "By their fruits ye shall know them." Journal of Higher Education 38 (June): 326-330.

Cartter, Allan M. 1966. An Assessment of Quality in Graduate Education. Washington, D.C.: American Council on Education.
Cartter, Allan M. 1967. "College teachers: quantity and quality." Pp. 114-145 in Calvin Lee (ed.), Improving College Teaching. Washington, D.C.: American Council on Education.
Centra, John A. 1972. "Self-Ratings of College Teachers: A Comparison with Student Ratings." Research Bulletin RB-72-73. Princeton, New Jersey: Educational Testing Service.
Choy, Chunghoon. 1969. The Relationship of College Teacher Effectiveness to Conceptual Systems Orientation and Perceptual Orientation. Unpublished Ed.D. dissertation, Colorado State College.
Clark, Mary Jo, and Robert T. Blackburn. 1973. "Faculty performance under stress." Pp. 233-252 in Alan L. Sockloff (ed.), Faculty Effectiveness as Evaluated by Students. Philadelphia: Temple University.
Crawford, P. L., and H. L. Bradshaw. 1968. "Perception of characteristics of effective university teachers: a scaling analysis." Educational and Psychological Measurement 28 (Winter): 1079-1085.
Feldhusen, John F., and Ernest D. McDaniels. 1967. "Study on Teaching Effectiveness." Lafayette, Indiana: Purdue Educational Research Center (mimeographed).
Fischer, John. 1968. "The case for the rebellious students and their counterrevolution." Harper's 237 (August): 9-12.
Gaff, Jerry G., and Robert C. Wilson. 1971. "Faculty cultures and interdisciplinary studies." Journal of Higher Education 42 (March): 186-201.
Gessner, Peter K. 1973. "Evaluation of instruction." Science 180 (May): 566-569.
Gustad, John W. 1967. "Evaluation of teaching performance: issues and possibilities." Pp. 265-281 in Calvin Lee (ed.), Improving College Teaching. Washington, D.C.: American Council on Education.
Guthrie, E. R. 1949. "The evaluation of teaching." Educational Record 30 (April): 109-115.
Hammond, Phillip E., John Meyer, and David Miller. 1969. "Teaching versus research: sources of misperceptions." Journal of Higher Education 40 (December): 682-690.
Harry, Joseph, and Norman S. Goldner. 1972. "Null relationship between teaching and research." Sociology of Education 45 (Winter): 47-60.
Hayes, John R. 1971. "Research, teaching, and faculty fate." Science 172 (April 16): 227-230.
Hildebrand, Milton. 1972. "How to recommend promotion for a mediocre teacher without actually lying." Journal of Higher Education 43 (January): 44-62.
Hildebrand, Milton, and Robert C. Wilson. 1970. "Effective University Teaching and Its Evaluation." Berkeley: Center for Research and Development in Higher Education, University of California.
Hoyt, Donald P. 1970. "Institutional effectiveness: III. Inter-relationships with publication record and monetary reward." Research Report 10 (May). Manhattan: Office of Educational Research, Kansas State University.
Hussain, K. R., and Robert Leestamper. 1968. "Survey of Criteria of Teaching Effectiveness at New Mexico State University." Mimeographed report (June). Las Cruces: New Mexico State University.
Isaacson, Robert L., Wilbert J. McKeachie, and John Milholland. 1963. "Correlation of teacher personality variables and student ratings." Journal of Educational Psychology 64 (April): 110-117.
Jencks, Christopher, and David Riesman. 1968. The Academic Revolution. Garden City, New York: Doubleday.
Kent, Laura. 1967. "Student evaluation of teaching." Pp. 312-343 in Calvin Lee (ed.), Improving College Teaching. Washington, D.C.: American Council on Education.
Lewis, Edwin C. 1964. "An investigation of student-teacher interaction as a determiner of effective teaching." Journal of Educational Research 57 (March): 360-363.

Luthans, Fred. 1967. The Faculty Promotion Process and Analysis of the Management of Large State Universities. Iowa City, Iowa: Bureau of Business and Economics Research, College of Business Administration, The University of Iowa.
McDaniels, Ernest D., and John F. Feldhusen. 1970. "Relationships between faculty ratings and indexes of service and scholarship." Proceedings, 78th Annual Convention, American Psychological Association: 619-620.
McKeachie, Wilbert J. 1967. "Research in teaching: the gap between theory and practice." Pp. 211-239 in Calvin Lee (ed.), Improving College Teaching. Washington, D.C.: American Council on Education.
McKeachie, Wilbert J. 1969. "Student ratings of faculty." AAUP Bulletin 55 (December): 439-444.
McKeachie, Wilbert J. 1970. "Research on college teaching: a review." ERIC Clearinghouse on Higher Education, Report No. 6 (November). Washington, D.C.: The George Washington University.
Maslow, A. H., and W. Zimmerman. 1956. "College teaching ability, activity, and personality." Journal of Educational Psychology 47 (March): 185-189.
Maslow, Abraham H. 1962. Toward a Psychology of Being. Princeton: Van Nostrand.
Rodin, Miriam, and Burton Rodin. 1972. "Student evaluations of teachers." Science 177 (September): 1164-1166.
Rothwell, C. Easton. n.d. The Importance of Teaching: A Memorandum to the New College Teacher. Report to the Committee on Undergraduate Teaching. New Haven, Connecticut: The Hazen Foundation.
Sherman, Barbara R. 1973. Effectiveness of College Teachers as a Function of Faculty Characteristics and Student Perception. Unpublished Ph.D. dissertation, The University of Michigan.
Sorey, K. E. 1967. "A study of the distinguishing personality characteristics of college faculty who are superior in regard to the teaching function." Unpublished Ed.D. dissertation, Oklahoma State University.
Stallings, William M., and Sushila Singhal. 1970. "Some observations on the relationships between research productivity and student evaluations of courses and teaching." The American Sociologist 5 (May): 141-142.
Theophilus, Donald R., Jr. 1967. Professional Attitudes Toward Their Work Environment at the University of Michigan: A Study of Selected Incentives. Unpublished Ph.D. dissertation, The University of Michigan.
Voeks, Virginia W. 1962. "Publications and teaching effectiveness." Journal of Higher Education 33 (April): 212-218.
