
Source: MIS Quarterly, Vol. 9, No. 3 (September 1985), pp. 243-253. Published by the Management Information Systems Research Center, University of Minnesota. Stable URL: http://www.jstor.org/stable/248951


Alternative Measures of System Effectiveness: Associations and Implications


By: Ananth Srinivasan
    Assistant Professor
    Operations and Systems Management
    Indiana University
    Bloomington, Indiana

Abstract

This article reports results from a study that examined the implementation of computerized modeling systems in 29 organizations. The focus is on the use of various MIS effectiveness measures that are reported in MIS research. Specifically, we examine the relationship between user perceived effectiveness measures (user satisfaction) and behavioral measures of system effectiveness (system use). While much of the existing MIS research implies that the two types of measures are positively associated with each other, the results from this study indicate otherwise. By using a perceived effectiveness instrument that is strongly grounded in a widely accepted theoretical model, the results provide important insights into the nature of this relationship. The importance of interpreting perceived and behavioral measures of system effectiveness separately is demonstrated by examining the effect of system sophistication on MIS effectiveness.

Keywords: MIS effectiveness measurement, management of information systems

ACM Categories: K.4.3, K.6.0, K.6.4

Introduction

Measurement of the effectiveness of a management information system (MIS) is an issue that has generated much debate and consequent research interest over the years. The most recent among a series of indicators of such interest is the initiation of roundtable discussions at the annual Conference on Information Systems, where one of the proposed topics for 1985 is "Measuring and Improving Information System Effectiveness/Productivity." The spectrum of approaches that have been suggested to deal with this complex issue presents a bewildering array to a researcher whose intention is to include MIS effectiveness as a dependent variable in a study, or to a practicing manager who wants a clear indication of the quality of the MIS being used. Approaches that have been advocated include MIS usage estimation [4], user satisfaction [2], incremental performance in decision making effectiveness [16], cost-benefit analysis [15], information economics [21], utility analysis [17], the analytic hierarchy approach [22], and information attribute examination [6].

While acknowledging the importance of economic analyses of MIS value, researchers responded to the shifting emphasis from efficiency to user effectiveness by focusing either on MIS usage or on user perceived effectiveness. Much of the recent MIS literature uses one or the other as the dependent variable of interest. Briefly, the MIS usage approach uses behavioral indicators as surrogates for MIS effectiveness. Examples of such indicators are the number of reports generated, the number of changes made to a file, connect time, etc. The perceived effectiveness approach uses measures of effectiveness as perceived by users of the system. Examples of such measures include user satisfaction, perceived system quality, etc.

The literature is replete with arguments both for and against the use of these two approaches. Providing a typical argument for the system usage approach, Ein-Dor and Segev [4] state: "[Various] criteria [for success that are mentioned in the literature] are clearly mutually dependent; profitability is correlated with performance, application to major problems, and actual use. We claim that a manager will use a system intensively only if it meets some of the criteria, and that use is highly correlated with them" [4, pp. 1065-1066].

Ginzberg [8] argued against the system usage approach by stating that the link between system usage and the quality of decision making is a weak one: if one views the system as a service (instead of a product) that is designed to enable managers to perform more effectively, the extent-of-use measure can be a very misleading indicator of success. Based on these assertions, and on his approach to the issue in ensuing research [7], Ginzberg advocated the user perceived effectiveness approach. Citing situations where system usage may and may not be an appropriate measure of MIS effectiveness, Ives, Olson, and Baroudi [13] suggest that the use of both approaches (system usage and user perceived effectiveness) may be warranted in many situations.

It is apparent that both system usage and user perceived effectiveness play key roles in determining the effectiveness of an MIS. It would seem, then, that the relationship between the two would be of interest to researchers and practitioners alike in an attempt to examine correlates between effectiveness measurement approaches. From a researcher's perspective, it is important to understand relationships between several competing surrogates of the (purportedly) same phenomenon. From a practitioner's perspective, it is important to understand what exactly is being measured when a system effectiveness study is initiated in an organization.

Previous examinations of this relationship (see Zmud [29] for a review of this literature), however, suffer from the main criticisms offered by Ives, et al. [13] in their use of perceived effectiveness measures. The relationship between MIS usage and user satisfaction that has been reported in the past reflects an inadequate treatment of the perceived measures. Hence, what we do know about this relationship is, at best, superficial.

It is not the purpose of this article to examine previous research pertaining to MIS effectiveness per se. Excellent reviews may be found in Ives, et al. [13] and Zmud [29].

However, we will examine the research that has specifically concerned itself with this relationship. The extensive system implementation studies reported by Lucas [18, 19] examined the relationship between system use and some measures of user satisfaction with the system. Generally positive associations are reported. In his study of information systems in ten food processing firms, Schewe [24] reported a lack of significant association between certain user attitudes (now considered important components of user satisfaction with an MIS) and use of the system. Maish [20] reported positive associations between usage and some attitudes pertaining to user satisfaction in his study of information systems in federal agencies. Swanson [27] reported a similar association in the case of an MIS used by a manufacturer of complex electronic equipment. Robey [23] reported a positive association between system usage and user perceived worth of a system in his study of an industrial sales force and their use of an MIS. The results presented by Ginzberg [7] suggest some positive (albeit weak) association between the two outcome measures.

In all of the studies reported above, satisfaction with the system is either captured in the broader context of obtaining user attitudes about the system or through the use of a set of items thought to be relevant to user satisfaction. What was lacking in these attempts was a comprehensive understanding of what constituted user satisfaction with a system. This resulted in using a single index to represent satisfaction [23, 27], or a series of single-item measures pertaining to satisfaction [19, 24], each treated independently. Unless we have appropriate satisfaction measures (there has really never been a problem with obtaining usage measures), we will not adequately understand this relationship. Such an understanding will provide better guidance to researchers in drawing behavioral interpretations of user perceived satisfaction data. Further, understanding this relationship is an important step in bridging a gap for practitioners between research results and their observations of user behavior in system environments.


The objective of this article, then, is to examine behavioral and user perceived effectiveness empirically in the context of a particular class of information systems in order to investigate the critical relationships between the two measures. The importance of doing this is demonstrated by the application of both measures as dependent variables in examining the suitability of the characteristics of information systems in a number of organizations. The research reported in this article was conducted as part of a larger study on the implementation of computerized planning models in large organizations.

Methodology
The initial step in the research was to choose a measurement of user perceived effectiveness of the system. While a number of approaches to examining user perceived effectiveness have been proposed, the approach reported by Jenkins and Ricketts [14] is superior. It is one of the few (if not the only) approaches that develops an instrument to measure user satisfaction that is well grounded in a widely accepted theoretical model. Although Ives, et al. [13] point out shortcomings of this approach, the procedure adopted by Jenkins and Ricketts in developing and testing a satisfaction instrument provides a firm basis for researchers interested in this issue. The Jenkins and Ricketts framework (described below) is adapted to the unique needs of our situation to arrive at user perceived measures of system effectiveness.

In order to uncover the underlying factors that constituted overall satisfaction with the system, Jenkins and Ricketts hypothesized that Simon's [25] paradigm for the problem-solving process (intelligence, design, and choice phases) was an appropriate perspective on the manner in which users evaluate their experiences with the system. Using a factor analytic approach to test their claim empirically, they postulated that there are five key underlying dimensions that make up overall user satisfaction: report content, report form, assistance in problem solving, input procedures, and system stability. Table 1 shows the correspondence between each of the five dimensions and the problem-solving paradigm. Jenkins and Ricketts outlined the nature of the issues to be addressed under each of the five dimensions as follows:

Report Content:
  Accuracy of report contents
  Relevance of report contents
  Adequacy of report contents
  Understandability of report contents

Report Form:
  Quality of format
  Timeliness of report
  Mode of presentation
  Sequencing of information

Problem Solving:
  Usefulness for identifying and defining problems
  Usefulness for selecting among alternatives
  Power of the modeling language employed
  Flexibility of the modeling language involved

Input Procedures:
  Ease of understanding input procedures
  Comprehensiveness of documentation
  Interfacing languages
  Editor characteristics

Systems Stability:
  Response time
  Error proneness
  Reliability of the system
  Accessibility/availability of the system

Table 1. Dimensions of Perceived Effectiveness and the Problem-Solving Paradigm

Problem-Solving Paradigm    Perceived Effectiveness Dimensions
Intelligence                Input Procedures; Systems Stability
Design                      Problem Solving
Choice                      Report Contents; Report Form

Adapted from Jenkins, A.M. and Ricketts, J.A. "Development of an Instrument to Measure User Satisfaction with Management Information Systems," unpublished paper, Indiana University, Bloomington, Indiana, 1979.


This framework is used here for the measurement of user perceived effectiveness of the system.
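To make the factor analytic procedure that Jenkins and Ricketts describe concrete, the sketch below fits a five-factor model to a matrix of item responses. This is a minimal sketch under stated assumptions: the data are randomly generated stand-ins (the instrument described in Table 3 has 21 Likert items across the five dimensions), and scikit-learn's FactorAnalysis is just one of several estimators that could be used; it is not the authors' original procedure.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(seed=0)
    # Stand-in data: 100 respondents x 21 Likert items (4+4+4+4+5 items
    # across the five hypothesized dimensions). Real responses should show
    # items clustering on their intended factors; random data will not.
    responses = rng.integers(1, 6, size=(100, 21)).astype(float)

    fa = FactorAnalysis(n_components=5, random_state=0)
    fa.fit(responses)

    # Loadings matrix (21 items x 5 factors): inspect which factor each
    # item loads on to assess the hypothesized dimension structure.
    loadings = fa.components_.T
    print(np.round(loadings, 2))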

Data Collection
The intention of this research was to examine situations in the field where model-based computerized systems were in use. These included systems that utilized a predefined model of a particular problem and also those that provided an environment for model building. The idea was to study those systems that catered to corporate-planning related activities (those devoted to addressing strategic concerns [1]) rather than production-oriented transaction processing systems. Furthermore, the focus of the study was on large firms where there existed either a corporate planning staff which used computer-based modeling systems, or a formal systems department which provided technical support for the use of such systems.

A pilot study was conducted in two large organizations where it was known that the planning staff had been using computer-based planning models in the past. Those people responsible for the functioning of the system were interviewed to obtain: (1) a general understanding of the environment in which the system operated, (2) input for appropriate behavioral measures of system use, and (3) an examination of the preliminary version of the perceived effectiveness instrument for content validation.

From this phase of the study a few tentative observations were made. In large organizations, modeling systems were the responsibility of a planning department, a systems department, or an ad hoc group created specifically for that purpose. In some cases, the end user of the system rarely interacted with the system directly; instead, a technical intermediary performed tasks as defined by the user.

Appropriate behavioral measures used to monitor apparent system effectiveness were identified. They included: number of formal reports generated, connect time with the system, and number of interaction sessions. It was also noted that the issues identified for the measurement of perceived effectiveness were, in fact, relevant. A review of the issues with the pilot participants resulted in minor changes being made to the instrument while retaining the basic dimensionality referred to earlier.

The next phase of the study involved administering the revised instrument to a larger sample for the purpose of data analysis. Firms with corporate planning staffs were identified using the directory of the North American Society for Corporate Planning. The person listed in the directory was contacted, the project was explained by the researcher, and the cooperation of the firm was solicited. A total of 37 firms agreed to participate in the study. These firms were selected after it was determined that they were involved in developing and using modeling systems on a regular basis, and that they were willing to participate in a study of this nature. The instruments were then mailed to the contact person. While the effectiveness instrument was aimed at the end user, other measures relevant to the study were obtained from a technical support person affiliated with the system. Usable responses were obtained from 29 firms, representing a response rate of 78%. Table 2 shows some characteristics of the sites involved in the study.

Table 2. Site Characteristics

User Access Mode:
  Direct access ............................ 15
  Indirect access (through an intermediary). 14

Type of System Used:
  Developed in-house ....................... 16
  Purchased from an outside firm ........... 11
  Leased from an outside firm ..............  2

Application Type(1):
  Financial planning ....................... 16
  Corporate planning ....................... 13
  Marketing planning .......................  9
  Production planning ......................  3

User Job Titles:
  Manager, Corporate Planning .............. 11
  Director, Strategic Planning .............  8
  Manager, Planning Systems ................  3
  Assistant VP, Planning ...................  2
  Planning Analyst .........................  2
  Senior Project Manager ...................  1
  Division Planner .........................  1
  Director, MIS ............................  1

(1) Many sites reported the use of the system for multiple applications; hence the total in this category adds up to a number greater than the number of sites involved in the study.



Data Analysis, Results and Interpretation


Table 3 provides a list and explanations of the measures used in the data analysis. In order to examine the relationships of interest, an associative analysis was performed between the two types of variables involved in effectiveness measurement. Table 4 shows the results of the analysis.

Examination of the significant correlation coefficients in Table 4 reveals some interesting phenomena. The correlation between TPSESS and OFORM is significant and inverse. This indicates that, as far as benefits from the system are concerned, users who spend longer time periods at each session with the system tend to see the system as not contributing favorably to their operations. It also appears that users spending a large amount of time at an interaction session are more inclined to look for assistance in problem solving tasks. While they may find the system to be quite useful in providing such assistance (evidenced by the positive correlation between TPSESS and PSOLV), such extended interaction sessions may not necessarily be directed toward the most pressing tasks at hand (retrieving information in a specific output form) and may come at the expense of equally, if not more, important activities. The primary difference between users with long interaction sessions and those with shorter ones may be that the former tend to use the system for focusing on an unstructured problem for definition and consequent solution generation. Those with relatively shorter interaction sessions tend to use the system for assistance in answering specific questions that do not extensively test the output form capabilities of the system.

Substantive issues are brought into prominence in determining whether a user is a heavy or a light user in comparison to others in the firm. The ability of the system to help the user structure a problem and seek out viable alternative solutions, coupled with the accuracy and understandability of the outputs it generates, appear to be strong motivators for system use. This is evidenced by the positive and significant correlations between USETYPE and OCONT, and between USETYPE and PSOLV. System use to aid in decision making is a function of its ability to assist in problem structuring and search for solutions, as much as its ability to provide accurate information.

An especially interesting observation from this table is the fact that the frequency with which the system is used is not significantly correlated with any of the perceived dimensions of effectiveness. This is in direct contradiction to findings mentioned earlier in this article in the context of MIS use. It also lends credence to Ginzberg's [8] position that actual use of the system may not always be an indicator of system worth. The absence of pervasive association between actual use and perceived system worth is not surprising in the context of this research, because the very philosophy behind modeling systems is that they are not used on a production basis, but in fact are used sporadically as the situation dictates. The results caution us that when we use a measure of system effectiveness, especially in the context of systems that support decision making of a more strategic nature, we may be wise to assess the role of behavioral measures separately from perceived measures of system effectiveness.



Table 3. Measures Used in the Study

Perceived Measures

  Output Contents (OCONT)
    Explanation: Quality of the contents of system output. (Sample item: "Outputs provided by the system are relevant to the decisions I make.")
    Type: Index of four items, each measured on a 5-point Likert scale (Cronbach's alpha = 0.81).

  Output Form (OFORM)
    Explanation: Quality of the form in which the output is received. ("Outputs contain information in the sequence that I find to be useful.")
    Type: Index of four items, each measured on a 5-point Likert scale (Cronbach's alpha = 0.67).

  Problem-Solving Capabilities (PSOLV)
    Explanation: Quality of the system as an aid in problem solving. ("The system helps me select from among many alternative solutions.")
    Type: Index of four items, each measured on a 5-point Likert scale (Cronbach's alpha = 0.83).

  Input Procedures (INPUT)
    Explanation: Procedures for data input. ("It is difficult to understand the input procedures for using the system.")
    Type: Index of four items, each measured on a 5-point Likert scale (Cronbach's alpha = 0.79).

  System Stability (STABL)
    Explanation: Operational stability of the system. ("System has been up and running whenever I have needed to use it.")
    Type: Index of five items, each measured on a 5-point Likert scale (Cronbach's alpha = 0.61).

Behavioral Measures

  Frequency of Use (USEFREQ)
    Explanation: Frequency of use of the system.
    Type: Number of accesses per month.

  Time per Session (TPSESS)
    Explanation: Average connect time per access.
    Type: Minutes of connect time.

  Number of Reports (NREPS)
    Explanation: Average number of formal reports/documents generated using the system output.
    Type: Average number of reports per month.

  User Type (USETYPE)
    Explanation: Type of user, relative to other users in the firm.
    Type: Ordinal measure (light, average, heavy).
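For concreteness, the following sketch shows how a multi-item index and its Cronbach's alpha (the reliability figures quoted in Table 3) could be computed. The responses and the dimension name are illustrative stand-ins, not the study's data.

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                          # items in the index
        item_vars = items.var(axis=0, ddof=1)       # per-item variances
        total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    # Illustrative responses: 6 users x 4 Likert items for one dimension
    # (e.g., OCONT); each cell is a 1-5 agreement rating.
    scores = np.array([
        [4, 5, 4, 4],
        [3, 3, 4, 3],
        [5, 5, 5, 4],
        [2, 3, 2, 3],
        [4, 4, 5, 4],
        [3, 4, 3, 3],
    ])
    print(round(cronbach_alpha(scores), 2))  # index reliability
    ocont_index = scores.mean(axis=1)        # per-user index score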


Table 4. Associations Between Perceived and Behavioral Measures of Effectiveness

                              Perceived Measures
Behavioral
Measures     OCONT      OFORM      PSOLV       INPUT      STABL
USEFREQ      0.040      -.173      0.218       0.086      0.198
TPSESS       -.013      -.402*     0.382*      -.311      -.283
NREPS        0.221      0.194      -.237       0.052      -.021
USETYPE      0.395*     -.053      0.620***    0.014      0.045

Kendall's tau.  *p < 0.10   **p < 0.05   ***p < 0.01
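The associative analysis behind Table 4 can be reproduced mechanically along the following lines. This is a sketch under assumed inputs: the arrays are illustrative stand-ins for the per-firm measures (the study had n = 29), and scipy's kendalltau (a tau-b variant that handles ties) is assumed as the estimator rather than the author's original software.

    import numpy as np
    from scipy.stats import kendalltau

    # Illustrative stand-ins for per-firm measures; not the study's data.
    data = {
        "TPSESS":  np.array([30, 45, 12, 60, 25, 50, 18, 40]),  # minutes/session
        "USETYPE": np.array([1, 2, 1, 3, 2, 3, 1, 2]),          # 1=light..3=heavy
        "OFORM":   np.array([3.2, 2.5, 4.0, 2.1, 3.5, 2.8, 3.9, 3.0]),
        "PSOLV":   np.array([2.9, 3.8, 2.5, 4.2, 3.1, 4.0, 2.7, 3.6]),
    }

    # Cross-tabulate behavioral x perceived measures, as in Table 4.
    for b in ("TPSESS", "USETYPE"):
        for p in ("OFORM", "PSOLV"):
            tau, pval = kendalltau(data[b], data[p])
            print(f"{b} x {p}: tau = {tau:+.3f}, p = {pval:.3f}")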

Research Implications

The previous section demonstrated the importance of examining behavioral and perceived effectiveness separately in the context of MIS research. In this section, we examine the fit between the technical sophistication of a computer-based modeling system (its capabilities) and user needs, and the impact of such a fit on system effectiveness.

Typically, the kind of promotion engaged in by vendors of commercial software packages centers around the features of the system or the kinds of analyses the system can perform. An example of this is the promotional booklet distributed by EPS Inc. entitled "Selecting and Evaluating a DSS" [5], in which a set of over 100 features is presented. The mere presence (or absence) of a particular technical feature is a poor indication of the future effectiveness of a system. Two other factors need to be considered. The first is whether or not these features are used, if in fact they do exist. If a feature is incorporated into a particular system and is not used, the system can be treated as no more sophisticated, with respect to that feature, than one that does not incorporate the feature at all. The second is whether or not a particular feature is desired, if it does not exist in the system in its present form. If a missing feature is desired, then that system is not as sophisticated as one where the feature is likewise missing but is also not desired. The notion of comparability, then, is the key issue in this user-based perspective of technical sophistication. If we were to classify a set of systems according to the existence of a feature, the extent of use of a feature, and the extent to which a feature is desired, we can rank them ordinally to indicate the degree of technical sophistication each one possesses.

Information concerning the technical sophistication of each of the systems studied was obtained by asking the system support person a set of questions concerning some specific features that such systems typically possess (Table 5).

Table 5. Common Features of Modeling Systems(1)
 1. "What if" capability
 2. Flexible report writer
 3. Hierarchical consolidation
 4. Goal seeking capability
 5. Forecasting capability
 6. Use of internal databases
 7. Use of external databases
 8. Risk analysis
 9. Seasonal adjustment
10. Graphics
11. Linear programming
12. Simultaneous system of equations
13. Statistical analyses
14. Ad hoc calculations
15. Modeling language
16. Linkages to other packages
17. Linkages to other languages
18. Equation reordering
19. Security provisions
20. Text processing

(1) No ranking is implied by this list.

The specific features that were selected represented the twenty most widely publicized features of such systems, based on an examination of consultant reports and sales publications of many firms marketing these products. For each of the features, the respondent was asked the following three questions:

1. Does your system possess this feature?
2. If it does, is it used?
3. Is the feature needed?

Table 6 shows how the different combinations of use, availability, and needs are scored. Two sophistication scores are computed, based on the needs-availability and the use-availability interactions. The scoring scheme is designed to measure the extent of fit that exists between the needs of the situation and the technical capabilities of the system. Srinivasan and Kaiser [26] reported the use of the needs-availability interaction in determining user information requirements in a laboratory experiment. Their results revealed that one needs to consider both the importance and the availability of information during the requirements analysis stage of system design.

Table 6. Scoring for Technical Sophistication

TSOPH1 (needs-availability fit)
                                  NEED
Availability     No    Don't Know    Some Extent    Large Extent
  No              3         3             1              1
  Yes             1         1             2              3

TSOPH2 (use-availability fit)
                                  USE
Availability     No    Some Extent    Large Extent
  Yes             1         2              3

Consider the technical sophistication 1 score (TSOPH1). TSOPH1 is an indication of the degree of fit between the existing and nonexisting features, and the extent to which each feature is perceived as being needed for decision-making tasks. A good fit between needs and availability is achieved when the system has a feature and it is needed to a large extent. Similarly, a good fit is also achieved when a system does not have a feature and it is not needed. These two good-fit situations are given a score of 3. A score of 1 is given to a situation where the quality of the fit is the poorest. These include situations where the feature does not exist and it is even marginally needed. Lastly, an intermediate score of 2 is given to the situation where the feature exists and it is needed to some extent.

The technical sophistication 2 score (TSOPH2) is a measure of sophistication that involves the extent to which an existent feature is actually used in decision-making tasks. If a feature exists and it is used to a large extent, a score of 3 is assigned to indicate a good fit, and if it is not used, a score of 1 is assigned to indicate a poor fit. An intermediate score of 2 is assigned to the situation where the feature exists and it is used to a moderate extent.
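A minimal sketch of how the Table 6 scheme could be turned into per-system scores follows. The cell values come from Table 6; the function name and the encoding of the three survey answers are illustrative assumptions, not the study's code.

    # TSOPH1 cell values from Table 6: (feature available?, need level) -> score.
    NEED_SCORE = {
        (False, "no"): 3, (False, "dont_know"): 3,
        (False, "some"): 1, (False, "large"): 1,
        (True, "no"): 1, (True, "dont_know"): 1,
        (True, "some"): 2, (True, "large"): 3,
    }
    # TSOPH2 cell values: use level of an available feature -> score.
    USE_SCORE = {"no": 1, "some": 2, "large": 3}

    def sophistication_scores(features):
        """features: (available, need_level, use_level) per surveyed feature."""
        tsoph1 = sum(NEED_SCORE[(avail, need)] for avail, need, _ in features)
        # TSOPH2 is defined only over features the system actually possesses.
        tsoph2 = sum(USE_SCORE[use] for avail, _, use in features if avail)
        return tsoph1, tsoph2

    # Illustrative system: "what if" present/needed/used heavily; risk
    # analysis absent but somewhat needed; graphics present but unneeded/unused.
    system = [(True, "large", "large"), (False, "some", "no"), (True, "no", "no")]
    print(sophistication_scores(system))  # -> (3 + 1 + 1, 3 + 1) = (5, 4)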

At the most rudimentary level, we can simply count the absolute number of features a particular system has and then see if the number of features has a bearing on the effectiveness of the system. Table 7 shows the association between system effectiveness and technical sophistication measured in terms of the absolute number of features possessed by the system. This measure of technical sophistication is positively related to USEFREQ and negatively related to USETYPE and OCONT. An increase in the number of features results in people becoming relatively light users, but they also tend to access the system more on an absolute basis. In the absence of TPSESS being involved in any significant associations, it appears that users tend to access the system more frequently out of curiosity about the features. However, a large number of features leads to a perception that other users may actually be using the system to do useful work.


Table 7. Association Between Technical Sophistication (Number of Features) and System Effectiveness

Effectiveness Measure    Technical Sophistication
USEFREQ                   0.367*
TPSESS                    -.017
NREPS                     0.080
USETYPE                   -.443**
OCONT                     -.533**
OFORM                     0.063
PSOLV                     -.040
INPUT                     0.120
STABL                     0.108

Kendall's tau.  *p < 0.10   **p < 0.05

The argument is fortified by the negative association between this measure of sophistication and OCONT. It appears that users are unable to use an increasing number of features effectively. This suggests that there may be some benefit to keeping the number of features at an optimal but limited level, one that is not so high that users cannot use the system effectively.

Examining the number of features per se may not be a relevant measure of technical sophistication from a decision-making perspective. The results in Table 7 even seem to indicate that having more features may be of dubious value to the user. However, we need to examine how the need for each feature, and the extent to which the feature was actually used, relate to this issue. Accordingly, the sophistication scores based on needs, availability, and use were computed for each system. Table 8 examines the relationships between the two sophistication scores and the various effectiveness measures.

Some interesting observations can be made from the results in Table 8. The time spent per interaction session is positively associated with TSOPH1. A good match between needs and availability leads users to spend more time per interaction session, indicating that they are more comfortable in their knowledge about what the system can and cannot do. Increased sophistication along this dimension also leads to a decreased number of formal reports prepared per month using the system output. This is no indication of whether decision-making quality has gone up or down; instead, it points to the dangers of using behavioral measures alone as indicators of system effectiveness. TSOPH1 is also positively related to the problem-solving dimension of system effectiveness. This result supports the hypothesis of interest in this section. A good match between system feature needs and availability enables users to extract the substantive content of the information the system provides. The impact is really felt where the information is used for addressing alternative solutions to the problem at hand.

The behavioral measures of effectiveness are not impacted significantly by TSOPH2; however, the output content dimension of perceived effectiveness is positively and significantly related to TSOPH2. This result supports the point of interest here: the importance of user-based measures of technical sophistication. Rather than merely increasing the number of features a system has, it is important to ascertain which features are going to contribute specifically to the users' tasks and then seek to achieve a fit between those needs and the system.


Table 8. Associations Between Technical Sophistication (Modified) and System Effectiveness

Effectiveness Measure    TSOPH1     TSOPH2
USEFREQ                   0.103      0.304
TPSESS                    0.410*     -.021
NREPS                     -.449**    -.068
USETYPE                   0.264      0.025
OCONT                     -.087      0.363*
OFORM                     -.269      0.141
PSOLV                     0.349*     0.088
INPUT                     0.135      0.079
STABL                     0.143      0.199

Kendall's tau.  *p < 0.10   **p < 0.05


Concluding Remarks
This article attempts to make a case for a more thoughtful consideration of MIS effectiveness measures in research. Specifically, by examining the relationships between behavioral and perceived measures of MIS effectiveness in a modeling application context, we have emphasized the fact that the two are not always positively associated with each other, as is suggested by much of the MIS empirical literature. By understanding the relationships between the various specific dimensions of perceived effectiveness and commonly accepted behavioral measures, researchers and practitioners will be able to interpret data pertaining to such measures more accurately.

The second part of the article applied the above notion in an examination of the fit between the technical sophistication of a system and its effectiveness. Instead of measuring the technical sophistication of a system in terms of system features alone, we suggest a scoring scheme based on user needs and the existence of certain commonly encountered features. The significant associations between the sophistication measures thus obtained and the various measures of system effectiveness were both positive and negative. These results again point out the need to isolate the two types of measures.

The results shown in this article have strong implications for both researchers and practitioners. Researchers have to be extremely cautious about using surrogate measures of system effectiveness. While in certain classes of systems strong positive associations may exist between the two types of measures, in other classes of systems this relationship may be nonexistent. Researchers will have to specify clearly the exact nature of their dependent variables. System use and system effectiveness may be indicating two entirely different phenomena.

The implications for practitioners are also quite strong. They have to realize that a lack of strong behavioral indications of system use may not be a negative outcome. In fact, as these results have shown, there may very well exist an underlying flurry of problem-solving activity. Typical system auditing approaches must take this result into account when making system resource allocation decisions.

References
[1] Anthony, R.N. Planning and Control Systems: A Framework for Analysis, Harvard University, Boston, Massachusetts, 1965.
[2] Bailey, J.E. and Pearson, S.W. "Development of a Tool for Measuring and Analyzing Computer User Satisfaction," Management Science, Volume 29, Number 5, May 1983, pp. 530-545.
[3] Culnan, M.J. "Chauffeured Versus End User Access to Commercial Databases: The Effects of Task and Individual Differences," MIS Quarterly, Volume 7, Number 1, March 1983, pp. 55-67.
[4] Ein-Dor, P. and Segev, E. "Organizational Context and the Success of Management Information Systems," Management Science, Volume 24, Number 10, June 1978, pp. 1064-1077.
[5] EPS, Inc. "Selecting and Evaluating a Decision Support System," promotional brochure, 1981.
[6] Epstein, B.J. and King, W.R. "An Experimental Study of the Value of Information," OMEGA, Volume 10, Number 3, September 1982, pp. 249-258.
[7] Ginzberg, M.J. "Early Diagnosis of MIS Implementation Failure: Promising Results and Unanswered Questions," Management Science, Volume 27, Number 4, April 1981, pp. 459-478.
[8] Ginzberg, M.J. "Finding an Adequate Measure of OR/MS Effectiveness," Interfaces, Volume 8, Number 4, August 1978, pp. 59-62.
[9] Hamilton, S. and Chervany, N.L. "Evaluating Information System Effectiveness - Part I: Comparing Evaluation Approaches," MIS Quarterly, Volume 5, Number 3, September 1981, pp. 55-69.
[10] Hamilton, S. and Chervany, N.L. "Evaluating Information System Effectiveness - Part II: Comparing Evaluator Viewpoints," MIS Quarterly, Volume 5, Number 4, December 1981, pp. 79-86.
[11] Guthrie, A. "Attitudes of the User Managers Towards MISs," Management Informatics, Volume 3, Number 5, October 1974, pp. 221-232.
[12] Ives, B. and Olson, M.H. "User Involvement and MIS Success: A Review of Research," Management Science, Volume 30, Number 5, May 1984, pp. 586-603.
[13] Ives, B., Olson, M. and Baroudi, J. "The Measurement of User Information Satisfaction," Communications of the ACM, Volume 26, Number 10, October 1983, pp. 785-793.
[14] Jenkins, A.M. and Ricketts, J.A. "Development of an Instrument to Measure User Satisfaction with Management Information Systems," unpublished discussion paper, Indiana University, Bloomington, Indiana, 1979.
[15] King, J.L. and Schrems, E.L. "Cost-Benefit Analysis in IS Development and Operation," Computing Surveys, Volume 10, Number 1, March 1978, pp. 19-34.
[16] King, W.R. and Rodriguez, J.I. "Evaluating Management Information Systems," MIS Quarterly, Volume 2, Number 3, September 1978, pp. 43-51.
[17] Kleijnen, J.P.C. Computers and Profits: Quantifying Financial Benefits of Information, Addison-Wesley, Reading, Massachusetts, 1980.
[18] Lucas, H.C. The Implementation of Computer Based Models, National Association of Accountants, New York, New York, 1976.
[19] Lucas, H.C. "Performance and the Use of a Management Information System," Management Science, Volume 21, Number 8, April 1975, pp. 908-919.
[20] Maish, A.M. "A User's Behavior Toward His MIS," MIS Quarterly, Volume 3, Number 1, March 1979, pp. 39-52.
[21] Marschak, J. and Radner, R. Economic Theory of Teams, Yale University Press, New Haven, Connecticut, 1972.
[22] Nigam, R. and Hong, S. "Analytical Hierarchy Process Applied to the Evaluation of Financial Modeling Software," Proceedings of the DSS-81 Conference, Atlanta, Georgia, 1981.
[23] Robey, D. "User Attitudes and MIS Use," Academy of Management Journal, Volume 22, Number 3, September 1979, pp. 527-538.
[24] Schewe, C.D. "The Management Information System User: An Exploratory Behavioral Analysis," Academy of Management Journal, Volume 19, Number 3, September 1976, pp. 577-589.
[25] Simon, H.A. The New Science of Management Decision, Harper and Brothers, New York, New York, 1960.
[26] Srinivasan, A. and Kaiser, K.M. "The Role of Information Accessibility in Information Requirements Analysis," Systems, Objectives, Solutions, Volume 4, Number 4, November 1984, pp. 201-210.
[27] Swanson, E.B. "Management Information Systems: Appreciation and Involvement," Management Science, Volume 21, Number 2, October 1974, pp. 178-188.
[28] Welsch, G.M. "Successful Implementation of Decision Support Systems: Pre-Installation Factors, Service Characteristics, and the Role of the Information Transfer Specialist," unpublished Ph.D. dissertation, Northwestern University, Evanston, Illinois, 1980.
[29] Zmud, R.W. "Individual Differences and MIS Success: A Review of the Empirical Literature," Management Science, Volume 25, Number 10, October 1979, pp. 966-979.

About the Author


Ananth Srinivasan is Assistant Professor of Operations and Systems Management in the School of Business, Indiana University. He received his Ph.D. in MIS from the University of Pittsburgh in 1983. His research interests are in the areas of MIS performance, data modeling, and the management of MIS. Dr. Srinivasan has published articles in Academy of Management Journal, Systems, Objectives, Solutions, and Applications of Management Science. He is a member of SIM, ACM, AIDS, and TIMS.

