
Evaluating Overall Online Service Quality and Customer Satisfaction of EDUGATE Portal at King Saud University
Abstract: Organisations are increasingly interested in deploying quality evaluation as an essential activity for their web sites. Because Quality Function Deployment (QFD) is a design technique that primarily attempts to carry the voice of the customer through every planning and design activity, the voice of the customer is given central importance in this research. Despite the availability of many online service quality evaluation models in the research literature, most have been reported without demonstrating coherence between the hypothesis-testing model and the underlying conceptual model. In order to provide such referential adequacy, especially at the level of mature research (as opposed to purely academic research), a portal-based online service quality instrument called WePoServQual is proposed. The WePoServQual instrument is developed from the service quality model of Parasuraman et al. (1988) and tested for its consistency, as well as its measurement and evaluation capacity, using EDUGATE, an educational portal of King Saud University, Riyadh, Saudi Arabia. For this purpose, an online survey was posted to collect customer evaluations of service quality. Responses from 48 subjects were collected within the King Saud University community, especially covering only the Deanships of the College of Administrative Sciences and Computer and Information Sciences, e-Transactions and Communications (IT), and Student Registration. The results indicate that WePoServQual is an effective instrument for evaluating overall service quality. However, the relationship between service quality and customer satisfaction was found to be questionable.

Keywords: E-Service Quality, Edugate, Evaluating Educational Portals, Evaluating Portal Service Quality, King Saud University, Quality Function Deployment.

1. Introduction
As Sahut and Kucerova (2003) elucidate, quality function deployment (QFD) is a distinguished service design technique that primarily attempts to deliver the voice of the customer throughout every planning and design activity. Taking the current trend towards a customer-centric approach into consideration, our QFD evaluation approach results in a set of service quality dimensions and items derived from service quality gap theory (Parasuraman et al., 1985; Parasuraman et al., 1991). It also provides decision support to portal development project managers for further improving portal service quality towards better customer satisfaction. Yang and Fang (2004) note that listening to customer voices is the initial step in planning service quality improvement endeavours. In turn, identifying customer perceptions of service quality and their satisfaction or dissatisfaction provides a frame of reference with which online service providers can self-evaluate their service performance. Analyzing the voice of the customer and building a framework for bringing it into practice are the leading steps of the methodology and the most important steps in regulating the quality of portal services (Sahut and Kucerova, 2003).

Zeithaml is considered one of the pioneers who introduced the concept of electronic service quality (e-SQ) and who examined the service quality of web sites and their role in service quality delivery to customers. She defined e-SQ and web site service quality as the extent to which a web site facilitates efficient and effective service delivery (Zeithaml, 2002; Al-Mushasha and Hassan, 2011).

Web portals are increasingly being used for a variety of tasks in different domains. Since web portals can be used to provide information, services and applications to the customer, studying their service quality is an important prerequisite for their successful implementation and performance. ISO distinguishes internal and external quality perspectives (Herrera et al., 2010), and these perspectives can also be applied to the study of the service quality of educational web portals.

Online service quality is a much-studied concept. There is considerable evidence that user expectations and perceptions of self-service and online service quality differ. Improvement in online service quality can nevertheless be pursued by making the web site more self-service oriented. Universities are at the forefront of this view, implementing web portals that support a wide range of online transactions for a wide range of stakeholders. Stakeholders in a university include students, faculty, staff, administrators, the government and the wider society in which the university operates (Sahney et al., 2004). A research study on the service quality of web portals is proposed to be conducted in the King Saud University environment. Our proposed study can offer service providers and organisations a number of insights, such as how to improve the relationship between faculty, staff and students, and how customers incline towards adopting new technology when it satisfies their expectations. A considerable negative effect can be expected if organisations are slow to adopt a customer-centric viewpoint and persist in providing interfaces that are inconsistent with customer perceptions (Tate, Evermann, Hope and Barnes, 2009).

1.1 King Saud University
King Saud University is the premier institution of higher education in the city of Riyadh in the Kingdom of Saudi Arabia. It was established in 1957 to enhance knowledge and learning capabilities towards the nation's growth and well-being. It plays a contributory role as a source of skilled professionals to meet the needs of the nation in various academic fields such as medicine, engineering, agriculture, science, humanities, languages and Islamic culture. In addition to its role in teaching and research, it performs vital functions in health care and private sector development (King Saud University web site).

1.2 EDUGATE - Online Academic Portal
EDUGATE is the online academic portal of King Saud University (Cap 252, 2010). EDUGATE is a comprehensive, user-friendly system that enables students to use many services relevant to their course registrations and to modify, confirm and print their schedules. Through EDUGATE, students can monitor their academic progress, view transcripts/grades and more. They can also provide feedback and evaluations of their instructors (Student_Guide V5, 2012). Instructors can monitor the academic progress of their students; they can mark absences, award marks for assignments performed by the students, edit their profiles, etc. (Cap 252, 2010). Although the course registration system is fully automated, overriding certain regulations and exceptions requires the intervention and assistance of departmental officials (Student_Guide V5, 2012).

EDUGATE services are categorized mainly into five modules: Academic Calendar, Courses Schedule, Major Plans, Graduation Document and Student Reports Validation. A username and password must initially be obtained from the Deanship of e-Transactions and Communications once a person is associated with King Saud University as either a student or a staff member; user types are accordingly categorized as student and staff.

Some hosting-server-side statistics of the EDUGATE portal of King Saud University are as follows. EDUGATE is ranked #8,367 on the World Wide Web; generally, the lower the rank, the more popular the web site. Although the web site is estimated to earn at least $476 USD per day from advertising revenues, giving an overall value of $347,951 USD (Statmyweb, 2010), the siteglimpse.com web site reports that daily ad revenue and estimated revenue are not applicable to this site. The average page load time is reported as 0 seconds. The PageRank is 5/10, with a total of 104,750 unique visitors and 398,051 page views per day. The web portal address is www.edugate.ksu.edu.sa. The server is located in Dallas, Texas, United States, with an SEO score of 61.9% (Statmyweb, 2010).

2. Aim and Objectives of the Research
The aim of the research is to evaluate King Saud University customers' satisfaction with the EDUGATE portal web site using service quality as an instrument. Customer satisfaction is defined as the level of service quality performance that meets users' expectations (Wang and Shieh, 2006, p. 197). Service quality is the difference between customers' expectations and perceptions (Parasuraman et al., 1985). Satisfaction is an important measure of service quality (Filiz, 2007; Somaratna, Peiris and Jayasundara, 2010), or it could be vice versa, so the relationship between service quality and customer satisfaction is a complex one. Service quality has also been defined as a component of user satisfaction (Somaratna, Peiris and Jayasundara, 2010). Somaratna, Peiris and Jayasundara (2010) present an argument on the relationship between satisfaction and performance: user satisfaction may or may not be directly related to the performance of the web portal services on a specific occasion, since user satisfaction is defined as the emotional reaction to a specific transaction or service encounter. Customers can give a negative answer to a query, out of dissatisfaction, following an upsetting or angry encounter with an otherwise positively influencing service quality feature. Conversely, a customer could give a positive answer about feeling satisfied as a result of a pleasant encounter. Though the nature of the relationship between service quality and satisfaction remains unclear, expectations and perceptions are considered key instruments in empirical studies of service quality and customer satisfaction; these two large research paradigms are often treated as synonyms within the service business and adopted by many researchers, especially when evaluating overall service quality and overall customer satisfaction (Somaratna, Peiris and Jayasundara, 2010). The five perspectives of quality evaluation mentioned by Lovelock and Wirtz (2007, p. 418) are: (i) transactional/encounter based, (ii) product based, (iii) user based, (iv) manufacturing based and (v) value based (Kabir and Carlsson, 2010).

Since it is posited (Zeithaml et al., 1990; Altman and Hernon, 1998) that the customer or user is the best judge of service quality, a user-based evaluation approach has been adopted in this research for examining the level of service quality through evaluating the EDUGATE web portal.

The Service Gap Analysis model of Parasuraman et al. (1985) presents five gaps, of which the first four are identified as functions of the way in which the service is delivered from the service provider to the customer, while the fifth gap is connected to the customer. It is important for a service organisation to define the level of quality at which the web portal is to be operated (Kabir and Carlsson, 2010).

In this research, we classified the users of the EDUGATE portal into two categories: (i) service providing users and (ii) service requesting users. Service providing users are also called internal customers, and service requesting users external customers (AlSudairi and Vasista, 2013), of the organisation. We assume that the perceptions of the service providers of the EDUGATE portal (e.g. staff from the Deanship of e-Transactions and Communication, the Deanship of Admissions and Registration, the Deanship of Admission and Registration and Student Affairs in the province south of Riyadh, and the Deanship of Graduate Studies) represent the expectations of the (internal) customers, who define the initial level of quality of the portal, while the perceptions of the service requesters, i.e. the rest of the faculty, staff and students of King Saud University, represent the actual perceptions of the (external) customers. Thus, in this study, service quality is defined as the difference between the perceptions of the internal customers and the perceptions of the external customers.

In essence, the aim of the research is to evaluate overall customer satisfaction, and the objectives are the evaluation of service quality and of the relationship between overall service quality and customer satisfaction, based on a user-based service quality measurement approach.

3. Research Questions
1) To what extent does the EDUGATE portal satisfy the needs of faculty, staff and students?
2) What do users of the EDUGATE portal perceive to be the key quality dimensions of web-based portal services?
3) Which attributes of web service quality do users perceive to be relatively important?
4) Is there a statistically significant predictive relationship between portal service quality and user satisfaction?

4. Purpose of the Study
The purpose of this study is to evaluate the electronic service quality of the EDUGATE portal in order to determine customer satisfaction, and to advise service providers on improving electronic service quality based on the proposed service quality criteria. The service quality criterion is the proposition of a portal-based e-service quality instrument with a set of proposed service quality dimensions. These proposed e-service quality dimensions now need to be confirmed based on empirical or quantitative test results. The research statistically tests the developed portal service quality instrument (WePoServQual), which determines overall service quality and overall satisfaction by identifying whether an online service provider can influence its customers' use of a portal-based web site through a set of web site design features that match the service quality dimensions derived from the service quality gap model.

5. Research Methodology for Evaluating the EDUGATE Portal
The research methodology involves five sequential stages to achieve the research purpose and aim: conceptual model development, dimension and item generation, content validation with extensive literature review, exploratory study and confirmatory study (Tojib, Sugianto and Sendjaya, 2008). Although z-scores or t-scores and p-values can establish the basic effectiveness of the proposed WePoServQual instrument, confirmatory factor analysis is required to confirm the proposed dimensions, viz. Accessibility, Responsiveness, Usability, Functional Usefulness, Safety, Convenience and Realization.

A theoretical framework is proposed based on the strategy developed by AlSudairi (2012). A web portal service quality instrument is proposed with reference to several reviewed service quality models (Seth, Deshmukh and Vrat, 2005), such as Gronroos (1982), Parasuraman et al. (1985), Spreng and Mackoy (1996), Teas (1993), Berkley and Gupta's (1994) IT alignment model and Davis et al.'s (1989) Technology Acceptance Model. WePoServQual is an instrument for measuring web portal e-service quality. It examines seven (7) dimensions, viz. (i) Accessibility, (ii) Responsiveness, (iii) Functional Usefulness, (iv) Usability, (v) Safety, (vi) Convenience and (vii) Realization, with their corresponding thirty (30) items. The seven dimensions are fundamentally derived from the Service Quality Gap Model proposed by Parasuraman et al. (1985), which identified four service provider side gaps, viz. (i) Service Information Gap, (ii) Service Communications Gap, (iii) Service Standards Gap and (iv) Service Performance Gap, which together make up the customer-side service quality gap.

Antecedents of Online Service Quality
Online service providers exercise considerable latitude in designing their online offerings and web site interactivity to enable or subvert various features that match the basic service quality gap factors (Parasuraman et al., 1985). Further, the derived dimensions and items are proposed based on correlations elicited from the extensive research literature. Table 1 lists the dimensions with corresponding references to the supporting authors and their research literature.

Table 1. Proposed Portal Service Quality Evaluating Dimensional Features and Supporting Authors

1. Accessibility: Parasuraman et al. (1985), Berthon, Pitt and Watson (1996), Griffith and Krampf (1998), Johnston (1995), Zeithaml et al. (2001), Yang and Jun (2002), Yang et al. (2005), Kuo et al. (2005)
2. Responsiveness: Parasuraman et al. (1985), Griffith and Krampf (1998), Wilcox (1999), Johnston (1995), Loiacono et al. (2002), Zeithaml et al. (2002), Galletta, Henry, McKoy and Polak (2003), Abdullah (2006), Tate et al. (2007), Hoffman and Novak (2009)
3. Functional Usefulness: Davis (1989), Johnston (1995), Wooldridge, Jennings and Kinny (2000), Tojib, Sugianto and Sendjaya (2008)
4. Usability: Davis (1989), Doll et al. (1994), Yoo and Donthu (2001), Zeithaml et al. (2001), Loiacono et al. (2002), Gant and Gant (2002), Quesenbery (2007), Tate et al. (2007), Tojib, Sugianto and Sendjaya (2008), Tate et al. (2009)
5. Safety: Wooldridge, Jennings and Kinny (2000), Wolfinbarger and Gilly (2002), Zeithaml et al. (2002), van Enginlen (2004), Tate et al. (2007), Tojib, Sugianto and Sendjaya (2008), Tate et al. (2009)
6. Convenience: Zeithaml et al. (2001), Zhu, Siegel and Madnick (2001), Caulfield (2002), Papazoglou (2003), Tojib, Sugianto and Sendjaya (2008)
7. Realization: Rouvoy, Barone, Ding, Eliassen, Hallsteinsen and Lorenzo (2009), Erradi, Anand and Kulkarni (2006), Zhonghua and Erfeng (2010)

6. Research Method and Tools
A mixed method of conducting research is adopted, employing a combination of qualitative and quantitative methods. According to Tashakkori and Creswell (2007, p. 4), mixed-methods research is research in which the investigator collects and analyzes data, integrates the findings and draws inferences using both qualitative and quantitative approaches or methods in a single study or program of inquiry. Among the four types of mixed-methods research mentioned by Creswell (2008, p. 557), the fourth, the exploratory mixed-method design, has been adopted. The exploratory design collects data sequentially in two phases: it first explores the phenomenon, then collects quantitative data to explain the relationships found in the qualitative data. Only a limited number of instruments available in the research literature (Tate and Evermann, 2009; Hu, Zhao and Guo, 2009) can claim a theoretical basis for their suggested service quality dimensions, and some studies (Loiacono et al., 2002) have not clearly stated that they adopted mixed methods of research and truly explored and derived their service quality dimensions from underlying conceptual theories and models. Tan, Benbasat and Cenfetelli (2013), for example, have partly adopted this methodology in the context of e-government service quality. A number of researchers have evaluated the quality of site-based educational services using variations of the ServQual instrument (Holdford and Reinders, 2001; Sherry, Bhat, Beaver and Linng, 2004); unfortunately, the ServQual, e-SQ and web quality instruments are not directly applicable to the online educational environment (Shaik, Lowe and Pinegar, 2006). Hence, a general-purpose new instrument, the WePoServQual instrument, is designed and proposed based on the service quality gap model to measure electronic service quality, especially for portals, and its applicability is verified in the context of educational web portals within a university environment.

A self-administered questionnaire was designed as a summated scale capturing responses on a 5-point Likert scale against thirty (30) items. A summated scale, as the name implies, contains multiple items: the proposed thirty (30) items are combined, or summed, to produce scores for each of the seven (7) corresponding service quality dimensions under which the items are grouped. Each item in the scale is represented as either a statement or a question, and respondents are asked to rate it on a five-option Likert-type scale (Gliem and Gliem, 2003). Likert-type scales are used to gather responses in the social sciences, marketing, business and other disciplines concerning attitudes, emotions, opinions, personalities and descriptions of people's environments (Gliem and Gliem, 2003). Bertram (2006) defines a Likert scale as a psychometric response scale primarily used in questionnaires to obtain participants' preferences or degree of agreement with a statement or set of statements; respondents indicate their level of agreement with a given statement on an ordinal scale. Though there are many variations of the scale, with 4-point, 5-point, 6-point, 7-point and 9-point ranges, the 5-point scale is the most commonly used. The survey on the EDUGATE portal is designed to gather opinions about the portal's service quality. Respondents are typically instructed to select one of five responses: strongly agree, agree, undecided or neutral, disagree and strongly disagree. The specific responses to the items are combined so that individuals with the most favourable opinions have the highest scores, while individuals with the least favourable opinions have the lowest scores (Gliem and Gliem, 2003).
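As a concrete illustration of the summation just described, the following minimal Python sketch groups hypothetical 5-point ratings into dimension scores. The item codes anticipate the coding scheme introduced later in the Research Variables section; the ratings themselves are invented for illustration, not taken from the survey.

```python
# Hypothetical grouping of 5-point Likert items into two of the seven
# dimensions; item codes follow the paper's coding scheme, data are made up.
DIMENSIONS = {
    "ACC":  ["ABDI", "ADCI", "ABMW", "AFUM", "AOCC"],  # Accessibility items
    "RESP": ["RHSPL", "RFUS", "RPRG", "RMSG"],         # Responsiveness items
}

def summated_scores(responses, dimensions):
    """Sum each dimension's 5-point Likert ratings into a dimension score."""
    return {dim: sum(responses[item] for item in items)
            for dim, items in dimensions.items()}

# One respondent's (made-up) ratings on the 1-5 scale.
respondent = {"ABDI": 4, "ADCI": 5, "ABMW": 3, "AFUM": 4, "AOCC": 4,
              "RHSPL": 2, "RFUS": 3, "RPRG": 4, "RMSG": 3}
print(summated_scores(respondent, DIMENSIONS))
# {'ACC': 20, 'RESP': 12}
```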

The dimensions and items used to build the questionnaire are adopted from the WePoServQual instrument. Developing measurement instruments entails the principle of deduction, an approach in which research is conducted with reference to hypotheses and ideas inferred from theory, on the conviction that a testable hypothesis allows explanations of laws or principles to be assessed (Bryman, 2008, p. 13). In testing hypotheses, however, operationalization of the concepts, i.e. devising measures for the abstract concepts, is a necessary condition. Formal conceptual definitions and properties derived from the abstract concepts are necessary for validating the constructs of the hypotheses (Wacker, 2004).

The WePoServQual instrument consists of seven electronic service dimensions and thirty items selected from the literature review given in Table 1. These dimensions and items are also derived from, and mapped to, the four service quality gaps proposed by Parasuraman et al. (1985).

These thirty items of the seven dimensions are transcribed and transformed into corresponding questions. The 30 items, the 7 dimensions, overall service quality and overall customer satisfaction are treated as electronic service quality research variables, with codes assigned for statistical computation purposes.

The survey questions are customized to suit both service providers and service requesters. The questionnaire was posted online at questionpro.com. QuestionPro is web-based software for creating and distributing surveys online; it collects and records the responses, and results can be monitored in real time. It has built-in tools for statistical data analysis and for viewing results based on the survey data, and its reports can be exported to Word, Excel or SPSS-compatible formats with graphical displays of bar and pie charts (QuestionPro web site, 2013).

7. Research Variables
This section explains the various types of research variables used for hypothesis testing, together with their codes.

Independent Variables
Service quality is considered the independent variable of measure. The thirty independent research variables are: (i) ABDI, (ii) ADCI, (iii) ABMW, (iv) AFUM, (v) AOCC, (vi) RHSPL, (vii) RFUS, (viii) RPRG, (ix) RMSG, (x) FRI, (xi) FUDI, (xii) FSCI, (xiii) FVTS, (xiv) USC, (xv) UCSC, (xvi) UOHL, (xvii) UMMI, (xviii) SCA, (xix) SAA, (xx) SPA, (xxi) ST, (xxii) CMIC, (xxiii) CAS, (xxiv) CCC, (xxv) CPCS, (xxvi) CAWS, (xxvii) CTFS, (xxviii) CPAR, (xxix) RSDSA, (xxx) RSAPL.

Dependent Variables
The seven service quality dimensions are coded as: (i) ACC, (ii) RESP, (iii) USB, (iv) FU, (v) SFT, (vi) CONV and (vii) RLZ. Overall Portal Service Quality is coded as OPSQ, and Overall Customer Satisfaction is coded as OCS.

Hypotheses Declaration
Customer satisfaction is treated as the variable dependent on Overall Portal Service Quality: Overall Portal Service Quality is hypothesized to have a positive influence on Overall Customer Satisfaction. Irrespective of whether the relationship is linear or non-linear, it is hypothetically proposed that an increase in customer satisfaction is proportional to an increase in service quality.

Fig. 1 A Theoretical Framework for (Self-Service Oriented) Web Portal Service Influencing Online Service Quality and Customer Satisfaction.

The hypotheses, with the proposed statistical methods for testing them, are summarized below.

Hypothesis 1:
- H01 (null): There is no difference between (internal) customer expectations and (external) customer perceptions.
- H1 (alternate): External customers have higher expectations than internal customers.
- Proposed statistical methods: mean value, z-value (and/or t-value) and p-value.

Hypothesis 2:
- H02 (null): Each of the seven dimensions of the portal-based web portal positively influences overall service quality, and the dimensions are hence said to constitute the instrument that measures the (self-service oriented) Overall Portal Service Quality.
- H2 (alternate): Each of the seven dimensions of the portal-based web portal cannot influence overall service quality, and the dimensions hence cannot be claimed to constitute the instrument that measures the (self-service oriented) Overall Portal Service Quality.
- Proposed statistical methods: internal consistency and validation; a reliability test using the Cronbach's Alpha value for Overall Portal Service Quality and its interpretation; the Spearman-Brown prophecy for testing consistency within each service quality gap of the instrument.

Hypothesis 3:
- H03 (null): Overall Customer Satisfaction is influenced by Overall Portal Service Quality.
- H3 (alternate): Overall Customer Satisfaction is not influenced by Overall Portal Service Quality.
- Proposed statistical methods: confirmatory factor analysis, confirming that each factor of the instrument has a positive influence on overall service quality and overall customer satisfaction.
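The summary above names the Spearman-Brown prophecy among the proposed methods for Hypothesis 2. As a brief illustration, the sketch below implements the standard Spearman-Brown formula; the numbers in the example are invented, not taken from the study.

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Spearman-Brown prophecy: predicted reliability when a test is
    lengthened (or shortened) by the factor k = length_factor."""
    k = length_factor
    return k * reliability / (1 + (k - 1) * reliability)

# E.g. doubling a half-test whose split-half reliability is 0.70
# (illustrative numbers only):
print(round(spearman_brown(0.70, 2), 3))  # 0.824
```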

8. Population
There are two kinds of population: (i) service providers and (ii) service requesters. The service providers of the EDUGATE portal include staff from the Deanship of e-Transactions and Communication, the Deanship of Admissions and Registration, the Deanship of Admission and Registration and Student Affairs in the province south of Riyadh, and the Deanship of Graduate Studies. We targeted an approximate number of staff across all these Deanships, estimated as: nnnnnnn.

The total service requesters of the EDUGATE portal include all faculty, staff and students of King Saud University. The Ministry of Higher Education web site reports total enrollment of 66,174 and total staff of 5,994 for the year 2010 (MoHE web site, 2010, August 4). The targeted distribution population, however, is drawn from the College of Business Administration and the College of Computers and Information Systems.

8.1 Testing for the Mean of a Population
Two situations can be described for the two kinds of targeted population, (i) service providers and (ii) service requesters; the size of the service provider population is expected to be significantly smaller than that of the service requester population. Corresponding to the size of the sample, the following criteria have been set, summarizing the choice of statistical table and test (Anonymous, Undated):

- Population standard deviation known (standard error computed for the population, SE = σ/√n): large samples use normal tables (z-value and p-value); small samples also use normal tables (z-value and p-value).
- Population standard deviation unknown (standard error computed from the sample, SE = s/√n): large samples use normal tables (z-test); small samples use t-tables (t-test).
- Population variances and distributions not known (assumption: the distributions of the two groups are the same under the null hypothesis): very small samples use the non-parametric Mann-Whitney U test (rather than the unpaired t-test), or the Chi-Square or G-test (n > 5), or Fisher's exact test (if n <= 5). *To conduct the Fisher test, a demarcation benchmark must be set to categorize responses into guessed correctly and guessed incorrectly (for calculation of the p-value).
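To illustrate the last case above, the following Python sketch runs a Mann-Whitney U test and a Fisher's exact test on made-up data; SciPy is used purely as an assumption for illustration, since the study itself does not mention it.

```python
from scipy import stats

# When population variances/distributions are unknown and samples are small,
# the criteria above point to nonparametric tests. Illustrative data only.
provider = [4, 3, 5, 4, 2]
requester = [3, 4, 3, 5, 4, 3, 2, 4]

u, p = stats.mannwhitneyu(provider, requester, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p:.3f}")

# For very small counts (n <= 5 per cell), a 2x2 Fisher's exact test applies
# after dichotomizing responses against a benchmark (e.g. rating >= 4).
odds, p_f = stats.fisher_exact([[3, 2], [4, 4]])
print(f"Fisher's exact p = {p_f:.3f}")
```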

Test statistics are compared with the critical values of the normal distribution at a specific significance level when deciding whether or not to reject the null hypothesis (H0). The following equations are useful and relevant in calculating the test statistics.

1. If the population mean values of the provider and the requester are given as PMP and PMR respectively, then the population mean difference is PMD = PMP - PMR. The population mean is usually denoted by the symbol μ.

2. As it is not possible to gather the complete population data of King Saud University, a sample is collected via a questionnaire-based survey posted online using a free online survey tool such as questionpro.com or surveymonkey.com.

3. The population mean μ is thus estimated using the sample mean, usually denoted by the symbol x̄. This estimate tends to miss by an amount called the standard error (SE) of the mean, calculated as SE = s/√n, where s is the sample standard deviation and n the sample size.

4. The measure of variability (MoV), or the standard error of the difference (SED) between service provider and service requester, is computed from the standard errors SEP and SER of the two groups (each calculated with the equation above): SED = √(SEP² + SER²).

5. The test statistic is then the sample mean difference (SMD) minus zero (since under the null hypothesis we expect no difference), divided by the standard error of the difference: z = (SMD - 0)/SED. Calculation of this z-value is valid only for large sample sizes.

6. Calculating the t-value: in many real-world cases of hypothesis testing, the standard deviation of the population is not known and must be estimated using the sample standard deviation; when the sample size obtained is too small, t-values must be calculated instead of z-values. The sample standard deviation is s = √(Σᵢ(xᵢ - x̄)²/(n-1)), where n-1 is the degrees of freedom, and the standard error of the mean is s/√n. The t statistic is t = (x̄ - μ₀)/(s/√n), where μ₀ is the null hypothesis mean value. While the standard normal distribution is used for calculating the p-value in a single-sample z-test, for a single-sample t-test the t-distribution with n-1 degrees of freedom must be used. Standard values are available in tabular form for both the normal and the t distributions; in fact, the t-distribution with df = ∞ is identical to the standard normal distribution (Weaver, 2011).

7. Calculate the p-value: a p-value is the probability of obtaining a sample outcome, given that the value stated in the null hypothesis is true. The p-value of the sample outcome is compared to the chosen level of statistical significance; when a null hypothesis is rejected, the result is significant (Privitera, 2012).
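The Python sketch below works through steps 4-7 on invented data, assuming the usual form SED = √(SEP² + SER²) for the standard error of the difference; SciPy is used here only for illustration, as the study itself relies on QuestionPro and Excel.

```python
import numpy as np
from scipy import stats

# Illustrative samples of mean item scores (made-up values, not study data).
provider = np.array([4.1, 3.8, 4.4, 4.0, 3.9])
requester = np.array([3.6, 4.2, 3.9, 4.1, 3.7, 4.0, 3.8, 4.3])

# Standard error of each group's mean: SE = s / sqrt(n).
sep = provider.std(ddof=1) / np.sqrt(len(provider))
ser = requester.std(ddof=1) / np.sqrt(len(requester))

# Standard error of the difference (the SED / measure of variability above).
sed = np.sqrt(sep**2 + ser**2)

# z statistic: (sample mean difference - 0) / SED, with a two-tailed p-value.
z = (provider.mean() - requester.mean() - 0) / sed
p = 2 * stats.norm.sf(abs(z))
print(f"z = {z:.3f}, p = {p:.3f}")

# One-sample t-test of the requester sample against a benchmark mean,
# using n-1 degrees of freedom internally.
t, p_t = stats.ttest_1samp(requester, popmean=4.0)
print(f"t = {t:.3f}, p = {p_t:.3f}")
```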

8.2 Significance of Cronbach's Alpha
Cronbach's Alpha is a tool for assessing the reliability of measurement scales, for example scales measuring service quality. Various types of measurement scale, such as nominal, ordinal, interval and ratio scales, are described in Sekaran and Bougie (2009). When individuals attempt to quantify constructs that are not directly measurable, they often use multiple-item scales and summated ratings (Gliem and Gliem, 2003). Summated scales are often used in survey instruments to probe underlying constructs that the researcher wants to measure. A construct, usually represented by a variable name, consists of indexed responses to dichotomous or multi-point questionnaires, which are later summed to arrive at a resultant score associated with a particular respondent. Variables derived from test instruments are declared reliable only when they provide stable and consistent responses over repeated administrations of the test. Cronbach's Alpha determines the internal consistency, or average correlation, of the items in a survey instrument to gauge its reliability (Santos, 1999). Santos (1999) further explains the use of the ALPHA option of the PROC CORR procedure in the SAS statistical tool to assess and improve the reliability of variables derived from summated scales. Cronbach's Alpha can, however, also be calculated using other statistical tools such as SPSS and Microsoft Excel; there are web sites where the value of Cronbach's Alpha can be computed online, and ready-made Microsoft Excel workbooks are also available.
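For readers without access to the SAS, SPSS or Excel tools mentioned above, a minimal Python sketch of the standard Cronbach's Alpha formula follows; the response matrix is invented for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up responses: 6 respondents x 4 Likert items on a 1-5 scale.
data = np.array([[4, 5, 4, 4],
                 [3, 3, 4, 3],
                 [5, 5, 5, 4],
                 [2, 3, 2, 3],
                 [4, 4, 5, 4],
                 [3, 4, 3, 3]])
print(round(cronbach_alpha(data), 3))
```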

The EDUGATE portal evaluation involved gathering data on the 30 items, represented by the 30 research variables grouped under the 7 e-service quality dimensions.

Data on the various multi-item constructs representing the different components of service quality and customer satisfaction must be tested for reliability and validity by computing Cronbach's Alpha values. A factor analysis may also be required to assess convergent validity.

All individual loadings are recommended to be greater than or equal to 0.5 (Hair et al., 1998; Mohammed and Alhamadani, 2011). A reliability coefficient above 0.7 is usually considered sufficient for exploratory studies (Nunnally, 1967).
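As an illustration of checking loadings against the 0.5 criterion, the sketch below extracts seven factors with scikit-learn. This is only an assumption-laden stand-in for the SPSS/SAS-style analysis implied by the text, and the random stand-in data will not exhibit any real dimensional structure.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical response matrix: 48 respondents x 30 items (random stand-in).
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(48, 30)).astype(float)

# Extract 7 factors matching the proposed dimensions and inspect loadings;
# per the text, individual loadings >= 0.5 would support convergent validity.
fa = FactorAnalysis(n_components=7, random_state=0)
fa.fit(X)
loadings = fa.components_.T          # shape: (30 items, 7 factors)
print(np.abs(loadings).max(axis=1))  # strongest loading per item
```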

The significance of the gap between perceived satisfaction and the importance of each service quality dimension can be obtained from t-test results. The relationship between service quality and levels of customer satisfaction can be tested by conducting a regression analysis (Loke, Taiwo, Salim and Downe, 2011).
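A minimal sketch of such a regression, on invented per-respondent scores, might look as follows; the variable names follow the OPSQ/OCS codes defined earlier.

```python
import numpy as np
from scipy import stats

# Illustrative per-respondent scores (made up): overall service quality
# (OPSQ) and overall customer satisfaction (OCS) ratings on a 1-5 scale.
opsq = np.array([4, 3, 5, 4, 2, 4, 3, 5, 4, 3])
ocs  = np.array([3, 3, 4, 4, 2, 3, 3, 5, 4, 2])

# Simple linear regression OCS ~ OPSQ; the slope's p-value tests whether
# service quality significantly predicts satisfaction.
result = stats.linregress(opsq, ocs)
print(f"slope = {result.slope:.3f}, r^2 = {result.rvalue**2:.3f}, "
      f"p = {result.pvalue:.4f}")
```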

9. Research Sample
This study is conducted on the basis of a pilot survey sample. The statistical population of the present research (the targeted population) is defined as: staff from the Deanship of e-Transactions and Communication, the Deanship of Admissions and Registration, the Deanship of Admission and Registration and Student Affairs in the province south of Riyadh, and the Deanship of Graduate Studies, for determining the service provider sample size; and faculty, staff and students of the College of Administrative Sciences and the College of Computer Science and Information Systems, for determining the service requester sample size.

It is proposed either (a) to use a random sampling technique, treating the data as continuous, or (b) to use a stratified sampling technique, assuming categorized samples, following the statistical guidelines discussed in this paper.

Which of these assumptions is confirmed will depend on the findings of the pilot survey.

So, in order to estimate the size of the statistical sample, a pilot survey based study is required. The size of the statistical sample can then be estimated from the data collected in the pilot survey using the Cochran formula (Barlett, Kotrlik and Higgins, 2001). Cochran's sample size formula for continuous data is:

n0 = t² × s² / d²

where
n0 = the required sample size;
t = the value for the selected alpha level of 0.025 in each tail = 1.96 (an alpha level of 0.05 indicates the level of risk the researcher is willing to take that the true margin of error may exceed the acceptable margin of error);
s = the estimate of the standard deviation in the population = nnn (the estimate of the standard deviation for a 5-point scale is calculated as 5 [the inclusive range of the scale] divided by 4 [the number of standard deviations that include almost all, approximately 98%, of the possible values in the range]);
d = the acceptable margin of error for the mean being estimated = nnn (number of points on the primary scale × acceptable margin of error; points on the primary scale = 5; acceptable margin of error = nnn [the error the researcher is willing to accept]).

If the sample size exceeds 5% of the population (targeted population × 0.05 = nn), Cochran's correction formula should be used to calculate the estimated minimum returned sample size:

n = n0 / (1 + n0/targeted population)

When oversampling has to be considered, the size can be estimated as follows: assuming an anticipated return rate of 65%, the adjusted sample size (allowing for lost questionnaires, ignored questionnaires, respondents forgetting to answer, etc.) is nadj = n/0.65, where the 65% figure is based on prior research experience (Cochran, 1977).
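Since the paper leaves its own parameter values as placeholders (nnn), the following sketch simply wires the three formulas together with hypothetical inputs; the example values for s, d and the population size are assumptions for illustration only.

```python
import math

def cochran_sample_size(t, s, d, population, return_rate=0.65):
    """Cochran's sample size for continuous data, with the small-population
    correction and an oversampling adjustment for the expected return rate."""
    n0 = (t ** 2) * (s ** 2) / (d ** 2)
    # Apply Cochran's correction when n0 exceeds 5% of the population.
    n = n0 / (1 + n0 / population) if n0 > 0.05 * population else n0
    return {"n0": n0, "n": n, "n_adjusted": n / return_rate}

# Illustrative inputs only: t = 1.96 (alpha = .05), s = 5/4 = 1.25 for a
# 5-point scale, d = 5 * 0.03 = 0.15 for a hypothetical 3% acceptable
# margin of error, and a hypothetical targeted population of 500.
sizes = cochran_sample_size(t=1.96, s=1.25, d=0.15, population=500)
print({k: math.ceil(v) for k, v in sizes.items()})
```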

10. Plan for Hypothesis Testing
Hypothesis testing, or significance testing, is a method for testing a claim or hypothesis about a parameter in a population using data measured in a sample (Privitera, 2012). The steps involved in hypothesis testing (Keller, 2009; Groebner et al., 2005) are:

(i) There are two hypotheses: 1. the null hypothesis and 2. the alternate hypothesis.
(ii) Testing a hypothesis begins with the assumption that the null hypothesis is true.
(iii) Determine the mean value of the sample for constructing the null and alternative hypotheses from the service provider population. For example, the null hypothesis can be expressed as: H0 = the average portal service quality is at least EQUAL TO the mean value calculated from the providers' sample; or equivalently: the population mean difference between service provider and service requester is assumed to be zero.
(iv) The goal of the process is to determine whether there is enough evidence to infer that the alternate hypothesis is true. The alternative hypothesis is the opposite of the null hypothesis. For example, it can be expressed as: H1 = the average portal service quality is NOT EQUAL TO the mean value calculated from the providers' sample; or: the population mean difference between service provider and service requester is not equal to zero. It is set as NOT EQUAL TO so as not to introduce bias, since this is precisely what is to be determined. The average portal service quality calculated as the mean value from the service providers is set as the benchmark value: the mean value of service quality calculated from the service providers is compared against the mean value of service quality calculated from the service requesters. Two possible conclusions can be drawn here: (1) less than (SQ_P < SQ_R) concludes that there is enough evidence to support the alternative hypothesis, meaning the portal does not possess better service quality; (2) greater than (SQ_P > SQ_R) concludes that there is not enough evidence to support the alternative hypothesis, meaning the portal possesses better service quality.
(v) Two possible errors can be made in any test: a Type I error occurs when we reject a true null hypothesis, and a Type II error occurs when we fail to reject a false null hypothesis. Their probabilities are P(Type I error) = α and P(Type II error) = β.
(vi) Collect and calculate the data, compute the sample mean, summarize the data into a test statistic, and draw inferences from that test statistic in terms of the z-value and/or t-value and the p-value.

11. Data Analysis and Results

11.1 Discussion on Internal Consistency
High-quality tests are important for evaluating the validity and reliability of the instrument proposed in a research study. Internal consistency concerns the interrelatedness of the proposed set of items: it describes the extent to which all the proposed items measure the same concept or construct. Cronbach's Alpha is a commonly employed index of test reliability, ensuring that the research study is reliable and valid with respect to the proposed service quality measurement items (Tavakol and Dennick, 2011). The proposed WePoServQual instrument is considered valid for measuring web portal electronic service quality when the Cronbach's Alpha value exceeds 0.7 (Gliem and Gliem, 2003, p. 87). For this purpose, the Cronbach's Alpha value was calculated for two data sets, the service providers' data and the service requesters' data, using the Excel-based Reliability Calculator created by Siegle. Reliability was calculated for consistency against (i) overall service quality, (ii) the service quality gaps (viz. the service communication gap, service information gap, service standards gap and service performance gap) and (iii) the service quality dimensions.

Validating the internal consistency of the WePoServQual instrument (Cronbach's Alpha values; service provider values are for 5 subjects, with values for 48 manipulated subjects in parentheses):

Overall Service Quality: provider 0.884 (0.890); requester 0.921. The instrument reflects consistency against overall service quality. Inference: acceptable from both provider and requester perspectives.

Service quality gaps:
- Service Performance Gap: provider 0.571 (0.854); requester 0.795. Consistent against the service performance gap. Inference: acceptable from both perspectives.
- Service Standards Gap: provider 0.930 (0.937); requester 0.835. Consistent against the service standards gap. Inference: acceptable from both perspectives.
- Service Information Gap: provider 0.831 (0.831); requester 0.827. Q13 (item 13) is to be deleted. Consistent against the service information gap. Inference: acceptable from both perspectives.
- Service Communication Gap: provider -1 (-1.05); requester 0.706. No improvement even after deleting or adding items; inconsistent against the service communication gap from the provider's perspective, but consistent from the service requester's perspective. Inference: acceptable from the requester's side, unacceptable from the provider's side.

Service quality dimensions:
- Realization: provider 0.571 (0.854); requester 0.795. Consistent against Realization. Inference: acceptable from both sides.
- Convenience: provider 0.93 (0.942); requester 0.711. Consistent against Convenience. Inference: acceptable from both sides.
- Safety: provider 0.462 (0.557); requester 0.855. Q18 (item 18) deleted for the service provider perspective; no item deleted for the service requester perspective. Inference: acceptable from the requester side but not from the provider side.
- Usability: provider 0.857 (0.876); requester 0.747. Consistent against Usability. Inference: acceptable from both sides.
- Functional Usefulness: provider 0.825 (0.831); requester 0.827. Q13 (item 13) deleted. Consistent against Functional Usefulness. Inference: acceptable from both sides.
- Responsiveness: provider 0.75 (0.75); requester 0.652. Q7 (item 7) deleted for providers; no item deleted for service requesters. Inference: acceptable from the provider side and questionable from the requester side.
- Accessibility: provider -1.838 (-1.025); requester 0.600. Inconsistent against Accessibility; item 4 deleted from the service provider perspective, no item deleted from the service requester perspective. Requester: questionable; provider: unacceptable. Inference: questionable overall, but retained as acceptable since it contributes positively to the service communication gap as well as to overall service quality.

The providers' survey data analysis of the EDUGATE portal of King Saud University reported an alpha value of 0.884 for 5 subjects and 0.890 for 48 randomly manipulated subjects; the requesters' survey data analysis reported a value of 0.921 for 48 subjects. More importantly, alpha is grounded in the tau-equivalent model, which assumes that each test item measures the same latent trait on the same scale. As the number of test items (30) is significant, the assumption of tau-equivalence is not violated and the reliability is not underestimated. Our proposed model, with its set of 30 items, is therefore a valid and reliable e-service quality instrument for measuring the web portal service quality of educational portals.

11.2 Discussion on the Difference between Customer Expectations and Customer Perceptions
Initially, two stratified samples were planned and collected (from the providers' perspective and the requesters' perspective). However, the providers' sample was not considered further, as only 5 subjects responded, while 48 service requester (customer) responses were obtained from the online survey (via QuestionPro). So only the customers' data were considered for hypothesis testing. QuestionPro provides default calculations of the mean, standard deviation and standard error for all the statements posed on the 1-5 Likert scale, for questions on other ranges of scale and for dichotomous questions. The focus of the statistical calculations, however, was limited to validating the proposed WePoServQual instrument and evaluating the EDUGATE portal with it. After the WePoServQual instrument was validated for internal consistency, we considered the mean value of the 30 items of the instrument as the benchmark against which to validate the customer responses on each of these 30 items. The results show a negative gap score, meaning the customers perceived better service quality from the EDUGATE portal than expected (the expectation being set as the overall mean of the perceived scores). The service quality of the EDUGATE portal matched the expectations of the customers with 81% probability at the 95% two-tailed confidence level from the service quality dimensions perspective, and with 69% probability at the 95% two-tailed confidence level from the service quality gaps perspective. We applied a triangulation technique by including two questions in the questionnaire: in your opinion, (i) what is the overall service quality, and (ii) what is your satisfaction with the EDUGATE portal services? The overall service quality was reported as acceptable at the 95% confidence level; however, customer satisfaction was not reported. A report was generated to find the relationship between overall service quality and customer satisfaction using the report wizard of QuestionPro, by pivoting service quality against customer satisfaction. The chi-square based results indicated that there is no relationship between the two. As a further part of the triangulation, z-statistics were also generated for this relationship, treating each of overall service quality and customer satisfaction as a variable; the results match the chi-square test results, indicating no relationship between overall service quality and customer satisfaction.
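The chi-square test of independence described above can be reproduced in outline as follows; the contingency table is invented, since the study's actual QuestionPro pivot data are not given.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table cross-tabulating overall service quality
# (rows) against overall satisfaction (columns); counts are illustrative,
# not the study's QuestionPro pivot data.
table = np.array([[5, 7, 4],
                  [6, 9, 8],
                  [3, 4, 2]])

chi2, p, dof, expected = chi2_contingency(table)
# A large p-value would mean no evidence of a relationship between
# overall service quality and customer satisfaction.
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
```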

These results comply with the arguments of Somaratna, Peiris and Jayasundara (2010), as discussed in Section 2, Aim and Objectives of the Research, of this document.

12. Conclusions
Based on the results of the hypothesis testing, it can be concluded that the WePoServQual instrument can be used effectively for measuring and evaluating portal-based online service quality.

However, in order to claim its consistency more broadly, it should also be tested in other domains, such as online banking services and online government services.

References
Abdullah, F. (2006), "The development of HEdPERF: a new measuring instrument of service quality for the higher education sector", International Journal of Consumer Studies, Vol. 30, Iss. 6, pp. 569-581.
Al-Mushasha, N. F. and Hassan, S. (2011), "Chapter 19: A Model for Mobile Learning Service Quality in University Environment", in Khalil and Weippl (Eds.), Innovations in Mobile Multimedia Communications and Applications: New Technologies, IGI Global, USA.
Altman, E. and Hernon, P. (1998), "Service quality and customer satisfaction do matter", American Libraries, Vol. 29, No. 7, pp. 53-55.
AlSudairi (2012), "E-Service Quality Strategy: Achieving Customer Satisfaction in Online Banking", Journal of Theoretical and Applied Information Technology, Vol. 38, No. 1, pp. 6-24.
AlSudairi and Vasista (2013), "Suggesting Service Quality Model for E-Governance", ECEG13, June 13-14, Como, Italy, full paper accepted.
Anonymous (Undated), "Hypothesis Testing", [Online] www.palgrave.com/business/taylor/taylor1/lecturers/lectures/handouts/hChap7.doc (accessed March 18, 2013).
Barlett, J. E., Kotrlik, J. W. and Higgins, C. C. (2001), "Organisational Research: Determining Appropriate Sample Size in Survey Research", Information Technology, Learning and Performance Journal, Vol. 19, No. 1, pp. 43-50.
Bertram, D. (2006), "Likert Scales", CPSC 681 Topic Report, pp. 1-10, [Online] http://poincare.matf.bg.ac.rs/~kristina//topic-dane-likert.pdf (accessed March 24, 2013).
Bryman, A. (2008), Social Research Methods, 3rd Edition, Oxford University Press.
Cap 252 (2010), Cap 252 Project, http://cap252ksu.files.wordpress.com/2010/09/king-saud-university.pdf (accessed March 19, 2013).
Cochran, W. G. (1977), Sampling Techniques, 3rd Edition, John Wiley & Sons, New York.
Creswell, J. W. (2008), Educational Research: Planning, Conducting and Evaluating Quantitative and Qualitative Research, 3rd Edition, Pearson Prentice Hall, Upper Saddle River, NJ.
EDUGATE Portal Address, EDUGATE.ksu.edu.sa, http://www.siteglimpse.com/EDUGATE.ksu.edu.sa (accessed March 19, 2013).
Filiz, Z. (2007), "Service quality of university library: a survey amongst students at Osmangazi University and Anadolu University", Ekonometri ve Istatistik, Say. 1, pp. 1-19, http://eidergisi.istanbul.edu.tr/sayi5/iueis5m1.pdf (accessed March 19, 2013).
Gliem, J. A. and Gliem, R. R. (2003), "Calculating, Interpreting and Reporting Cronbach's Alpha Reliability Coefficient for Likert-Type Scales", Midwest Research-to-Practice Conference in Adult, Continuing, and Community Education.
Groebner, D. F. et al. (2005), "Chapter 10: Hypothesis Testing", Business Statistics: A Decision-Making Approach, 6th Edition, Prentice-Hall, Inc.
Hair, J. F. Jr., Anderson, R. E., Tatham, R. L. and Black, W. C. (1998), Multivariate Data Analysis, 5th Edition, Prentice-Hall International, Upper Saddle River, NJ.
Herrera, M., Moraga, A., Caballero, I. and Calero, C. (2010), "Quality in Use Model for Web Portals (QiUWeP)", Proceedings of the 10th International Conference on Current Trends in Web Engineering, Springer-Verlag, [Online] http://gplsi.dlsi.ua.es/congresos/qwe10/fitxers/QWE10_Herrera.pdf (accessed March 26, 2013).
Holdford, D. and Reinders, T. (2001), "Development of an instrument to assess student perceptions of the quality of pharmaceutical education", American Journal of Pharmaceutical Education, Vol. 65, pp. 125-131.
Hu, C., Zhao, Y. and Guo, M. (2009), "AHP and CA Based Evaluation of Website Information Service Quality: An Empirical Study on High-Tech Industry Information Centre Web Portals", Journal of Service Science and Management, Vol. 2, No. 3, pp. 168-180.
IT Student Guide V5 (2012, p. 12), "Academic Affairs: KSU Registration System (EDUGATE)", http://ccis.ksu.edu.sa/sites/ccis.ksu.edu.sa/files/Student_Guide%2520V5%2520(Nov-2012).pdf (accessed March 19, 2012).
Kabir, M. H. and Carlsson, T. (2010), "Service Quality - Expectations, Perceptions and Satisfaction about Service Quality at Destination Gotland: A Case Study", Master's thesis in Business Administration, Gotland University, Sweden.
Keller, G. (2009), "Chapter 11: Introduction to Hypothesis Testing", Statistics for Management and Economics, 9th Edition, South-Western Cengage Learning.
Loiacono, E. T., Watson, R. T. and Goodhue, D. L. (2002), "WEBQUAL: A Measure of Web Site Quality", Marketing Educators' Conference: Marketing Theory and Applications, Vol. 13, pp. 432-437.
Loke, S., Taiwo, A. A., Salim, H. M. and Downe, A. G. (2011), "Service Quality and Customer Satisfaction in a Telecommunication Service Provider", International Conference on Financial Management and Economics, IPEDR Vol. 11, IACSIT Press, Singapore.
Lovelock, C. and Wirtz, J. (2007), Services Marketing: People, Technology, Strategy, Pearson Prentice Hall.
MoHE web site (2010, August 4), King Saud University, http://www.mohe.gov.sa/en/studyinside/Government-Universities/Pages/KSU.aspx
Mohammed, A. A. S. and Alhamadani, S. Y. M. (2011), "Service Quality Perspectives and Customer Satisfaction in Commercial Banks Working in Jordan", Middle Eastern Finance and Economics, Iss. 14, EuroJournals Publishing.
Nunnally, J. (1967), Psychometric Theory, McGraw-Hill, New York, NY.
Parasuraman, A., Zeithaml, V. A. and Berry, L. L. (1985), "A conceptual model of service quality and its implications for future research", Journal of Marketing, Vol. 49, No. 4, pp. 41-50.
Parasuraman, A., Berry, L. L. and Zeithaml, V. A. (1991), "Understanding customer expectations of service", MIT Sloan Management Review (April 15, 1991), http://sloanreview.mit.edu/article/understanding-customer-expectations-of-service/ (accessed March 18, 2013).
Privitera, G. J. (2012), "Chapter 8: Introduction to Hypothesis Testing", Student Study Guide with SPSS Workbook for Statistics for the Behavioural Sciences, SAGE Publications, Inc.
Pyrczak, F. and Bruce, R. R. (2011), Writing Empirical Research Reports: A Basic Guide for Students of the Social and Behavioural Sciences, 7th Edition, Pyrczak Publishing, 162 pages.
QuestionPro web site (2013), "How it works?", http://www.questionpro.com/home/howItWorks.html (accessed March 19, 2013).
Sahney, S., Banwet, D. K. and Karunes, S. (2004), "Conceptualising total quality management in higher education", The TQM Magazine, Vol. 16, pp. 145-159.
Sahut, J. and Kucerova, Z. (2003), "Enhance Internet Banking Service Quality with Quality Function Deployment Approach", The Journal of Internet Banking and Commerce, November, Vol. 8, No. 2, http://www.arraydev.com/commerce/jibc/0311-09.htm (accessed March 18, 2013).
Santos, J. R. (1999), "Cronbach's Alpha: A Tool for Assessing the Reliability of Scales", Journal of Extension, Vol. 37, No. 2.
Sekaran, U. and Bougie, R. (2010), Research Methods for Business: A Skill Building Approach, 5th Edition, Wiley & Sons Inc., NY, 448 pages.
Seth, N., Deshmukh, S. G. and Vrat, P. (2005), "Service quality models: a review", International Journal of Quality & Reliability Management, Vol. 22, Iss. 9, pp. 913-949.
Shaik, N., Lowe, S. and Pinegar, K. (2006), "DL-sQUAL: A Multiple-Item Scale for Measuring Service Quality of Online Distance Learning Programs", Online Journal of Distance Learning Administration, Vol. IX, No. II, University of West Georgia Distance Education Centre, [Online] http://www.westga.edu/~distance/ojdla/summer92/shaik92.htm (accessed March 26, 2013).
Sherry, C., Bhat, R., Beaver, B. and Linng, A. (2004), "Students as customers: the expectations and perceptions of local and international students", HERDSA Conference Proceedings.
Siegle, D., Reliability Calculator, [Online] http://www.gifted.uconn.edu/siegle/research/Instrument%20Reliability%20and%20Validity/reliabilitycalculator2.xls (accessed March 23, 2013).
Somaratna, S., Peiris, C. N. and Jayasundara, C. (2010), "User expectations versus user perception of service quality in university libraries: a case study", ICULA 2010, [Online] http://archive.cmb.ac.lk/research/bitstream/70130/168/1/ccj4.pdf (accessed March 19, 2013).
Statmyweb (2012), EDUGATE.ksu.edu.sa, http://www.statmyweb.com/site/edugate.ksu.edu.sa (accessed March 19, 2013).
Tan, C., Benbasat, I. and Cenfetelli, R. T. (2013), "IT-Mediated Customer Service Content and Delivery in Electronic Governments: An Empirical Investigation of the Antecedents of Service Quality", MIS Quarterly, Vol. 37, No. 1, pp. 77-109.
Tashakkori, A. and Creswell, J. W. (2007), "The new era of mixed methods", Journal of Mixed Methods Research, Vol. 1, No. 1, pp. 3-7.
Tate et al. (2007), "Perceived Service Quality in a University Web Portal: Revising the E-Qual Instrument", Proceedings of the 40th Annual Hawaii International Conference on System Sciences (HICSS'07), IEEE.
Tate et al. (2009), "Chapter 4: Stakeholder Expectations of Service Quality in a University Web Portal", in D. Oliver et al. (Eds.), Self-Service in the Internet Age, DOI 10.1007/978-1-84800-207-4_4, Springer-Verlag London Limited.
Tavakol, M. and Dennick, R. (2011), "Making sense of Cronbach's alpha", International Journal of Medical Education, Vol. 2, pp. 53-55.
Tojib, D. R., Sugianto, L. and Sendjaya, S. (2008), "User satisfaction with business-to-employee portals: conceptualization and scale development", European Journal of Information Systems, Vol. 17, pp. 649-667.
Wacker, J. G. (2004), "A theory of formal conceptual definitions: developing theory-building measurement instruments", Journal of Operations Management, Vol. 22, pp. 629-650.
Wang, I. and Shieh, C. (2006), "The relationship between service quality and customer satisfaction: the example of CJCU library", Journal of Information & Optimization Sciences, Vol. 27, pp. 193-209.
Weaver, B. (2011), "Hypothesis Testing Using z- and t-tests", http://www.angelfire.com/wv/bwhomedir/notes/z_and_t_tests.pdf (accessed March 19, 2013).
Yang, Z. and Fang, X. (2004), "Online service quality dimensions and their relationships with satisfaction", International Journal of Service Industry Management, Vol. 15, No. 3, pp. 302-326.
Zeithaml, V., Parasuraman, A. and Malhotra, A. (2002), "Service quality delivery through web sites: a critical review of extant knowledge", Journal of the Academy of Marketing Science, Vol. 30, pp. 362-375.
Zeithaml, V. A., Parasuraman, A. and Malhotra, A. (2002), "An Empirical Examination of the Service Quality-Value-Loyalty Chain in Electronic Channels", Working Paper, University of North Carolina.
