
Master of Business Administration - MBA Semester 3

MB0050 Research Methodology - 4 Credits (Book ID: B1206) Assignment Set - 2 (60 Marks) Note: Each question carries 10 Marks. Answer all the questions.

Q-1. Differentiate between nominal, ordinal, interval and ratio scales with an example of each.

Answer:

1. Nominal measurement
This level of measurement consists of assigning numerals or symbols to different categories of a variable. The classification of applicants to an MBA program as male or female is an example of nominal measurement. The numerals or symbols are just labels and have no quantitative value; the number of cases under each category is counted. Nominal measurement is therefore the simplest level of measurement. It does not have characteristics such as order, distance or arithmetic origin.

2. Ordinal measurement
In this level of measurement, persons or objects are assigned numerals which indicate ranks with respect to one or more properties, either in ascending or descending order.
Example: Individuals may be ranked according to their socio-economic class, which is measured by a combination of income, education, occupation and wealth. The individual with the highest score might be assigned rank 1, the next highest rank 2, and so on, or vice versa. The numbers in this level of measurement indicate only rank order and not equal distance or absolute quantities. This means that the distance between ranks 1 and 2 is not necessarily equal to the distance between ranks 2 and 3.

Ordinal scales may be constructed using rank order, rating and paired comparisons. Variables that lend themselves to ordinal measurement include preferences, ratings of organizations and economic status. Statistical techniques that are commonly used to analyze ordinal scale data are the median and rank order correlation coefficients.

3. Interval measurement
This level of measurement is more powerful than the nominal and ordinal levels of measurement, since it has one additional characteristic: equality of distance. However, it does not have an origin or a true zero. This implies that it is not possible to multiply or divide the numbers on an interval scale.
Example: The Centigrade or Fahrenheit temperature scale is an example of the interval level of measurement. A temperature of 50 degrees is exactly 10 degrees hotter than 40 degrees and 10 degrees cooler than 60 degrees. Since interval scales are more powerful than nominal or ordinal scales, they also lend themselves to more powerful statistical techniques, such as the standard deviation, product moment correlation, and t tests and F tests of significance.

4. Ratio measurement
This is the highest level of measurement and is appropriate when measuring characteristics which have an absolute zero point. This level of measurement has all three characteristics: order, distance and origin.
Examples: Height, weight, distance and area. Since there is a natural zero, it is possible to multiply and divide the numbers on a ratio scale. Apart from being able to use all the statistical techniques that are used with the nominal, ordinal and interval scales, techniques like the geometric mean and coefficient of variation may also be used. The main limitation of ratio measurement is that it cannot be used for characteristics such as leadership quality, happiness, satisfaction and other properties which do not have natural zero points.

The different levels of measurement and their characteristics may be summed up as follows:
Nominal: no order, distance or origin
Ordinal: order, but no distance or origin
Interval: both order and distance, but no origin
Ratio: order, distance and origin
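To make the distinction concrete, here is a small, hypothetical Python sketch (the variables and sample values are invented purely for illustration) showing which summary statistics suit each level of measurement:

```python
import statistics

# Nominal: only counting cases per category is meaningful
gender = ["male", "female", "female", "male", "female"]
counts = {g: gender.count(g) for g in set(gender)}

# Ordinal: only rank order is meaningful, so the median is an appropriate summary
socio_economic_rank = [1, 3, 2, 5, 4]          # 1 = highest rank
median_rank = statistics.median(socio_economic_rank)

# Interval: equal distances but no true zero, so mean and standard deviation apply
temperature_c = [40, 50, 60, 55, 45]
mean_temp = statistics.mean(temperature_c)
sd_temp = statistics.stdev(temperature_c)

# Ratio: a natural zero exists, so the coefficient of variation is also meaningful
weight_kg = [60, 72, 55, 80, 65]
coeff_of_variation = statistics.stdev(weight_kg) / statistics.mean(weight_kg)

print(counts, median_rank, mean_temp, sd_temp, round(coeff_of_variation, 3))
```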

Q-2. What are the types of Hypothesis? Explain the procedure for testing Hypothesis.

Answer: Various types of hypothesis:

1. Descriptive hypothesis: These are propositions that describe the characteristics (such as size, form, or distribution) of a variable. The variable may be an object, person, organization, situation or event. Some examples are: The rate of unemployment among arts graduates is higher than that of commerce graduates. Public enterprises are more amenable to centralized planning.

2. Relational hypothesis: These are propositions which describe the relationship between two variables. The relationship suggested may be a positive or negative correlation or a causal relationship.

Some examples: Families with higher incomes spend more on recreation. The lower the rate of job turnover in a work group, the higher the work productivity.

3. Causal hypothesis: These state that the existence of, or a change in, one variable causes or leads to an effect on another variable. The first variable is called the independent variable, and the latter the dependent variable. The researcher must consider the direction in which such relationships flow, i.e., which is the cause and which is the effect.

4. Working hypothesis: While planning the study of a problem, hypotheses are formed. Initially they may not be very specific. In such cases, they are referred to as working hypotheses, which are subject to modification as the investigation proceeds.

5. Null hypothesis:

These are hypothetical statements denying what is explicitly indicated in the working hypothesis. They are framed as negative statements.
For example:

There is no relationship between a family's income level and its expenditure on recreation. Null hypotheses are formulated for testing statistical significance, since this form is a convenient approach to statistical analysis. As the test would nullify the null hypothesis, they are so called. There is some justification for using null hypotheses. They conform to the qualities of detachment and objectivity to be possessed by a researcher. If he attempts to test hypotheses which he assumes to be true, it would appear as if he is not behaving objectively. The problem does not arise when he uses null hypotheses. Moreover, null hypotheses are more exact. It is easier to reject the contrary of a hypothesis than to confirm it with complete certainty. Hence the concept of the null hypothesis is found to be very useful.

6. Alternate hypothesis (Ha): This is a statement which is accepted after a null hypothesis is rejected on the basis of the test result (see the illustrative sketch at the end of this list). For example, if the null hypothesis that there is no relationship between the eye colour of husbands and wives is rejected, then the alternative hypothesis, that there is a relationship between the eye colour of husbands and wives, is automatically accepted.

7. Statistical hypothesis: These are statements about a statistical population. They are derived from a sample and are quantitative in nature, in that they are numerically measurable, e.g., Group A is older than Group B.

8. Common sense hypothesis: These represent common sense ideas. They state the existence of empirical uniformities perceived through day-to-day observations, e.g., soldiers from the upper class are less adjusted in the army than lower-class men; fresh students conform to the conventions set up by seniors.

9. Complex hypothesis: These aim at testing the existence of logically derived relationships between empirical uniformities. For example, concentric growth circles characterize a city.

10. Analytical Hypothesis:


These are concerned with the relationship of analytic variables. These hypotheses occur at the highest level of abstraction. They specify the relationship between changes in one property and changes in another.
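As a hedged illustration of how a null hypothesis and its alternative are handled in a statistical test, the following minimal Python sketch (the expenditure figures are hypothetical, and it assumes the scipy library is available) tests the null hypothesis that recreation expenditure does not differ between two income groups against the alternative that it does:

```python
from scipy import stats

# Hypothetical monthly recreation expenditure of two income groups
higher_income = [1200, 1500, 1100, 1700, 1400, 1600]
lower_income = [800, 950, 700, 1000, 850, 900]

# H0 (null hypothesis): the mean expenditures of the two groups are equal.
# Ha (alternative hypothesis): the mean expenditures differ.
t_stat, p_value = stats.ttest_ind(higher_income, lower_income)

alpha = 0.05   # chosen level of significance
if p_value < alpha:
    print(f"p = {p_value:.4f}: reject H0 and accept Ha")
else:
    print(f"p = {p_value:.4f}: fail to reject H0")
```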

Q-3. What are the advantages and disadvantages of Case Study Method? How is Case Study method useful to Business Research?

Answer:

Advantages of Case Study Method
The case study method is of particular value when a complex set of variables may be at work in generating observed results and intensive study is needed to unravel the complexities. For example, an in-depth study of a firm's top salespeople and a comparison with its worst salespeople might reveal characteristics common to stellar performers. Here again, the exploratory investigation is best served by an active curiosity and a willingness to deviate from the initial plan when findings suggest that new courses of inquiry might prove more productive. It is easy to see how the exploratory research objectives of generating insights and hypotheses would be well served by the use of this technique.

Disadvantages of Case Study Method


Blumer points out that, independently, case documents hardly fulfil the criteria of reliability, adequacy and representativeness, but to exclude them from any scientific study of human life would be a blunder, inasmuch as these documents are necessary and significant both for theory building and practice.

How is the Case Study method useful in Business Research?

Answer:

Case Study as a Method of Business Research
In-depth analysis of selected cases is of particular value to business research when a complex set of variables may be at work in generating observed results and intensive study is needed to unravel the complexities. For instance, an in-depth study of a firm's top salespeople and a comparison with the worst salespeople might reveal characteristics common to stellar performers. The exploratory investigator is best served by an active curiosity and a willingness to deviate from the initial plan when findings suggest that new courses of enquiry might prove more productive.

Making the Case Study Effective
John Dollard has proposed seven criteria for evaluating the adequacy of a case study, as follows:
i) The subject must be viewed as a specimen in a cultural series. That is, the case drawn out of its total context for the purposes of study must be considered a member of the particular cultural group or community. The scrutiny of the life histories of persons must be done with a view to identifying the community values, standards and their shared way of life.
ii) The organic motors of action must be socially relevant. That is, the actions of the individual cases must be viewed as a series of reactions to social stimuli or situations. In other words, the social meaning of behavior must be taken into consideration.
iii) The strategic role of the family group in transmitting the culture must be recognized. That is, where an individual is a member of a family, the role of the family in shaping his behavior must never be overlooked.
iv) The specific method of elaboration of organic material into social behavior must be clearly shown. That is, case histories that portray in detail how a basically biological organism, the man, gradually blossoms forth into a social person are especially fruitful.
v) The continuous related character of experience from childhood through adulthood must be stressed. In other words, the life history must be a configuration depicting the interrelationships between the person's various experiences.
vi) The social situation must be carefully and continuously specified as a factor. One of the important criteria for the life history is that a person's life must be shown as unfolding itself in the context of, and partly owing to, specific social situations.

vii) The life history material itself must be organized according to some conceptual framework; this in turn would facilitate generalizations at a higher level.

Q-4. What are the Primary and Secondary sources of Data?

Answer:

The sources of data may be classified into (a) primary sources and (b) secondary sources.

Primary Sources of Data
Primary sources are original sources from which the researcher directly collects data that have not been previously collected, e.g., collection of data directly by the researcher on brand awareness, brand preference, brand loyalty and other aspects of consumer behavior from a sample of consumers by interviewing them. Primary data are first-hand information collected through various methods such as observation, interviewing, mailing, etc.

Advantages of Primary Data
It is the original source of data.
It is possible to capture changes occurring over the course of time.
It is flexible to the advantage of the researcher.
Extensive research studies are based on primary data.

Disadvantages of Primary Data
Primary data are expensive to obtain.
They are time consuming to collect.
They require extensive and skilled research personnel.
They are difficult to administer.

Methods of Collecting Primary Data
Primary data are directly collected by the researcher from their original sources. In this case, the researcher can collect the required data precisely according to his research needs; he can collect them when he wants them and in the form he needs them. But the collection of primary data is costly and time consuming. Yet, for several types of social science research, the required data are not available from secondary sources and have to be gathered directly from primary sources. In such cases, where the available data are inappropriate, inadequate or obsolete, primary data have to be gathered. Such studies include socio-economic surveys, social anthropological studies of rural and tribal communities, sociological studies of social problems and social institutions, marketing research, leadership studies, opinion polls, attitudinal surveys, readership, radio listening and T.V. viewing surveys, knowledge-awareness-practice (KAP) studies, farm management studies, business management studies, etc.

There are various methods of data collection. A method is different from a tool: while a method refers to the way or mode of gathering data, a tool is an instrument used for the method. For example, a schedule is used for interviewing. The important methods are (a) observation, (b) interviewing, (c) mail survey, (d) experimentation, (e) simulation and (f) projective technique. Each of these methods is discussed in detail in subsequent sections in the later chapters.

Secondary Sources of Data
These are sources containing data which have been collected and compiled for another purpose. Secondary sources consist of readily available compendia and already compiled statistical statements and reports whose data may be used by researchers for their studies, e.g., census reports, annual reports and financial statements of companies, statistical statements, reports of government departments, annual reports on currency and finance published by the Reserve Bank of India, statistical statements relating to co-operatives and regional banks published by NABARD, reports of the National Sample Survey Organisation, reports of trade associations, publications of international organizations such as the UNO, IMF, World Bank, ILO, WHO, etc., and trade and financial journals, newspapers, etc. Secondary sources consist not only of published records and reports, but also of unpublished records. The latter category includes various records and registers maintained by firms and organizations, e.g., accounting and financial records, personnel records, registers of members, minutes of meetings, inventory records, etc.

Features of Secondary Sources

Though secondary sources are diverse and consist of all sorts of materials, they have certain common characteristics. First, they are readymade and readily available, and do not involve the trouble of constructing tools and administering them. Second, they consist of data over whose original collection and classification the researcher has no control; both the form and the content of secondary sources are shaped by others. Clearly, this is a feature which can limit the research value of secondary sources. Finally, secondary sources are not limited in time and space; that is, the researcher using them need not have been present when and where they were gathered.

Use of Secondary Data
Secondary data may be used in three ways by a researcher. First, some specific information from secondary sources may be used for reference purposes. For example, general statistical information on the number of co-operative credit societies in the country, their coverage of villages, their capital structure, volume of business, etc., may be taken from published reports and quoted as background information in a study on the evaluation of the performance of co-operative credit societies in a selected district/state. Second, secondary data may be used as benchmarks against which the findings of research may be tested: e.g., the findings of a local or regional survey may be compared with the national averages; the performance indicators of a particular bank may be tested against the corresponding indicators of the banking industry as a whole; and so on. Finally, secondary data may be used as the sole source of information for a research project. Such studies as securities market behavior, financial analysis of companies, trends in credit allocation in commercial banks, sociological studies on crimes, historical studies, and the like depend primarily on secondary data. Year books, statistical reports of government departments, reports of public organizations such as the Bureau of Public Enterprises, census reports, etc., serve as major data sources for such research studies.

Advantages of Secondary Data
Secondary sources have some advantages:

Secondary data, if available, can be secured quickly and cheaply. Once their sources, documents and reports are located, collection of data is just a matter of desk work. Even the tediousness of copying the data from the source can now be avoided, thanks to Xeroxing facilities.
A wider geographical area and a longer reference period may be covered without much cost. Thus, the use of secondary data extends the researcher's space and time reach.
The use of secondary data broadens the database from which scientific generalizations can be made.
Secondary data provide the environmental and cultural settings required for the study.
The use of secondary data enables a researcher to verify findings based on primary data. It readily meets the need for additional empirical support, and the researcher need not wait until additional primary data can be collected.

Disadvantages of Secondary Data
The use of secondary data has its own limitations. The most important limitation is that the available data may not meet our specific needs: the definitions adopted by those who collected the data may be different, units of measure may not match, and time periods may also differ. The available data may also not be as accurate as desired; to assess their accuracy we need to know how the data were collected. Further, secondary data may not be up to date and may become obsolete by the time they appear in print, because of the time lag in producing them. For example, population census data are published two or three years after compilation, and no new figures will be available for another ten years. Finally, information about the whereabouts of sources may not be available to all social scientists. Even if the location of the source is known, accessibility depends primarily on proximity. For example, most unpublished official records and compilations are located in the capital city, and they are not within easy reach of researchers based in far-off places.

Q-5. Differentiate between Schedules and Questionnaire. What are the alternative modes of sending Questionnaires?

Answer:

Schedule in Research Methodology
A schedule is a structured set of questions on a given topic which are asked by the interviewer or investigator personally. The order of the questions, the language of the questions and the arrangement of the parts of the schedule are not changed. However, the investigator can explain the questions if the respondent faces any difficulty. It contains direct questions as well as questions in tabular form. Schedules include open-ended and close-ended questions. Open-ended questions allow the respondent considerable freedom in answering, and the questions can be answered in detail. Close-ended questions have to be answered by the respondent by choosing an answer from the set of answers given under a question, just by ticking it. The following are the different types of schedules used by social scientists and anthropologists.
Village or community schedule: It is used by census researchers who collect general information on populations, occupations, etc.

Family or household schedule: It gives full demographic details of households, the status of individuals, and data on education, age, family relations, etc.
Opinion or attitude schedule: It is used to record the views of a population regarding an issue.

Questionnaire in Research Methodology
A questionnaire refers to a device for securing answers to questions by using a form which the respondent fills in by himself. It consists of a number of questions printed or typed in a definite order. These forms are mailed to the respondents, who are expected to read and understand the questions and reply to them by writing the relevant answers in the spaces provided. Ideally speaking, the respondent must answer the verbal stimulus and give a written or verbal response. It is totally devoid of any table. Its purpose is to collect information from respondents who are scattered over a vast area. Questionnaires include open-ended and close-ended questions. Open-ended questions allow the respondent considerable freedom in answering, and the questions can be answered in detail. Close-ended questions have to be answered by the respondent by choosing an answer from the set of answers given under a question, just by ticking it. The following are the different types of questionnaires used by social scientists and anthropologists.
Structured questionnaire: It includes definite, concrete and pre-determined questions which are prepared in advance.

Closed-form questionnaire: It is used when categorized data is required.

Pictorial questionnaire: It is used to promote interest in answering, by showing pictures on a particular theme.
Unstructured questionnaire: It is designed to obtain viewpoints, opinions and attitudes, and to show relationships and inter-connections between data which might escape notice under more mechanical types of interrogation.

A schedule, however, takes more time as compared to a questionnaire. A questionnaire has less data-collecting ability than a schedule. A questionnaire can cover a very wide field of data, whereas a schedule is a problem-oriented data-collecting method. A questionnaire speaks for itself and is self-explanatory, whereas a schedule has to be explained by the investigator.

DIFFERENCE BETWEEN QUESTIONNAIRES AND SCHEDULES
Both questionnaires and schedules are popularly used methods of collecting data in research surveys. There is much resemblance in the nature of these two methods, and this fact has made many people remark that, from a practical point of view, the two methods can be taken to be the same. But from the technical point of view there are differences between the two.

1. A questionnaire is generally sent through the mail to informants to be answered. Schedules are generally filled in by the research worker or enumerator, who can interpret the questions when necessary.
2. With a questionnaire, data collection is cheap. With schedules, data collection is more expensive, as money is spent on enumerators.
3. With a questionnaire, non-response is usually high, as many people do not respond. With schedules, non-response is very low because they are filled in by enumerators.
4. With a questionnaire, it is not clear who replies. With schedules, the identity of the respondent is known.
5. The questionnaire method is likely to be very slow, since many respondents do not return the questionnaire. With schedules, information is collected well in time.
6. No personal contact is possible in the case of a questionnaire. With schedules, direct personal contact is established.

Alternative Modes of Sending Questionnaires
There are some alternative methods of distributing questionnaires to the respondents. They are: (1) personal delivery, (2) attaching the questionnaire to a product, (3) advertising the questionnaire in a newspaper or magazine, and (4) newsstand inserts.
Personal Delivery: The researcher or his assistant may deliver the questionnaires to the potential respondents with a request to complete them at their convenience. After a day or two he can collect the completed questionnaires from them. Often referred to as the self-administered questionnaire method, it combines the advantages of the personal interview and the mail survey. Alternatively, the questionnaires may be delivered in person and the completed questionnaires may be returned by mail by the respondents.
Attaching Questionnaire to a Product: A firm test marketing a product may attach a questionnaire to the product and request the buyer to complete it and mail it back to the firm. The respondent is usually rewarded with a gift or a discount coupon.
Advertising the Questionnaire: The questionnaire, with instructions for completion, may be advertised on a page of a magazine or in a section of a newspaper. The potential respondent completes it, tears it out and mails it to the advertiser. For example, the committee on customer service in banks used this method for collecting information from the customers of commercial banks in India. This method may be useful for large-scale surveys on topics of common interest.

Q-6. Explain the various steps in processing of Data.

Answer: The various steps in the processing of data may be stated as:
Identifying the data structures
Editing the data
Coding and classifying the data
Transcription of data
Tabulation of data

I - Checking for Analysis
In the data preparation step, the data are prepared in a data format which allows the analyst to use modern analysis software such as SAS or SPSS. The major criterion in this step is to define the data structure. A data structure is a dynamic collection of related variables and can be conveniently represented as a graph whose nodes are labeled by variables. The data structure also defines the stages of the preliminary relationships between variables/groups that have been pre-planned by the researcher. Most data structures can be graphically presented to give clarity to the framed research hypothesis. A sample structure could be a linear structure, in which one variable leads to the next and, finally, to the resultant end variable. The identification of the nodal points and the relationships among the nodes can sometimes be a more complex task than estimated. When the task is complex, involving several types of instruments being collected for the same research question, the procedure for drawing the data structure would involve a series of steps. In several intermediate steps, the heterogeneous data structures of the individual data sets can be harmonized to a common standard and the separate data sets then integrated into a single data set. A clear definition of such data structures helps in the further processing of data.
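As a hedged sketch of how such a linear data structure might be represented (the variable names below are hypothetical, chosen only for illustration), each node in the graph points to the variable it is assumed to lead to:

```python
# A linear data structure: each variable (node) lists the variable it leads to,
# ending with the resultant end variable.
data_structure = {
    "advertising_exposure": ["brand_awareness"],
    "brand_awareness": ["brand_preference"],
    "brand_preference": ["purchase_intention"],
    "purchase_intention": [],            # resultant end variable
}

def downstream(variable, graph):
    """Return every variable that the given variable ultimately leads to."""
    chain = []
    for nxt in graph.get(variable, []):
        chain.append(nxt)
        chain.extend(downstream(nxt, graph))
    return chain

print(downstream("advertising_exposure", data_structure))
```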

II - Editing
The next step in the processing of data is editing of the data instruments. Editing is a process of checking to detect and correct errors and omissions. Data editing happens at two stages: one at the time of recording of the data and the second at the time of analysis of the data.

Data Editing at the Time of Recording of Data
Document editing and testing of the data at the time of data recording is done keeping the following questions in mind:
Do the filters agree or are the data inconsistent?
Have missing values been set to values which are the same for all research questions?
Have variable descriptions been specified?
Have labels for variable names and value labels been defined and written?
All editing and cleaning steps are documented, so that the redefinition of variables or later analytical modification requirements can be easily incorporated into the data sets.

III - Data Editing at the Time of Analysis of Data
Data editing is also a requisite before the analysis of data is carried out. This ensures that the data are complete in all respects before subjecting them to further analysis. Some of the usual checklist questions that a researcher can use for editing data sets before analysis are:
Is the coding frame complete?
Is the documentary material sufficient for the methodological description of the study?
Is the storage medium readable and reliable?
Has the correct data set been framed?
Is the number of cases correct?
Are there differences between the questionnaire, the coding frame and the data?
Are there undefined and so-called wild codes?
Have the first counts of the data been compared with the original documents of the researcher?
The editing step checks for the completeness, accuracy and uniformity of the data as created by the researcher.
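Checks of this kind can be partly automated. The following is a minimal, hypothetical Python sketch (the variables, valid codes and records are invented) that screens a small data set for missing values and undefined "wild" codes before analysis:

```python
# Each record holds one respondent's coded answers; None marks a missing value.
records = [
    {"id": 1, "gender": 1, "education": 3},
    {"id": 2, "gender": 2, "education": None},   # missing value
    {"id": 3, "gender": 9, "education": 2},      # 9 is an undefined "wild" code
]

# The coding frame lists the only codes defined for each variable.
coding_frame = {"gender": {1, 2}, "education": {1, 2, 3, 4}}

for record in records:
    for variable, valid_codes in coding_frame.items():
        value = record[variable]
        if value is None:
            print(f"Case {record['id']}: {variable} is missing")
        elif value not in valid_codes:
            print(f"Case {record['id']}: {variable} has wild code {value}")
```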

IV - Completeness
The first step of editing is to check whether there is an answer to all the questions/variables set out in the data set. If there is any omission, the researcher may sometimes be able to deduce the correct answer from other related data on the same instrument. If this is possible, the data set has to be rewritten on the basis of the new information. For example, the approximate family income can be inferred from answers to other probes such as the occupations of family members, sources of income, approximate spending, and the saving and borrowing habits of family members. If the information is vital and has been found to be incomplete, then the researcher can take the step of contacting the respondent personally again and soliciting the requisite data. If none of these steps can be resorted to, the data must be marked as missing.

V - Accuracy
Apart from checking for omissions, the accuracy of each recorded answer should be checked. A random check process can be applied to trace errors at this step. Consistency in responses can also be checked at this step; cross-verification of a few related responses helps in checking for consistency. The reliability of the data set depends heavily on this step of error correction. While clear inconsistencies should be rectified in the data sets, responses that are evidently false should be dropped from them.

VI - Uniformity
In editing data sets, another keen lookout should be for any lack of uniformity in the interpretation of questions and instructions by the data recorders. For instance, the responses towards a specific feeling could have been queried from a positive as well as a negative angle. While interpreting the answers, care should be taken to record the answer uniformly as a positive question response or as a negative question response, and to check for consistency in coding throughout the questionnaire/interview schedule response/data set. The final point in the editing of a data set is to maintain a log of all corrections that have been carried out at this stage. The documentation of these corrections helps the researcher to retain the original data set.

VII - Coding
The edited data are then subjected to codification and classification. The coding process assigns numerals or other symbols to the several responses of the data set. It is therefore a prerequisite to prepare a coding scheme for the data set; the recording of the data is done on the basis of this coding scheme. The responses collected in a data sheet vary: sometimes the response could be a choice among multiple options, sometimes it could be in terms of values, and sometimes it could be alphanumeric. If some codification is done to the responses collected at the recording stage itself, it is useful in the data analysis. When codification is done, it is imperative to keep a log of the codes allotted to the observations. This code sheet will help in the identification of variables/observations and the basis for such codification.

The first coding applied to primary data sets is to the individual observations themselves. This response sheet coding gives a benefit to the researcher in that verification, editing of recordings and further contact with respondents can be achieved without difficulty. The codification can be made at the time of distribution of the primary data sheets itself. The codes can be alphanumeric to keep track of where and to whom the sheets have been sent. For instance, if the data are collected from the public at different localities, the sheets distributed in a specific locality may carry a unique part code which is alphabetic; to this alphabetic code, a numeric code can be attached to distinguish the person to whom the primary instrument was distributed. This also helps the researcher to keep track of who the respondents are and who the probable respondents are from whom primary data sheets are yet to be collected. Even at a later stage, any specific queries on a specific response sheet can be clarified.
The variables or observations in the primary instrument would also need codification, especially when they are categorized. The categorization could be on a scale, i.e., most preferable to not preferable, or it could be very specific, such as gender classified as male and female. Certain classifications can lead to open-ended categories, such as an education classification: Illiterate, Graduate, Professional, Others (please specify). In such instances, the codification needs to be carefully done to include all possible responses under "Others (please specify)". If the preparation of an exhaustive list is not feasible, then it is better to create a separate variable for the "Others (please specify)" category and record all such responses under it.
Numeric Coding: Coding need not necessarily be numeric; it can also be alphabetic. Coding has to be compulsorily numeric when the variable is subject to further parametric analysis.
Alphabetic Coding: A mere tabulation, frequency count or graphical representation of the variable may be given with an alphabetic coding.
Zero Coding: A code of zero has to be assigned carefully to a variable. In many instances, when manual analysis is done, a code of 0 would imply no response from the respondents. Hence, if a value of 0 is to be given to specific responses in the data sheet, it should not lead to the same interpretation as a non-response. For instance, while there may be a tendency to give a code of 0 to a "no" answer, a coding other than 0 should then be given in the data sheet.
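A small, hypothetical sketch of such a coding scheme follows (the variables, codes, sheet identifiers and responses are invented): the raw answers are converted to numeric codes, and the scheme itself serves as the code log.

```python
# Code log: records which numeral stands for which response of each variable.
code_log = {
    "gender": {"Male": 1, "Female": 2},
    "education": {"Illiterate": 1, "Graduate": 2, "Professional": 3, "Others": 4},
}

# Raw responses; the alphanumeric sheet_id combines a locality code and a serial number.
raw_responses = [
    {"sheet_id": "A-001", "gender": "Female", "education": "Graduate"},
    {"sheet_id": "B-014", "gender": "Male", "education": "Others"},
]

# Apply the scheme: every alphabetic answer is replaced by its numeric code.
coded = [
    {
        "sheet_id": r["sheet_id"],
        "gender": code_log["gender"][r["gender"]],
        "education": code_log["education"][r["education"]],
    }
    for r in raw_responses
]
print(coded)
```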

VIII - Classification
When open-ended responses have been received, classification is necessary to code the responses. For instance, the income of the respondent could be an open-ended question; from all the responses, a suitable classification can be arrived at. A classification method should meet certain requirements or should be guided by certain rules.
First, classification should be linked to the theory and the aim of the particular study. The objectives of the study will determine the dimensions chosen for coding. The categorization should meet the information required to test the hypothesis or investigate the questions.
Second, the scheme of classification should be exhaustive; that is, there must be a category for every response. For example, the classification of marital status into three categories, viz., married, single and divorced, is not exhaustive, because responses like widowed or separated cannot be fitted into the scheme. Here, an open-ended question will be the best mode of getting the responses. From the responses collected, the researcher can fit a meaningful and theoretically supportive classification. The inclusion of an "Others" category tends to accommodate the scattered, infrequent responses from the data sheets, but the "Others" categorization has to be used carefully by the researcher, since it tends to defeat the very purpose of classification, which is to distinguish between observations in terms of the properties under study. The classification "Others" is most useful when a minority of respondents in the data set give varying answers. For instance, the newspaper reading habits of respondents may be surveyed: 95 respondents out of 100 could be easily classified into 5 large reading groups, while 5 respondents could have given unique answers. These answers, rather than being separately considered, could be clubbed under the "Others" heading for a meaningful interpretation of the respondents' reading habits.
Third, the categories must also be mutually exclusive, so that each case is classified only once. This requirement is violated when some of the categories overlap or different dimensions are mixed up. The number of categories for a specific question/observation at the coding stage should be the maximum permissible, since reducing the categories at the analysis level is easier than splitting an already classified group of responses. However, the number of categories is limited by the number of cases and the anticipated statistical analyses that are to be used on the observations.
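As a hedged illustration of an exhaustive and mutually exclusive classification, the sketch below (the income boundaries are hypothetical) bins open-ended income responses so that every response falls into exactly one category:

```python
def income_class(monthly_income):
    """Classify an open-ended income response into one of four exhaustive,
    mutually exclusive categories (boundaries chosen only for illustration)."""
    if monthly_income < 10000:
        return "Low"
    elif monthly_income < 30000:
        return "Middle"
    elif monthly_income < 60000:
        return "Upper middle"
    else:
        return "High"

responses = [8500, 25000, 42000, 90000, 29999]
print([income_class(r) for r in responses])
```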

IX - Transcription of Data
When the observations collected by the researcher are not very large, the simple inferences which can be drawn from the observations can be transferred to a data sheet, which is a summary of all responses on all observations from a research instrument. The main aim of transcription is to minimize the shuffling between several responses and several observations. Suppose a research instrument contains 120 responses and the observations have been collected from 200 respondents: a simple summary of one response from all 200 observations would require shuffling through 200 pages, and the process is quite tedious if several summary tables are to be prepared from the instrument. The transcription process helps in the presentation of all responses and observations on data sheets, which can help the researcher to arrive at preliminary conclusions as to the nature of the sample collected, etc. Transcription is, hence, an intermediary process between data coding and data tabulation.

Methods of Transcription
The researcher may adopt manual or computerized transcription. Long worksheets, sorting cards or sorting strips may be used by the researcher to manually transcribe the responses. Computerized transcription can be done using a database package such as a spreadsheet, text file or other database. The main requisite for a transcription process is the preparation of the data sheets, where the observations are the rows of the database and the responses/variables are the columns of the data sheet. Each variable should be given a label so that long questions can be covered under the label names. The label names are thus the links to specific questions in the research instrument. For instance, opinion on consumer satisfaction could be identified through a number of statements (say 10); the data sheet does not contain the details of the statements, but gives a link to the questions in the research instrument through variable labels. In this instance the variable names could be given as CS1, CS2, CS3, CS4, CS5, CS6, CS7, CS8, CS9 and CS10, the label CS indicating consumer satisfaction and the numbers 1 to 10 indicating the statements measuring consumer satisfaction. Once the labeling process has been done for all the responses in the research instrument, the transcription of the responses is done.

Manual Transcription
When the sample size is manageable, the researcher need not use any computerized process to analyze the data and may prefer manual transcription and analysis of responses. Manual transcription would be chosen when the number of responses in a research instrument is very small, say 10 responses, and the number of observations collected is within 100. A transcription sheet of 100 x 50 (assuming each response has 5 options) rows/columns can be easily managed by a researcher manually. If, on the other hand, the variables in the research instrument number more than 40 and each variable has 5 options, it leads to a worksheet of size 100 x 200, which might not be easily managed manually. In the second instance, if the number of responses is fewer than 30, the worksheet could still be attempted manually. In all other instances, it is advisable to use a computerized transcription process.

Long Worksheets

Long worksheets require quality paper, preferably chart sheets, thick enough to last several usages. These worksheets are normally ruled both horizontally and vertically, allowing responses to be written in the boxes. If one sheet is not sufficient, the researcher may use multiple ruled sheets to accommodate all the observations. The headings of responses, which are the variable names, and their coding (options) are filled in the first two rows. The first column contains the codes of the observations. For each variable, the responses from the research instrument are then transferred to the worksheet by ticking the specific option that the observer has chosen. If the variable cannot be coded into categories, requisite space for recording the actual response of the observer should be provided in the worksheet. The worksheet can then be used for preparing the summary tables or can be subjected to further analysis of data. The original research instruments can now be kept aside as safe documents, and copies of the data sheets can also be kept for future reference. As discussed in the editing section, the transcribed data have to be subjected to testing to ensure error-free transcription. Transcription can be made as and when an edited instrument is ready for processing. Once all schedules/questionnaires have been transcribed, the frequency tables can be constructed straight from the worksheet. Other methods of manual transcription include the adoption of sorting strips or cards. In olden days, data entry and processing were made through mechanical and semi-automatic devices such as key punches using punch cards. The arrival of computers has changed the data processing methodology altogether.

X - Tabulation
The transcription of data can be used to summarize and arrange the data in compact form for further analysis. This process is called tabulation.
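A minimal sketch of tabulation follows (the coded responses are hypothetical; plain Python is used here, although packages such as SPSS, SAS or a spreadsheet would normally be employed). It summarizes the coded responses to one question into a simple frequency table:

```python
from collections import Counter

# Coded responses to one question (1 = Yes, 2 = No, 0 = no response)
responses = [1, 2, 1, 1, 0, 2, 1, 2, 1, 1]

frequency = Counter(responses)
total = len(responses)

print("Code  Frequency  Percent")
for code in sorted(frequency):
    print(f"{code:>4}  {frequency[code]:>9}  {frequency[code] / total:>7.1%}")
```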
