
Sikkim Manipal University

- MBA -

MB0050 Research Methodology

Chhavi, Reg. No. 521038725


Semester: 3 - Assignment Set: 1

1. a. Differentiate between nominal, ordinal, interval and ratio scales, with an example of each.

Answer. The "levels of measurement", or scales of measure, are expressions that typically refer to the theory of scale types developed by the psychologist Stanley Smith Stevens. Stevens proposed his theory in a 1946 Science article titled "On the theory of scales of measurement". In that article, Stevens claimed that all measurement in science was conducted using four different types of scales, which he called "nominal", "ordinal", "interval" and "ratio".

The theory of scale types

Stevens (1946, 1951) proposed that measurements can be classified into four different types of scales: nominal, ordinal, interval, and ratio. These are summarized in the table below.

Scale type | Permissible statistics | Admissible scale transformation | Mathematical structure
Nominal (also denoted as categorical) | mode, chi-square | one-to-one (equality (=)) | standard set structure (unordered)
Ordinal | median, percentile | monotonic increasing (order (<)) | totally ordered set
Interval | mean, standard deviation, correlation, regression, analysis of variance | positive linear (affine) | affine line
Ratio | all statistics permitted for interval scales plus: geometric mean, harmonic mean, coefficient of variation, logarithms | positive similarities (multiplication) | field

Nominal scale

At the nominal scale, i.e., for a nominal category, one uses labels; for example, rocks can be generally categorized as igneous, sedimentary and metamorphic. For this scale, the valid operations include equivalence and set membership. Nominal measures offer names or labels for certain characteristics. Variables assessed on a nominal scale are called categorical variables; see also categorical data.
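
As a small illustration (hypothetical data, Python), the only meaningful measure of central tendency for a nominal variable is its mode, i.e., the most frequent category:

    from collections import Counter

    # Hypothetical nominal data: rock types recorded in a field sample.
    rocks = ["igneous", "sedimentary", "igneous", "metamorphic",
             "sedimentary", "igneous"]

    counts = Counter(rocks)               # frequency of each category
    mode, freq = counts.most_common(1)[0]

    print(counts)                         # Counter({'igneous': 3, 'sedimentary': 2, 'metamorphic': 1})
    print("Mode:", mode, "observed", freq, "times")
    # A mean or median of rock types is undefined: the labels carry no order.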

Stevens (1946, p. 679) must have known that claiming that nominal scales measure obviously non-quantitative things would attract criticism, so he invoked his theory of measurement to justify nominal scales as measurement: the use of numerals as names for classes is an example of the assignment of numerals according to rule. The rule is: do not assign the same numeral to different classes or different numerals to the same class. Beyond that, anything goes with the nominal scale.

The central tendency of a nominal attribute is given by its mode; neither the mean nor the median can be defined. We can use a simple example of a nominal category: first names. Looking at nearby people, we might find one or more of them named Aamir. Aamir is their label, and the set of all first names is a nominal scale. We can only check whether two people have the same name (equivalence) or whether a given name is on a certain list of names (set membership), but it is impossible to say which name is greater or less than another (comparison) or to measure the difference between two names. Given a set of people, we can describe the set by its most common name (the mode), but we cannot provide an "average name" or even the "middle name" among all the names. However, if we decide to sort the names alphabetically (or to sort them by length, or by how many times they appear in the US Census), we begin to turn this nominal scale into an ordinal scale.

Ordinal scale

Rank-ordering data simply puts the data on an ordinal scale. Ordinal measurements describe order, but not relative size or degree of difference between the items measured. In this scale type, the numbers assigned to objects or events represent the rank order (1st, 2nd, 3rd, etc.) of the entities assessed. A Likert scale is a type of ordinal scale and may also use names with an order, such as "bad", "medium" and "good", or "very satisfied", "satisfied", "neutral", "unsatisfied", "very unsatisfied". An example of an ordinal scale is the result of a horse race, which says only which horses arrived first, second, or third but includes no information about race times. Another is the Mohs scale of mineral hardness, which characterizes the hardness of various minerals through the ability of a harder material to scratch a softer one, saying nothing about the actual hardness of any of them. Yet another example is military ranks; they have an order, but no well-defined numerical difference between ranks. When using an ordinal scale, the central tendency of a group of items can be described by using the group's mode (or most common item) or its median (the middle-ranked item), but the mean (or average) cannot be defined. In 1946, Stevens observed that psychological measurement usually operates on ordinal scales, and that ordinary statistics like means and standard deviations do not have valid interpretations. Nevertheless, such statistics can often be used to generate fruitful information, with the caveat that caution should be taken in drawing conclusions from such statistical data.
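
A minimal sketch (hypothetical Likert responses, Python): for ordinal data the mode and median are meaningful, while the arithmetic mean rests on an equal-spacing assumption the scale does not guarantee:

    # Hypothetical Likert responses coded as ordered ranks
    # (1 = "very unsatisfied" ... 5 = "very satisfied").
    responses = [5, 4, 4, 3, 5, 2, 4, 1, 3]

    ordered = sorted(responses)
    median = ordered[len(ordered) // 2]   # middle-ranked item; depends only on order
    print("Median response:", median)     # 4

    # The mean treats the gap between "neutral" and "satisfied" as equal to the
    # gap between "satisfied" and "very satisfied", which is an extra assumption.
    print("Mean (interpret with caution):", sum(responses) / len(responses))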

Psychometricians like to theorise that psychometric tests produce interval scale measures of cognitive abilities (e.g. Lord & Novick, 1968; von Eye, 2005), but there is little prima facie evidence to suggest that such attributes are anything more than ordinal for most psychological data (Cliff, 1996; Cliff & Keats, 2003; Michell, 2008). In particular, IQ scores reflect an ordinal scale, in which all scores are meaningful only for comparison, rather than an interval scale, in which a given number of IQ "points" corresponds to a unit of intelligence. Thus it is an error to write that an IQ of 160 is just as different from an IQ of 130 as an IQ of 100 is different from an IQ of 70.

In mathematical order theory, an ordinal scale defines a total preorder of objects (in essence, a way of sorting all the objects, in which some may be tied). The scale values themselves (such as labels like "great", "good", and "bad"; 1st, 2nd, and 3rd) have a total order, where they may be sorted into a single line with no ambiguities. If numbers are used to define the scale, they remain correct even if they are transformed by any monotonically increasing function. This property is known as order isomorphism. A simple example follows:

Person | Judge's score (x) | Score minus 8 (x - 8) | Tripled score (3x) | Cubed score (x^3)
Alice's cooking ability | 10 | 2 | 30 | 1000
Bob's cooking ability | 9 | 1 | 27 | 729
Claire's cooking ability | 8.5 | 0.5 | 25.5 | 614.125
Dana's cooking ability | 8 | 0 | 24 | 512
Edgar's cooking ability | 5 | -3 | 15 | 125

Since x - 8, 3x, and x^3 are all monotonically increasing functions, replacing the ordinal judge's score by any of these alternative scores does not affect the relative ranking of the five people's cooking abilities. Each column of numbers is an equally legitimate ordinal scale for describing their abilities. However, the numerical (additive) difference between the various ordinal scores has no particular meaning.

Interval scale

Quantitative attributes are all measurable on interval scales, as any difference between the levels of an attribute can be multiplied by any real number to exceed or equal another difference. A highly familiar example of interval scale measurement is temperature on the Celsius scale. In this particular scale, the unit of measurement is 1/100 of the temperature difference between the freezing and boiling points of water under a pressure of 1 atmosphere. The "zero point" on an interval scale is arbitrary, and negative values can be used. The formal mathematical term is an affine space (in this case an affine line). Variables measured at the interval level are called "interval variables" or sometimes "scaled variables", as they have units of measurement. Ratios between numbers on the scale are not meaningful, so operations such as multiplication and division cannot be carried out directly. But ratios of differences can be expressed; for example, one difference can be twice another.
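
A short sketch (Python, using the hypothetical cooking scores from the table above) showing that any monotonically increasing transformation leaves the ranking untouched, while differences between the transformed scores change arbitrarily:

    # Judge's scores from the cooking example above (ordinal data).
    scores = {"Alice": 10, "Bob": 9, "Claire": 8.5, "Dana": 8, "Edgar": 5}

    # Three monotonically increasing transformations of a score x.
    transforms = {
        "x - 8":   lambda x: x - 8,
        "3x":      lambda x: 3 * x,
        "x cubed": lambda x: x ** 3,
    }

    def ranking(values):
        """Return names sorted from best to worst score."""
        return sorted(values, key=values.get, reverse=True)

    original_order = ranking(scores)
    print("Original ranking:", original_order)

    for name, f in transforms.items():
        transformed = {person: f(x) for person, x in scores.items()}
        # The ranking is identical under every monotone transform ...
        assert ranking(transformed) == original_order
        # ... but the gap between 1st and 2nd place changes, so additive
        # differences carry no information on an ordinal scale.
        gap = transformed[original_order[0]] - transformed[original_order[1]]
        print(name, "-> same ranking, 1st-2nd gap =", gap)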

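To make the last point concrete, a brief sketch (hypothetical temperatures, Python): ratios of Celsius values are not meaningful because the zero point is arbitrary, whereas ratios of differences survive a change of units:

    # Hypothetical daily temperatures in degrees Celsius (interval scale).
    monday, tuesday, wednesday = 10.0, 20.0, 40.0

    to_f = lambda c: c * 9 / 5 + 32   # same temperatures on the Fahrenheit scale

    # "Tuesday was twice as hot as Monday" is not a valid claim: the ratio
    # depends on where the arbitrary zero of the scale happens to sit.
    print(tuesday / monday)                 # 2.0 in Celsius ...
    print(to_f(tuesday) / to_f(monday))     # ... about 1.36 in Fahrenheit

    # Ratios of DIFFERENCES are meaningful and unit-independent:
    print((wednesday - tuesday) / (tuesday - monday))                          # 2.0
    print((to_f(wednesday) - to_f(tuesday)) / (to_f(tuesday) - to_f(monday)))  # 2.0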

The central tendency of a variable measured at the interval level can be represented by its mode, its median, or its arithmetic mean. Statistical dispersion can be measured in most of the usual ways, which involve only differences or averaging, such as the range, interquartile range, and standard deviation. Since one cannot divide scale values, one cannot define measures that require a ratio, such as the studentized range or the coefficient of variation. More subtly, while one can define moments about the origin, only central moments are useful, since the choice of origin is arbitrary and not meaningful. One can define standardized moments, since ratios of differences are meaningful, but one cannot define the coefficient of variation, since the mean is a moment about the origin, unlike the standard deviation, which is (the square root of) a central moment.

b. What are the purposes of measurement in social science research?

Answer. Three Purposes of Research

Social research can serve a variety of purposes. Three of the most influential and common purposes of research are exploration, description and explanation. Exploration involves familiarizing a researcher with a topic. Exploration satisfies the researcher's curiosity and desire for improved understanding, tests the feasibility of undertaking a more extensive study, and helps develop the methods that will be used in a study. Description involves describing situations and events through scientific observation. Scientific descriptions are typically more accurate and precise than casual ones. For example, the U.S. Census uses descriptive social research in its examination of the characteristics of the U.S. population. Descriptive studies answer questions of what, where, when, and how; explanatory studies answer questions of why. For example, an explanatory analysis of the 2002 General Social Survey (GSS) data indicates that 38 percent of men and 30 percent of women said marijuana should be legalized, while 55 percent of liberals and 27 percent of conservatives said the same. Given these statistics, you could start to develop an explanation for attitudes toward marijuana legalization. In addition, further study of gender and political orientation could lead to a deeper explanation of this issue.

The Logic of Idiographic vs. Nomothetic Explanation

Idiographic explanation - a "full", detailed, in-depth understanding of a single case; for practical reasons, only a few subjects are studied in this way. An idiographic explanation of the marijuana legalization survey would involve a more exhaustive list of factors that could influence a person's viewpoint on this issue. Therefore, an idiographic explanation would need to consider several factors, such as information from parents and previous experiences, not just political orientation.

Nomothetic explanation - a generalized understanding of a class of cases, with the goal of finding a few factors that can account for many of the variations in a given phenomenon; it is applicable to many subjects.

Regarding the survey mentioned above dealing with people's stances on marijuana legalization, a nomothetic explanation may simply suggest that political orientation is the main driving force behind people's differing opinions on this issue. Hypotheses are not required in nomothetic research.

There are three main criteria for nomothetic causal relationships in social research:

1) The variables must be correlated.
2) The relationship between the variables must be nonspurious.
3) The cause must take place before the effect.

Correlation - an empirical relationship between two variables such that changes in one are associated with changes in the other, or particular attributes in one are associated with particular attributes in the other.

Spurious relationship - a coincidental statistical correlation between two variables, shown to be caused by some third variable. For example, increased ice cream consumption is related to a rise in the crime rate, but this relationship is produced by a third variable: summertime, which brings hot weather and closed schools. Therefore, for a causal relationship, the correlation between the variables must be nonspurious.

False criteria for nomothetic causality:
- Complete causation - a proper nomothetic explanation is probabilistic and does not explain every single case.
- Exceptional cases - exceptions do not disprove a nomothetic explanation.
- Majority of cases - a nomothetic explanation may be applicable to only a minority of cases in a given situation.
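
A small simulation sketch (Python, entirely hypothetical numbers) of the ice cream/crime example: both variables are driven by temperature, so their raw correlation is large, but the partial correlation controlling for temperature is close to zero, exposing the relationship as spurious:

    import math
    import random

    random.seed(1)

    def corr(a, b):
        """Pearson correlation coefficient of two equal-length lists."""
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        sa = math.sqrt(sum((x - ma) ** 2 for x in a))
        sb = math.sqrt(sum((y - mb) ** 2 for y in b))
        return cov / (sa * sb)

    # Hypothetical data: temperature drives BOTH ice cream sales and crime.
    temperature = [random.uniform(5, 35) for _ in range(500)]
    ice_cream   = [2.0 * t + random.gauss(0, 5) for t in temperature]
    crime       = [1.5 * t + random.gauss(0, 5) for t in temperature]

    r_ic = corr(ice_cream, crime)
    r_it = corr(ice_cream, temperature)
    r_ct = corr(crime, temperature)

    # Partial correlation of ice cream and crime, controlling for temperature.
    partial = (r_ic - r_it * r_ct) / math.sqrt((1 - r_it**2) * (1 - r_ct**2))

    print("Raw correlation (spurious):", round(r_ic, 2))    # large and positive
    print("Partial correlation:", round(partial, 2))        # near zero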

Necessary and Sufficient Causes: A necessary cause represents a condition that must be present for the effect to follow. Example: it is necessary for you to take college courses in order to get a degree; take away the courses, and the degree never follows. A sufficient cause represents a condition that guarantees the effect if it is present. Example: skipping an exam would be a sufficient cause for failing it (even though there are other ways to fail it).

2. a. What are the sources from which one may be able to identify research problems?

Answer.

SOURCES OF PROBLEMS

1. Curiosity

One of the oldest and most common sources of research problems is curiosity. Just as your interest in baseball or gardening may stimulate you to investigate the topic in greater depth, so researchers may investigate phenomena that attract their personal interest. For example, a man named Lipset belonged to the International Typographical Union, and his son's well-known study of democratic decision-making processes was specifically concerned with that organization.

The younger Lipset was curious because the union was very large and, according to contemporary social thinking, should have succumbed to an oligarchic decision-making process (a rule of the many by the few), yet it appeared to be very democratic.

2. Significant Others

While personal interests or curiosity may be key motivating factors in problem selection, it is a basic assumption in the social sciences that individuals learn from and are influenced by others. Because researchers are usually recruited and trained in universities, the selection of research topics often reflects the influence of teachers or fellow students. For example, the first book of Carlos Castaneda's adventures with Don Juan (The Teachings of Don Juan) served as his master's thesis, while the third (Journey to Ixtlan) was his Ph.D. dissertation in anthropology. A reading of his advisor's writings suggests that the character of Don Juan is based, at least in part, on Castaneda's advisor, Harold Garfinkel. Garfinkel's notion that people actually construct social reality instead of merely reacting to it is a major theme in Castaneda's works. This type of influence may also be less personal, as when one is influenced by the writings of past theorists.

3. Social Problems

In addition to personal curiosity and the influence of others, concern with social problems has been a major source of social research. As discussed in more detail in Chapter 4, one of the first uses of social surveys was to study poverty. Likewise, concerns with the Nazi extermination of 6 million Jews and with discrimination in this country have generated a vast body of psychological and social-psychological research on ethnic, gender, sexual orientation, and racial prejudice. Concern with the energy crisis and the threat of overpopulation has generated volumes of research. Just as researchers' interests vary, so they also differ in what they perceive to be a social problem. Karl Marx's view of capitalism as a social problem is not shared by all social scientists. Likewise, one may view the feminist movement as a social problem, or one might see sexual orientation stereotyping and prejudice to be the problems. Still, this interest in social problems suggests that researchers are concerned not only with knowledge for knowledge's sake, but also with its application, or with what in Chapter 1 we called intelligent intervention.

Regardless of precipitating factors, one of the basic goals of research is to increase our knowledge about a particular phenomenon. For the scientist this means building a body of theory. As you read in the previous chapter, knowing is a matter of providing a description of reality. While we could conceivably call any description of reality a "theory," the term is generally used more rigorously. When a statement that links together two or more concepts is widely agreed upon, it is given the status of a proposition. A theory consists of a set of propositions that are systematically interrelated and purport to explain some phenomenon. Probably most research is conducted in an effort to test propositions. The research begins with a theory and uses empirical evidence to find out how well that description actually fits reality.

If amenable to observation, either the basic propositions or deduced propositions may be tested against empirical data. In testing theory, the goal is not merely to find support for one theory, but also to eliminate less likely theories. Thus, Durkheim began his study of suicide by attempting to show that previous theories of suicide could not be true. Durkheim first tried to demonstrate that suicide is not a result of mental illness or psychopathic states by showing that rates of suicide and insanity are not related. In the same manner he attempted to discredit theories that attributed the causes of suicide to race, heredity, climate, seasonal temperatures, and imitation before defending his own theory of suicide in terms of social forces.

Research also has the dual role of both specifying and generalizing the domain of a theory. For example, a theory about social power may state that power underlies all organizational relationships. A more specific theoretical statement could be that while people at all organizational levels hold certain kinds of power, those in higher-status positions always have more power than those in lower-status positions. One might also attempt to expand the relationship between power and organizational hierarchies to include other social hierarchies; in this case it might be stated that lawyers have more power than clients or that doctors have more power than patients. Naturally, specification and generalization of theory are major concerns, because we strive to explain as many phenomena as possible.

b. Why is a literature survey important in research?

Ans. Research is carried out in order to inform people of new knowledge or discoveries. However, it cannot be expected that everybody will willingly believe what you are tackling in your research paper. Thus, what you can do to make your research more credible is to support it with other works which have spoken about the same topic as your research. This is where the literature review comes in. Literature sources can include works such as stories, comments, projects, speeches, articles, novels, poems, essays, programs, theories, and others. This is why a literature review involves scanning the pages of any published literature, such as books, newspapers, magazines, websites, web pages, collections, papers, pamphlets, and the like, where you may be able to find any reference to the same topic that you are researching. Here, literature does not refer exclusively to the poetic rendition of words, like that of Shakespeare alone.

There are many reasons why the literature review is regarded as a significant part of any research or dissertation paper. You may ask what makes it so, if it is only supposed to contain tidbits of other related works. The literature review is the part of the paper where the researcher is given the opportunity to strengthen the paper by citing what other reliable authors have said about the topic. This proves that you are not just writing about any random subject, but that many others have also poured their thoughts on the topic. You may also ask what makes the literature review a necessary part of the paper.

This question can be answered by trying not to include the review in your paper. Obviously, it affects the length of your paper, but this is not the noticeable part. What would most certainly be lacking is the fact that your paper, without the literature review, would contain only your own opinions about the facts that you have discovered through your research. How, then, can you further convince the readers, in this case the committee who will scrutinize your paper? This is the need that is answered only by the literature review. The mere fact that you reference the topic by citing what more credible people have said about it builds a stronger foundation for your paper. With a literature review, you need to establish a clear tie between the works that you have cited and the topic that you are writing about. You should be able to justify the inclusion of a certain work in your review so as to make everything that you have written useful. The more useless points you include in your paper, the more the committee will think that you have not put a lot of thought into it. The literature review is also unlike the rest of the paper: while you have to fill most of the paper with your own analysis, in the literature review alone you write purely about the related works of other people.

3. a. What are the characteristics of a good research design?

Ans. Characteristics of Research Design

Generally, a good research design minimizes bias and maximizes the reliability of the data collected and analyzed. The design which gives the smallest experimental error is reported to be the best design in scientific investigation. Similarly, a design which yields maximum information and provides an opportunity for considering different aspects of a problem is considered to be the most appropriate and efficient design. Thus the question of a good design is related to the purpose or objective of the research problem and also to the nature of the problem to be studied. A good research design should satisfy the following four conditions, namely objectivity, reliability, validity and generalizability of the findings.

1. Objectivity: This refers to the findings related to the method of data collection and the scoring of the responses. The research design should permit measuring instruments which are fairly objective, in which every observer or judge scoring the performance must give precisely the same report. In other words, the objectivity of the procedure may be judged by the degree of agreement between the final scores assigned to different individuals by more than one independent observer. This ensures the objectivity of the collected data, which shall be capable of analysis and of supporting generalizations.

2. Reliability: Reliability refers to consistency throughout a series of measurements. For example, if a respondent gives a response to a particular item, he is expected to give the same response to that item even if he is asked repeatedly.

If he keeps changing his response to the same item, the consistency will be lost. So the researcher should frame the items in a questionnaire in such a way that they provide consistency, or reliability.

3. Validity: Any measuring device or instrument is said to be valid when it measures what it is expected to measure. For example, an intelligence test conducted for measuring IQ should measure only intelligence and nothing else, and the questionnaire should be framed accordingly.

4. Generalizability: This means how well the data collected from the samples can be utilized for drawing generalizations applicable to the large group from which the sample is drawn. Thus a research design helps an investigator to generalize his findings, provided he has taken due care in defining the population, selecting the sample, deriving the appropriate statistical analysis, etc., while preparing the research design.

Thus a good research design is one which is methodically prepared and which ensures that:
a) The measuring instrument can yield objective, reliable and valid data.
b) The population is clearly defined.
c) The most appropriate technique of sample selection is used to form an appropriate sample.
d) Appropriate statistical analysis has been carried out, and
e) The findings of the study are capable of generalization.

b. What are the components of a research design?

Ans. Working Design - a preliminary plan for beginning a qualitative research project. It covers:
- subjects to be studied
- sites to be studied
- time frame for data collection
- possible variables to be considered
- purposeful sampling - a nonrandom sampling design that selects subjects or sites due to specific characteristics or phenomena under study

Working Hypotheses - hypotheses regarding possible outcomes that guide the research study (also see foreshadowed problems).

Data Collection - forms of data collection vary widely depending on the qualitative research design employed:
- the researcher must be able to gain access to the situation
- the researcher must decide on his/her role in data collection
  - participant-observer or observer only
  - interactive or noninteractive data collection
- the researcher must decide upon the format for data collection
  - interviews
  - observations

  - oral histories
  - specimen records
  - document collection & review
  - tape recording and/or video recording
- the researcher must employ some system for managing the great volume of data collected
  - reflective journal/log - a record of events and personal thoughts kept as data collection proceeds; it may provide the basis for changes in the working design or reformulation of the hypotheses

Specimen record - a narrative description of one person in a natural situation as seen by skilled observers over a substantial period of time:
- record the stream of behavior
- divide the stream into units
- analyze the units

Oral History - interviews conducted with the use of a tape recorder:
- entire conversation recorded
- allows examination of inflections
- emphasize open-ended questions
- analyzed through listening to the tape rather than transcribing it

Data Analysis and Interpretation

Data Analysis - begins soon after data collection begins, to allow verification, refinement, or restatement of the working hypotheses, foreshadowed problems, or initial theories. Qualitative data analysis is a series of successive approximations toward an accurate description and interpretation of the phenomena under study.

Coding - a process of data reduction through data organization that allows the researcher to "see what's there" for purposes of:
- sorting/categorization
- comparison with initial hypotheses/problem statements (support or contradict earlier ideas)
- refinement of design/hypotheses

Characteristics of a coding system:
- able to accurately capture information relevant to the research problem
- information captured is useful in describing and understanding the phenomenon being studied

Coding categories:
- are influenced by the purpose and context of the study
- are specific to the study

- may be determined before or after data collection and review
- need not be mutually exclusive (e.g. student & teacher perceptions, student demographic characteristics, etc.)

Possible Codes:
- Setting/Context Codes - reflect the context or setting in which the phenomena under study occur (e.g. several different settings, such as high school, vocational school, etc., included under "school environment")
- Process Codes - focus on the sequence of events and how changes occur during the study (e.g. different ways in which students go about dropping out of school)
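
As a minimal sketch (Python, with purely hypothetical codes and interview segments), coded qualitative data can be organized and tallied very simply to support sorting, comparison and refinement:

    from collections import Counter

    # Hypothetical coded interview segments: (segment_id, assigned codes).
    # Codes need not be mutually exclusive, so a segment may carry several.
    coded_segments = [
        ("S01", ["setting:high_school", "process:skipping_classes"]),
        ("S02", ["setting:vocational_school", "process:failing_grades"]),
        ("S03", ["setting:high_school", "process:failing_grades"]),
        ("S04", ["setting:high_school", "process:skipping_classes"]),
    ]

    # Tally how often each code appears across all segments.
    tally = Counter(code for _, codes in coded_segments for code in codes)
    for code, count in tally.most_common():
        print(code, count)

    # Such counts support sorting/categorization, comparison with the initial
    # hypotheses, and refinement of the coding scheme as analysis proceeds.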

4. a. Distinguish between double sampling and multiphase sampling.

Ans. Double Sampling Plans:

How do double sampling plans work?

Double and multiple sampling plans were invented to give a questionable lot another chance. For example, if in double sampling the results of the first sample are not conclusive with regard to accepting or rejecting, a second sample is taken. Application of double sampling requires that a first sample of size n1 is taken at random from the (large) lot. The number of defectives is then counted and compared to the first sample's acceptance number a1 and rejection number r1. Denote the number of defectives in sample 1 by d1 and in sample 2 by d2; then:

If d1 ≤ a1, the lot is accepted.
If d1 ≥ r1, the lot is rejected.
If a1 < d1 < r1, a second sample is taken.

If a second sample of size n2 is taken, the number of defectives, d2, is counted. The total number of defectives is D2 = d1 + d2. This is now compared to the acceptance number a2 and the rejection number r2 of sample 2. In double sampling, r2 = a2 + 1 to ensure a decision on the sample.

If D2 ≤ a2, the lot is accepted.
If D2 ≥ r2, the lot is rejected.

Design of a double sampling plan

The parameters required to construct the OC curve are similar to the single-sample case. The two points of interest are (p1, 1 - α) and (p2, β), where p1 is the lot fraction defective for plan 1 and p2 is the lot fraction defective for plan 2. As far as the respective sample sizes are concerned, the second sample size must be equal to, or an even multiple of, the first sample size. There exist a variety of tables that assist the user in constructing double and multiple sampling plans. The index to these tables is the ratio R = p2/p1, where p2 > p1. One set of tables, taken from the Army Chemical Corps Engineering Agency for α = .05 and β = .10, is given below:

Tables for n1 = n2

R = p2/p1 | Accept number c1 | Accept number c2 | Approximate value of pn1 for P = .95 | Approximate value of pn1 for P = .10
11.90 | 0 | 1 | 0.21 | 2.50
7.54 | 1 | 2 | 0.52 | 3.92
6.79 | 0 | 2 | 0.43 | 2.96
5.39 | 1 | 3 | 0.76 | 4.11
4.65 | 2 | 4 | 1.16 | 5.39
4.25 | 1 | 4 | 1.04 | 4.42
3.88 | 2 | 5 | 1.43 | 5.55
3.63 | 3 | 6 | 1.87 | 6.78
3.38 | 2 | 6 | 1.72 | 5.82
3.21 | 3 | 7 | 2.15 | 6.91
3.09 | 4 | 8 | 2.62 | 8.10
2.85 | 4 | 9 | 2.90 | 8.26
2.60 | 5 | 11 | 3.68 | 9.56
2.44 | 5 | 12 | 4.00 | 9.77
2.32 | 5 | 13 | 4.35 | 10.08
2.22 | 5 | 14 | 4.70 | 10.45
2.12 | 5 | 16 | 5.39 | 11.41
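
A short sketch (Python) of the decision rule described above, using purely hypothetical plan parameters (a1 = 1, r1 = 4, a2 = 5, and r2 = a2 + 1 = 6); multiple sampling, described next, simply repeats the same comparison over further samples:

    def double_sampling_decision(d1, d2=None, a1=1, r1=4, a2=5, r2=6):
        """Apply the double-sampling rule: decide on the first sample if possible,
        otherwise combine both samples. r2 = a2 + 1 guarantees a final decision."""
        if d1 <= a1:
            return "accept on first sample"
        if d1 >= r1:
            return "reject on first sample"
        # a1 < d1 < r1: a second sample is required.
        if d2 is None:
            return "take a second sample"
        total = d1 + d2                    # D2 = d1 + d2
        return "accept" if total <= a2 else "reject"

    # Hypothetical inspection results for a lot:
    print(double_sampling_decision(d1=1))          # accept on first sample
    print(double_sampling_decision(d1=4))          # reject on first sample
    print(double_sampling_decision(d1=2))          # take a second sample
    print(double_sampling_decision(d1=2, d2=3))    # accept (D2 = 5 <= a2)
    print(double_sampling_decision(d1=3, d2=4))    # reject (D2 = 7 >= r2)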

Multiple sampling

Multiple sampling is an extension of the double sampling concept. It involves inspection of 1 to k successive samples, as required to reach an ultimate decision. MIL-STD 105D suggests that k = 7 is a good number. Multiple sampling plans are usually presented in tabular form.

Procedure for multiple sampling

The procedure commences with taking a random sample of size n1 from a large lot of size N and counting the number of defectives, d1.

If d1 ≤ a1, the lot is accepted.
If d1 ≥ r1, the lot is rejected.
If a1 < d1 < r1, another sample is taken.

If subsequent samples are required, the first-sample procedure is repeated sample by sample. For each sample, the total number of defectives found at any stage, say stage i, is

Di = d1 + d2 + ... + di.

This is compared with the acceptance number ai and the rejection number ri for that stage until a decision is made. Sometimes acceptance is not allowed at the early stages of multiple sampling; however, rejection can occur at any stage.

Efficiency measured by the ASN

Efficiency for a multiple sampling scheme is measured by the average sample number (ASN) required for a given Type I and Type II set of errors. The number of samples needed when following a multiple sampling scheme may vary from trial to trial, and the ASN represents the average of what might happen over many trials with a fixed incoming defect level.

b. What is replicated or interpenetrating sampling?

Ans. The experiment should be repeated more than once; thus, each treatment is applied in many experimental units instead of one. By doing so, the statistical accuracy of the experiment is increased. For example, suppose we are to examine the effect of two varieties of rice. For this purpose we may divide the field into two parts, grow one variety in one part and the other variety in the other part, compare the yield of the two parts and draw a conclusion on that basis. But if we are to apply the principle of replication to this experiment, then we first divide the field into several parts, grow one variety in half of these parts and the other variety in the remaining parts. We can then collect the yield data for the two varieties and draw a conclusion by comparing them. The result so obtained will be more reliable than the conclusion we would draw without applying the principle of replication. The entire experiment can even be repeated several times for better results. Conceptually, replication does not present any difficulty, but computationally it does. However, it should be remembered that replication is introduced in order to increase the precision of a study, that is to say, to increase the accuracy with which the main effects and interactions can be estimated.

5. a. How is secondary data useful to a researcher?

Ans. Secondary data is information gathered for purposes other than the completion of a research project. A variety of secondary information sources is available to the researcher gathering data on an industry, potential product applications and the marketplace. Secondary data is also used to gain initial insight into the research problem. Secondary data is classified in terms of its source as either internal or external. Internal, or in-house, data is secondary information acquired within the organization where research is being carried out. External secondary data is obtained from outside sources. The two major advantages of using secondary data in market research are time and cost savings.

The secondary research process can be completed rapidly, generally in two to three weeks. Substantial useful secondary data can be collected in a matter of days by a skillful analyst. When secondary data is available, the researcher need only locate the source of the data and extract the required information. Secondary research is generally less expensive than primary research. The bulk of secondary research data gathering does not require the use of expensive, specialized, highly trained personnel. Secondary research expenses are incurred by the originator of the information.

There are also a number of disadvantages of using secondary data. These include:

- Secondary information pertinent to the research topic is either not available, or is only available in insufficient quantities.
- Some secondary data may be of questionable accuracy and reliability. Even government publications and trade magazine statistics can be misleading. For example, many trade magazines survey their members to derive estimates of market size, market growth rate and purchasing patterns, and then average out these results. Often these statistics are merely average opinions based on less than 10% of their members.
- Data may be in a different format or units than is required by the researcher.
- Much secondary data is several years old and may not reflect the current market conditions. Trade journals and other publications often accept articles six months before they appear in print. The research may have been done months or even years earlier.

As a general rule, a thorough review of the secondary data should be undertaken prior to conducting primary research. The secondary information will provide a useful background and will identify key questions and issues that will need to be addressed by the primary research.

Internal data sources

Internal secondary data is usually an inexpensive information source for the company conducting research, and is the place to start for existing operations. Internally generated sales and pricing data can be used as a research source. This data can be used to define the competitive position of the firm, to evaluate a marketing strategy the firm has used in the past, or to gain a better understanding of the company's best customers. There are three main sources of internal data. These are:

1. Sales and marketing reports. These can include such things as:

- Type of product/service purchased
- Type of end-user/industry segment
- Method of payment
- Product or product line
- Sales territory
- Salesperson

- Date of purchase
- Amount of purchase
- Price
- Application by product
- Location of end-user

2. Accounting and financial records. These are often an overlooked source of internal secondary information and can be invaluable in the identification, clarification and prediction of certain problems. Accounting records can be used to evaluate the success of various marketing strategies, such as the revenues from a direct marketing campaign. There are several problems in using accounting and financial data. One is the timeliness factor: it is often several months before accounting statements are available. Another is the structure of the records themselves. Most firms do not adequately set up their accounts to provide the types of answers to research questions that they need. For example, the accounting systems should capture project/product costs in order to identify the company's most profitable (and least profitable) activities. Companies should also consider establishing performance indicators based on financial data. These can be industry standards or unique ones designed to measure key performance factors that will enable the firm to monitor its performance over a period of time and compare it to its competitors. Some examples may be sales per employee, sales per square foot, and expenses per employee (salesperson, etc.).

3. Miscellaneous reports. These can include such things as inventory reports, service calls, the number (qualifications and compensation) of staff, and production and R&D reports. Also, the company's business plan and customer call (complaint) logs can be useful sources of information.

External data sources

There is a wealth of statistical and research data available today. Some sources are:

- Federal government
- Provincial/state governments
- Statistics agencies
- Trade associations
- General business publications
- Magazine and newspaper articles
- Annual reports
- Academic publications
- Library sources
- Computerized bibliographies
- Syndicated services

A good place to start your search is the local city, college or university library. Most reference librarians are very knowledgeable about what data is available, or where to look to find it. Also contact government libraries and departments for research reports/publications they may have produced.

b. What are the criteria used for evaluation of secondary data?

Ans. When a researcher wants to use secondary data for his research, he should evaluate them before deciding to use them.

1. Data Pertinence: The first consideration in evaluation is to examine the pertinence of the available secondary data to the research problem under study. The following questions should be considered:
a) What are the definitions and classifications employed? Are they consistent?
b) What are the measurements of variables used? To what degree do they conform to the requirements of our research?
c) What is the coverage of the secondary data in terms of topic and time? Does this coverage fit the needs of our research?

2. Data Quality: If the researcher is convinced about the pertinence of the available secondary data to his needs, the next step is to examine the quality of the data. The quality of data refers to their accuracy, reliability and completeness. The assurance and reliability of the available secondary data depend on the organization which collected them and the purpose for which they were collected. What is the authority and prestige of the organization? Is it well recognized? Is it noted for reliability? Is it capable of collecting reliable data? Does it use trained and well-qualified investigators? The answers to these questions determine the degree of confidence we can have in the data and their accuracy. It is important to go to the original source of the secondary data rather than to use an intermediate source which has quoted from the original. Only then can the researcher review the cautionary and other comments that were made in the original source.

3. Data Completeness: Completeness refers to the actual coverage of the published data. This depends on the methodology and sampling design adopted by the original organization. Is the methodology sound? Is the sample size small or large? Is the sampling method appropriate? Answers to these questions may indicate the appropriateness and adequacy of the data for the problem under study. The question of possible bias should also be examined. Did the purpose for which the original organization collected the data have a particular orientation? Was the study made to promote the organization's interest? How was the study conducted? These are important clues. The researcher must be on guard when the source does not report the methodology and sampling design; then it is not possible to determine the adequacy of the secondary data for the researcher's study.

6. What are the differences between observation and interviewing as methods of data collection? Give two specific examples of situations where either observation or interviewing would be more appropriate.

Interviews

In interviews, information is obtained through inquiry and recorded by enumerators. Structured interviews are performed by using survey forms, whereas in open interviews notes are taken while talking with respondents. The notes are subsequently structured (interpreted) for further analysis. Open-ended interviews, which need to be interpreted and analysed even during the interview, have to be carried out by well-trained observers and/or enumerators.

As in preparing a questionnaire, it is important to pilot test forms designed for the interviews. The best attempt by the designer to clarify and focus the questions cannot anticipate all possible respondent interpretations. A small-scale test prior to actual use for data collection will assure better data and avoid wasting time and money. Although structured interviews can be used to obtain almost any information, as with questionnaires the information is based on personal opinion. Data on variables such as catch or effort are potentially subject to large errors, due to poor estimates or intentional misreporting of sensitive information.

Open-ended interviews

Open-ended interviews cover a variety of data-gathering activities, including a number of social science research methods. Focus groups are small (5-15 individuals) and composed of representative members of a group whose beliefs, practices or opinions are sought. By asking initial questions and structuring the subsequent discussion, the facilitator/interviewer can obtain, for example, information on common gear use practices, responses to management regulations or opinions about fishing. Panel surveys involve the random selection of a small number of representative individuals from a group, who agree to be available over an extended period, often one to three years. During that period, they serve as a stratified random sample of people from whom data can be elicited on a variety of topics.

Structured interview

Generally, structured interviews are conducted with a well-designed form already established. The forms are filled in by researchers instead of respondents, and in that respect the method differs from questionnaires. While this approach is more expensive, more complicated questions can be asked and the data can be validated as they are collected, improving data quality.

Interviews can be undertaken with a variety of data sources (from fishers to consumers), and through alternative media, such as by telephone or in person. Structured interviews form the basis for much of the data collection in small-scale fisheries.

In an interview approach for sample catch, effort and prices, the enumerators work according to a schedule of landing site visits to record data. Enumerators can be mobile (that is, sites are visited on a rotational basis) or resident at a specific sampling site. Their job is to sample vessels, obtaining data on landings, effort and prices from all boat/gear types that are expected to operate during the sampling day. The sample should be as representative as possible of fleet activities. Some additional data related to fishing operations may be required for certain types of fishing units, such as beach seines or boats making multiple fishing trips in one day. For these, the interview may cover planned activities as well as activities already completed.

In an interview approach for boat/gear activities, the enumerators work according to a schedule of homeport visits to record data on boat/gear activities. Enumerators can be mobile (that is, homeports are visited on a rotational basis) or resident at a specific sampling site. In either case, their job is to determine the total number of fishing units (and, if feasible, fishing gears) for all boat/gear types based at that homeport and the number of those that have been fishing during the sampling day. There are several ways of recording boat/gear activities. In many cases, they combine the interview method with direct observations. Direct observations can be used to identify inactive fishing units by observing those that are moored or beached, provided the total number of vessels based at the homeport is already known, perhaps from a frame survey or register. Often enumerators will still have to verify that vessels are fishing, as opposed to other activities, by using interviews during the visit. The pure interview approach can be used in those cases where a pre-determined sub-set of the fishing units has been selected. The enumerator's job is to trace all fishers on the list and, by means of interviewing, find out those that have been active during the sampling day. For sites involving a workable number of fishing units (e.g. not larger than 20), the interview may involve all fishing units. Sometimes it is possible to ask questions on fishing activity which refer to the previous day or even to two days back. This extra information increases the sample size significantly with little extra cost, ultimately resulting in better estimates of total fishing effort. Experience has shown that most of the variability in boat/gear activity is in time rather than space.

Direct observations

Observers

Observers can make direct measurements on the fishing vessels, at landing sites, at processing plants, or in markets. The variables that enumerators can collect include catch (landings and discards), effort, vessels/gears, operations, environmental variables (e.g. sea state, temperature), biological variables (e.g. length, weight, age), and the values and quantities of landings and sales. In practice, observers do not only make direct measurements (observations), but also conduct interviews and surveys using questionnaires. They might also be involved in data processing and analysis. The tasks of an observer are difficult, and adequate training and supervision are therefore essential. Clear decisions need to be made on the nature and extent of data collected during any one trip. Often, the amount of data and frequency of collection can be established analytically with preliminary data. Preferably, observers should only collect data, not carry out other activities, such as enforcement, licensing or tax collection. This should help to minimise bias by reducing the incentives to lie. Problems in terms of conflicts between data collection and law enforcement, for example, can be reduced by clear demarcation, separating activities by location or time. This becomes a necessity for at-sea observers. Their positions on fishing vessels and the tasks that they perform depend significantly on a good working relationship with the captain and crew, which can be lost if they are perceived as enforcement personnel. The major data obtained through at-sea observers are catch and effort data, which are often used for cross-checking fishing logs. At the same time, the at-sea observers can collect extra biological (fish size, maturity, and sex), by-catch and environmental data, as well as other information on the gears, fishing operations etc. Frequently, discards data can only be collected by at-sea observers.

The main data obtained from observers at landing sites, processing plants and markets include landing (amount, quality, value and price), biological (size, maturity), and effort (how many hauls, hours fishing) data. For the large-scale fishery where a logbook system is used, data collected at landing sites can be used to cross-check data recorded in logbooks. Data collected from processing plants include quantities by species and, especially in modern factory practices, the batch number of raw materials, which can sometimes be traced back to fishing vessels. These data, if collected, can be used to validate landing data. Collecting data to estimate raising factors for converting landed processed fish weight to the whole-weight equivalent may be necessary. By sampling fish before and after processing, conversion factors may be improved. Potential seasonal, life history stage and other variations in body/gut weight ratios suggest that date, species, sex and size should be recorded in samples. Economic and demographic data at each level (e.g. input and output of various products to and from markets and processors) are usually obtained by interview and questionnaire. However, the data directly collected by enumerators can also be a major source as well as supporting data for those collected through other methods. While product data in processing plants can be collected through questionnaires (6.3.2) or interviews (6.3.3), enumerators can directly collect many physical variables (weight, number, size etc.) more accurately.

Automatic scales, through which a continuous stream of fish passes, can record the weight of fish mechanically or through computerised sensors. Similarly, mechanical or automatic weighing bins for whole frozen or defrosted fish, prior to entry to a processing line or cold store, can be used to record weights for each batch. Otherwise, boxes need to be counted and sub-sampled to ensure their fish contents are correctly identified and weighed. Fish is often landed in bulk together with non-fish materials (e.g. ice, brine slurry, packing material and pallets). It can be very difficult to estimate the total fish weight, let alone the weight by species, product and size grade. Methods need to be established to record whether non-fish material is included in any weighing process (e.g. are the scales set to automatically subtract the pallet weight?). In the case of processed fish in sealed boxes, it may be that sampling to determine an average weight and then box or pallet counting is sufficient. Alternatively, each box or pallet is weighed and a note taken of whether the box and pallet weight should be subtracted at a later date when processing the data. Complete landing of all catch in relation to a vessel's trip (i.e. emptying of the holds) is preferred, since records can then be matched against logsheets. However, in some circumstances off-loading in harbours, at the dock or at sea may only be partial, some catch being retained on board until the next off-loading. In this case, records should be maintained of both the catch landed and that retained on board.
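
A small numerical sketch (Python, all figures hypothetical) of the raising-factor idea mentioned above: a conversion factor estimated from fish sampled before and after processing is used to convert landed processed weight back to a whole-weight equivalent:

    # Hypothetical paired samples: whole weight and processed (e.g. gutted)
    # weight of the same fish, in kg.
    whole_weights     = [2.4, 3.1, 1.8, 2.9, 2.2]
    processed_weights = [1.7, 2.2, 1.3, 2.0, 1.6]

    # One simple estimate of the conversion (raising) factor:
    # the average ratio of whole weight to processed weight.
    factor = sum(w / p for w, p in zip(whole_weights, processed_weights)) / len(whole_weights)

    # Landed catch recorded as processed weight, e.g. from weighed boxes
    # (box/pallet tare weight already subtracted).
    landed_processed_kg = 5300.0
    whole_weight_equivalent = landed_processed_kg * factor

    print("Estimated conversion factor:", round(factor, 2))
    print("Whole-weight equivalent (kg):", round(whole_weight_equivalent))
    # Because body/gut ratios vary by season, species, sex and size, the date,
    # species, sex and size of the sampled fish should also be recorded.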



Semester: 3 - Assignment Set: 2

1. a. Explain the general characteristics of observation.

Answer: Every moment we are exposed to some kind of events or occurrences. If we try to form a definite opinion about the events we come across, we have to observe the instances keenly. Random observation of instances will not lead to any clear understanding of the phenomena of nature. In the words of Jevons, "To observe is merely to notice events and changes which are produced in the ordinary course of nature, without being able, or at least attempting, to control or vary those changes". In this way, observations performed with a definite purpose are different from casual perceptions.

'Observation' has been derived from the Latin words 'ob' and 'servare', which together mean 'to keep before' (the mind). The knowledge derived by placing something before the mind leads to observational knowledge. Usually, perceptual knowledge is considered to be observational knowledge. But in respect of inductive reasoning, 'observation' has been defined as regulated perception with a definite purpose. This shows that three factors are involved in the case of an observation: there must be some object to be observed, the sense organs to observe the object, and the mind to become aware of it. This process is repeated several times in order to arrive at a conclusion.

Characteristics:

i. Observation is a case of regulated perception of events. Observations are made with the help of the sense organs, so observation is basically perceptual. Perception may be either external or internal. Perception of natural events or occurrences is external perception. To know something directly by introspection, without using the sense organs, is called internal perception; the feeling of sorrow, joy, happiness, etc. is internal perception. The vastness of nature is present before us, and every moment we come across some event of nature. When similar types of events are observed repeatedly, one feels the need to find an explanation for the functioning of nature. That helps us to distinguish random or casual perception from regulated perception.

ii. Observation should be systematic and selective. Observation excludes cases of careless and stray perception.

It should be systematic and selective. When the purpose of observation is decided, we select those instances which are relevant to that purpose. Suppose we want to observe the colour of crows; then, out of the different types of birds, we select only crows to observe. Hence perception should not be careless or casual. The aim of observation is to establish some generalized truths, and a general truth cannot be derived from stray or casual perception. The perception should be systematic and selective.

iii. Observation should be impartial and free from any bias. This means that the observation should be strictly objective. Sometimes, in order to establish a definite conclusion, we overlook certain instances which are not favourable to the conclusion. For example, when a sales representative demonstrates the utilities of a particular product, he shows us only some of its suitable utilities and overlooks those instances which are not favourable for the purpose of the demonstration. This is an example of biased observation, and such biased observation should be avoided. Observations should be objective.

iv. Similarly, observation should be neutral. If neutrality is not maintained, it may lead to fallacious observations. For example, while evaluating answer scripts, if the examiner thinks that he is evaluating the scripts of brilliant students, then the mistakes present in the answer scripts may be overlooked. A prejudiced mind cannot make an observation neutral. If a person is biased, then his observation will not be true or objective. Joyce has pointed out that very often observations are not free from subjective influences. There can be three types of subjective influences on the observer, namely intellectual, physical and moral:

a) The intellectual condition refers to the interest and sincerity of the observer in knowing. If there is no desire to know something, then careful and objective observation may not take place. Because of this condition we make a distinction between intentional observations and baseless observations. A sound mind on the part of the observer helps in satisfying this condition.

b) The sense organs of the observer should not be defective; otherwise the observations will be fallacious. Moreover, our sense organs have a limited ability to perceive things: germs are not visible to the naked eye, many stars and planets are not visible to us, and a colour-blind man cannot observe colours perfectly. In such cases, if the proper instruments are not used, erroneous observations take place. Hence the physical condition should be satisfied for true and unbiased observations.

c) The third condition is a moral one. It is obvious that for impartial observation there should not be any dogma or bias. Thus, for impartial observation, the observer should be free from impositions or any influences.


Unless one has a free and impartial mind, his observation may not be objective, real and accurate.

v. Observation is an active process of knowing the truth. Knowledge gained through observation is always active; the involvement of the sense organs makes it so. Of course, experiments are more active than observations, but observations are not passive.

vi. Observation should be simple and direct. Simple and direct observations help in knowing uncontroversial truths. Since the aim of observation is to obtain right knowledge and to establish the material truth of a general proposition, it should be simple and direct.

b. What is the utility of observation in business research?

Answer: If our goal is to find an idea or business opportunity, we may need information on the unmet needs of consumers, the most popular products, the most profitable businesses, new tastes, new fashions, etc. In this case, using the observational technique would mean going to markets or shopping areas and seeing which products or services are the most popular, which products are asked for but not found, which products could be replaced by others that might have a better reception, and so on.

If our goal is to analyse our target market, we may need information about their preferences, tastes, desires, behaviours, habits, etc. In this case, using the observational technique would mean visiting the places our target audience frequents and watching them as they walk around the area, review products, ask questions, and choose certain products.

If our goal is to analyse the competition, we may need information about their products, processes, personnel, strategies, etc. In this case, using the observational technique would mean visiting their premises and looking at their processes, staff performance, customer reactions, etc. Or we could visit the markets or shopping areas where they sell their products and examine their products (models, brands), prices (discounts), distribution strategies (market outlets) and promotion strategies (advertising media, slogans, etc.). Or we may choose to purchase some of their products in order to analyse them better.

If our goal is to evaluate the feasibility of leasing a new location for our business, we may need information on customer traffic, accessibility, visibility of the location, etc. In this case, using the observational technique could mean walking around the prospective location as much as possible, observing customer traffic in the area throughout the day, the presence of competitors and the number of visitors they have, the general environment of the area, and so on.

The advantage of using the observation technique is that we can obtain accurate information that could not otherwise be obtained, for example information on spontaneous behaviours that occur only in everyday life and in natural settings, or information that people could not or would not be willing to provide


us for several reasons. Another advantage is that the technique is inexpensive and easy to apply. However, a disadvantage of this technique is the inability to identify emotions, attitudes, or the motivations that lead a consumer to perform an act. It is therefore always advisable to use it alongside other research techniques.

2. (a) Briefly explain interviewing techniques in business research.

Answer: Interviewing a person requires good communication skills, presence of mind, and a general sense of logic. Interviews can be of different types, such as business interviews, research interviews, media interviews and so on. Research interviews are conducted with the primary aim of gaining information from the subjects, and hence require a deep study of the research subject as well as a clear idea of what information is required from the subjects. Media interviews require great presence of mind and study of the person you want to interview. Business interviews are a tough job no matter which side of the table you are sitting on. If you thought it was a tough job to appear for an interview, then you must know that interviewing a candidate is no child's play. There are many things that need to be kept in mind while you interview a candidate. The basic interviewing techniques need to be followed before the interview, during the interview and after the interview as well. Let us have a step-by-step look at the basic interviewing techniques.

Before the Interview

There are a few pointers which one must keep in mind before actually interviewing a candidate:

Make sure you inform the candidate about the venue and timing of the interview well in advance. It is also considered polite to convey the approximate duration of the interview to the candidate.

Before the interview, make sure you have a clear idea of the qualities and technical skills that you are looking for.

Plan the structure of the interview: determine whether you want to keep it a rigorous question-and-answer session, or want the candidate to speak his mind about general issues. If you want to keep it a question-and-answer session, make sure you list all the probable questions; even if you want to make it an impromptu affair, it is better to list the general points that you want to address during the interview.

In case you already have the candidate's resume, reference letters or any other documents which have been submitted beforehand, make it a point to go through them. They will give you an idea about the candidate's educational qualifications, work experience and other useful


information. This will help you ask better and more relevant questions.

The Actual Interview

The actual interview requires a combination of good communication skills, presence of mind and enough research about the company's requirements as well as the job profile. Here are some of the basic techniques to remember during the interview:

Greet the person with a smile and a professional handshake. Make sure the candidate is comfortable; do not intimidate the candidate.

Make sure to mention things from the candidate's resume or other submitted material that you find impressive or problematic. Do not ask personal questions during a job interview.

Listen carefully while the candidate speaks, and ask questions about the things that the candidate mentions during the interview. In case the candidate seems reluctant or is stuck at a particular point, make sure to lead the conversation further by introducing another issue. Keep the interview like a healthy conversation rather than making it a question-and-answer or quiz session.

Keep an eye on your watch and make sure the interview does not extend way beyond the set time. At the end of the interview, make sure you tell the candidate the time frame within which you will inform him of the decision, or schedule the next interview.

After the Interview

After the interview, make sure you call up the candidate within the promised time frame and inform him of the decision without beating around the bush. In case you have rejected the candidate, be polite while conveying the message, and also assure them that if the company needs their services in future, you will contact them. Remember that interviewing a person is a tough job; keep all the above points in mind and rest assured that the interview will be smooth sailing.

b. What are the problems encountered in an interview?

Answer: Problems Encountered

As in most surveys, difficulties were encountered during the data collection period, and several of these were similar across countries. One of the common problems encountered by the research

teams was the limited time allocated, as the questionnaires were quite long (it took approximately 12 hours to complete one questionnaire). The interviewers had to collect primary data from a considerable number of farms in a very short period of time. In addition, the geographical location of the farms presented a difficulty, as many of them were sparsely distributed in remote areas of the provinces. For example, in Brazil, access to farms was a problem because most of the farms were located as far as 17 kilometers away from municipal, state, or federal roads. Generally all country teams encountered reluctance on the part of farm decision-makers to cooperate and share information with the interviewers. This was one of the reasons why the planned number of samples was not met. There were also several instances when the respondents, especially those with very large farms, failed to keep their initial interview appointments with the research team after all the arrangements had been made. They were either out of the office or farm, or simply changed their minds and refused outright to be interviewed. A second round of appointments was therefore required. Extreme difficulty was also experienced in obtaining permission to interview decisionmakers in large commercial farms. For instance, in Thailand, some of the medium and large farms refused to participate without prior appointment, and some gave vague answers and figures, underestimating values especially concerning sales and profits (showing business loss over the years), because of fear that revealed information might be used against them as grounds for tax fraudulence charges. Some were hesitant to share information because their integrators prohibit them from exposing it (particularly data on capital investment, costs, and returns) to the interviewers. In such cases, interviewers had to look for replacements in order to meet the targeted number of respondents. Small-scale farmers, on the other hand, were very much willing to share information, but had difficulty in recalling some of it, as they generally do not keep records of their expenditures and costs. Respondents' difficulty in recalling information was a general problem in all study sites. In particular, obtaining data on feed costs was problematic because farmers were cautious in revealing the actual mix of ingredients they use for feeds (in the case of broilers, layers, and swine). It was also difficult to calculate feed costs for dairy because some farms would let cattle graze on open field. It was difficult to find a sufficient sample for different types of management, such as independents and contracts, since in some countries one or the other dominates. In Thailand, independent layer producers dominated; in India, there were no contracts for layers. In the Philippines, hog contract growers were concentrated in one or two provinces, large independent commercial broiler farms were difficult to identify and locate because of outdated lists, and small independents were extremely small (up to only 100 birds). In Brazil, contract and commercial broiler growers were prevalent; while in India, there was no up-to-date information on the population of poultry farms, making it difficult to select appropriate samples. 3a. What are the various steps in processing of data? Ans. 5 Steps to Data Processing


Data is an integral part of all business processes. It is the invisible backbone that supports all the operations and activities within a business. Without access to relevant data, businesses would get completely paralyzed. This is because quality data helps formulate effective business strategies and fruitful business decisions. Therefore, the quality of data should be maintained in good condition in order to facilitate smooth business proceedings. In order to enhance business proceedings, data should be made available in all possible forms in order to increase the accessibility of the same. Data processing refers to the process of converting data from one format to another. It transforms plain data into valuable information and information into data. Clients can supply data in a variety of forms, be it .xls sheets, audio devices, or plain printed material. Data processing services take the raw data and process it accordingly to produce sensible information. The various applications of data processing can convert raw data into useful information that can be used further for business processes. Companies and organizations across the world make use of data processing services in order to facilitate their market research interests. Data consists of facts and figures, based on which important conclusions can be drawn. When companies and organizations have access to useful information, they can utilize it for strategizing powerful business moves that would eventually increase the company revenue and decrease the costs, thus expanding the profit margins. Data processing ensures that the data is presented in a clean and systematic manner and is easy to understand and be used for further purposes. Here are the 5 steps that are included in data processing: Editing There is a big difference between data and useful data. While there are huge volumes of data available on the internet, useful data has to be extracted from the huge volumes of the same. Extracting relevant data is one of the core procedures of data processing. When data has been accumulated from various sources, it is edited in order to discard the inappropriate data and retain relevant data. Coding Even after the editing process, the available data is not in any specific order. To make it more sensible and usable for further use, it needs to be aligned into a particular system. The method of coding ensures just that and arranges data in a comprehendible format. The process is also known as netting or bucketing. Data Entry After the data has been properly arranged and coded, it is entered into the software that performs the eventual cross tabulation. Data entry professionals do the task efficiently. Validation After the cleansing phase, comes the validation process. Data validation refers to the process of


thoroughly checking the collected data to ensure optimal quality levels. All the accumulated data is double-checked in order to ensure that it contains no inconsistencies and is entirely relevant.

Tabulation
This is the final step in data processing. The final product, i.e. the data, is tabulated and arranged in a systematic format so that it can be analysed further. All these processes make up the complete data processing activity, which ensures that the data is available for access.
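As a rough sketch of how these five steps might look in practice, the following Python example (using pandas) walks a few hypothetical survey responses through editing, coding, entry, validation and tabulation; the column names, codes and values are invented for illustration and are not part of any prescribed procedure.

```python
# A minimal sketch of the five processing steps on hypothetical survey data.
# Column names, codes and values are illustrative assumptions.
import pandas as pd

raw = pd.DataFrame({
    "respondent_id": [1, 2, 2, 3, 4],
    "age": [34.0, -1.0, -1.0, 52.0, 29.0],          # -1 used here as a missing-value code
    "satisfaction": ["Satisfied", "Very satisfied", "Very satisfied",
                     "Neutral", "Unsatisfied"],
})

# 1. Editing: discard duplicate records and flag implausible values.
edited = raw.drop_duplicates(subset="respondent_id").copy()
edited.loc[edited["age"] < 0, "age"] = float("nan")

# 2. Coding: map verbal responses onto an ordinal numeric scheme (bucketing).
codes = {"Very unsatisfied": 1, "Unsatisfied": 2, "Neutral": 3,
         "Satisfied": 4, "Very satisfied": 5}
edited["satisfaction_code"] = edited["satisfaction"].map(codes)

# 3. Data entry: in practice the coded records are keyed into the analysis
#    software at this point; here they are already held in a DataFrame.

# 4. Validation: double-check completeness and permissible ranges.
assert edited["satisfaction_code"].between(1, 5).all()
print("Records with missing age:", int(edited["age"].isna().sum()))

# 5. Tabulation: arrange the cleaned data in a systematic format for analysis.
print(edited["satisfaction"].value_counts())
```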
b. How is data editing done at the time of recording of data?

Answer: Processing of data begins with the editing of the data instruments. Editing is a process of checking to detect and correct errors and omissions. Data editing happens at two stages: one at the time of recording of the data, and the second at the time of analysis of the data.

Data Editing at the Time of Recording of Data
Document editing and testing of the data at the time of data recording is done with the following questions in mind (a small illustrative check follows the list):

1. Do the filters agree, or are the data inconsistent?
2. Have missing values been set to values which are the same for all research questions?
3. Have variable descriptions been specified?
4. Have labels for variable names and value labels been defined and written?
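A few of these recording-time checks can be illustrated in code. The sketch below assumes invented variable names, missing-value codes and label dictionaries; it is only an illustration of the kind of check implied by questions 2 to 4 above, not a standard routine.

```python
# Illustrative recording-time checks; names and label dictionaries are assumed.
import pandas as pd

data = pd.DataFrame({"gender": [1, 2, 9], "income": [35000, -99, 48000]})

missing_codes = {"gender": 9, "income": -99}     # same missing-value convention for every question?
descriptions = {"gender": "Respondent gender", "income": "Monthly household income"}
value_labels = {"gender": {1: "Male", 2: "Female", 9: "Not stated"}}

for col in data.columns:
    # Questions 2 and 3: missing-value codes and variable descriptions specified?
    assert col in missing_codes, f"no missing-value code defined for {col}"
    assert col in descriptions, f"no variable description specified for {col}"

# Question 4: have value labels been defined for every coded value that occurs?
for col, labels in value_labels.items():
    unlabelled = set(data[col].unique()) - set(labels)
    if unlabelled:
        print(f"{col}: values without labels -> {unlabelled}")
```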

All editing and cleaning steps are documented, so that the redefinition of variables or later analytical modification requirements can be easily incorporated into the data sets.

4. a. What are the fundamentals of frequency distribution?

Answer: The fundamental frequency, often referred to simply as the fundamental and abbreviated f0 or F0, is defined as the lowest frequency of a periodic waveform. In terms of a superposition of sinusoids (e.g. a Fourier series), the fundamental frequency is the lowest-frequency sinusoid in the sum. All sinusoidal and many non-sinusoidal waveforms are periodic, which is to say they repeat exactly over time. A single period is thus the smallest repeating unit of a signal, and one period describes the signal completely. We can show a waveform is periodic by finding some period T for which the following equation is true:

x(t) = x(t + T) = x(t + 2T) = x(t + 3T) = ...


Where x(t) is the function of the waveform. This means that for multiples of some period T the value of the signal is always the same. The lowest value of T for which this is true is called the fundamental period (T0), and the fundamental frequency (F0) is given by the following equation:

F0 = 1 / T0

where F0 is the fundamental frequency and T0 is the fundamental period.

The fundamental frequency of a sound wave in a tube with a single CLOSED end can be found using the following equation:

F0 = v / (4L)

L can be found using the following equation:

L = v / (4F0)

λ (lambda) can be found using the following equation:

λ = 4L

The fundamental frequency of a sound wave in a tube with either both ends OPEN or both ends CLOSED can be found using the following equation:

F0 = v / (2L)

L can be found using the following equation:

L = v / (2F0)

The wavelength, which is the distance in the medium between the beginning and end of a cycle, is found using the following equation:

λ = v / F0

Where: F0 = fundamental frequency, L = length of the tube,


v = velocity of the sound wave, λ = wavelength.

At 20 °C (68 °F) the speed of sound in air is 343 m/s (1129 ft/s). This speed is temperature dependent and increases at a rate of 0.6 m/s for each degree Celsius increase in temperature (1.1 ft/s for every increase of 1 °F). The velocity of a sound wave at different temperatures:

v = 343.2 m/s at 20 °C
v = 331.3 m/s at 0 °C
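As a quick numeric illustration of the tube formulas above, the short Python sketch below evaluates F0 for a closed-end tube and an open tube; the tube length used is an assumed example value.

```python
# Fundamental frequency of an air column; the length L is an assumed example.
v = 343.0   # speed of sound in air at 20 °C, m/s
L = 0.5     # tube length, m (illustrative)

f0_closed = v / (4 * L)            # one end closed: F0 = v / (4L)
f0_open = v / (2 * L)              # both ends open (or both closed): F0 = v / (2L)
wavelength_closed = v / f0_closed  # λ = v / F0, which equals 4L here

print(f"Closed-end tube: F0 = {f0_closed:.1f} Hz, wavelength = {wavelength_closed:.2f} m")
print(f"Open tube:       F0 = {f0_open:.1f} Hz")
```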

Mechanical systems

Consider a beam, fixed at one end and having a mass attached to the other; this would be a single-degree-of-freedom (SDoF) oscillator. Once set into motion, it will oscillate at its natural frequency. For a single-degree-of-freedom oscillator, a system in which the motion can be described by a single coordinate, the natural frequency depends on two system properties: mass and stiffness. The radian frequency, ωn, can be found using the following equation:

ωn = √(k / m)

Where: k = stiffness of the beam, m = mass of the weight, ωn = radian frequency (radians per second).

From the radian frequency, the natural frequency, fn, can be found by simply dividing ωn by 2π. Without first finding the radian frequency, the natural frequency can be found directly using:

fn = (1 / 2π) √(k / m)

Where: fn = natural frequency in hertz (cycles/second), k = stiffness of the beam (Newtons/metre, or N/m), m = mass at the end (kg).

While doing the modal analysis of structures and mechanical equipment, the frequency of the first mode is called the fundamental frequency.
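A minimal numeric check of these single-degree-of-freedom formulas, with assumed example values for the stiffness and the mass:

```python
# Natural frequency of a mass on a cantilever beam (SDoF); k and m are assumed examples.
import math

k = 2000.0   # beam stiffness, N/m
m = 5.0      # attached mass, kg

omega_n = math.sqrt(k / m)        # radian frequency, rad/s
f_n = omega_n / (2 * math.pi)     # natural frequency, Hz

print(f"omega_n = {omega_n:.2f} rad/s, f_n = {f_n:.2f} Hz")
```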

b. What are the types and general rules for graphical representation of data?

Answer: REPRESENTATION OF DATA


Besides the tabular form, data may also be presented in some graphic or diagrammatic form. The transformation of data through visual methods like graphs, diagrams, maps and charts is called representation of data.

The need for representing data graphically: Graphics, such as maps, graphs and diagrams, are used to represent large volumes of data. They are necessary because:

If the information is presented in tabular form or in a descriptive record, it becomes difficult to draw inferences. Graphical form makes it possible to easily form visual impressions of the data.

The graphic method of representing data enhances our understanding and makes comparisons easy. Besides, such methods create an imprint on the mind for a longer time.

It is a time-consuming task to draw inferences from whatever is presented in non-graphical form. Graphics present characteristics in a simplified way, making it easy to understand the patterns of population growth, distribution and density, sex ratio, age-sex composition, occupational structure, etc.

General Rules for Drawing Graphs, Diagrams and Maps

1. Selection of a Suitable Graphical Method
Each characteristic of the data can only be suitably represented by an appropriate graphical method. For example, to show data related to temperature, or to the growth of population between different periods, line graphs are used. Similarly, bar diagrams are used for showing rainfall or the production of commodities. The population distribution, both human and livestock, or the distribution of crop-producing areas, is shown by dot maps. The population density can be shown by choropleth maps. Thus, it is necessary and important to select a suitable graphical method to represent the data; a brief illustration follows.
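As an illustration of matching the chart type to the data, the following matplotlib sketch plots a time series as a line graph and discrete quantities as a bar diagram; the population and rainfall figures are invented sample values, not real statistics.

```python
# Matching chart type to data: a line graph for a time series, a bar diagram
# for discrete quantities. All figures are invented sample values.
import matplotlib.pyplot as plt

years = [1991, 2001, 2011, 2021]
population = [846, 1029, 1211, 1393]   # millions (illustrative)
months = ["Jun", "Jul", "Aug", "Sep"]
rainfall = [160, 290, 260, 175]        # mm (illustrative)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(years, population, marker="o")
ax1.set_title("Population growth (line graph)")
ax2.bar(months, rainfall)
ax2.set_title("Monthly rainfall (bar diagram)")
plt.tight_layout()
plt.show()
```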


2. Selection of a Suitable Scale
Each diagram or map is drawn to a scale which is used to measure the data. The scale must cover the entire range of the data that is to be represented. The scale should be neither too large nor too small.

5. Strictly speaking, would case studies be considered as scientific research? Why or why not?

Answer: A case study is an intensive analysis of an individual unit (e.g., a person, group, or event) stressing developmental factors in relation to context. The case study is common in the social sciences and the life sciences. Case studies may be descriptive or explanatory. The latter type is used to explore causation in order to find underlying principles. They may be prospective, in which criteria are established and cases fitting the criteria are included as they become available, or retrospective, in which criteria are established for selecting cases from historical records for inclusion in the study.

Thomas[4] offers the following definition of case study: "Case studies are analyses of persons, events, decisions, periods, projects, policies, institutions, or other systems that are studied holistically by one or more methods. The case that is the subject of the inquiry will be an instance of a class of phenomena that provides an analytical frame (an object) within which the study is conducted and which the case illuminates and explicates."

Rather than using samples and following a rigid protocol (strict set of rules) to examine a limited number of variables, case study methods involve an in-depth, longitudinal (over a long period of time) examination of a single instance or event: a case. They provide a systematic way of looking at events, collecting data, analyzing information, and reporting the results. As a result, the researcher may gain a sharpened understanding of why the instance happened as it did, and of what might become important to look at more extensively in future research. Case studies lend themselves to both generating and testing hypotheses.[5]

Another suggestion is that a case study should be defined as a research strategy, an empirical inquiry that investigates a phenomenon within its real-life context. Case study research can involve single or multiple case studies, can include quantitative evidence, relies on multiple sources of evidence, and benefits from the prior development of theoretical propositions. Case studies should not be confused with qualitative research, as they can be based on any mix of quantitative and qualitative evidence. Single-subject research provides the statistical framework for making inferences from quantitative case-study data.[3][6] This is also supported and well formulated in Lamnek (2005): "The case study is a research approach, situated between concrete data-taking techniques and methodologic paradigms."


The case study is sometimes mistaken for the case method, but the two are not the same.

Case selection and structure of the case study An average, or typical, case is often not the richest in information. In clarifying lines of history and causation it is more useful to select subjects that offer an interesting, unusual or particularly revealing set of circumstances. A case selection that is based on representativeness will seldom be able to produce these kinds of insights. When selecting a subject for a case study, researchers will therefore use information-oriented sampling, as opposed to random sampling.[5] Outlier cases (that is, those which are extreme, deviant or atypical) reveal more information than the putatively representative case. Alternatively, a case may be selected as a key case, chosen because of the inherent interest of the case or the circumstances surrounding it. Or it may be chosen because of researchers' in-depth local knowledge; where researchers have this local knowledge they are in a position to soak and poke as Fenno[7] puts it, and thereby to offer reasoned lines of explanation based on this rich knowledge of setting and circumstances. Three types of cases may thus be distinguished: 1. Key cases 2. Outlier cases 3. Local knowledge cases Whatever the frame of reference for the choice of the subject of the case study (key, outlier, local knowledge), there is a distinction to be made between the subject and the object of the case study. The subject is the practical, historical unity [8] through which the theoretical focus of the study is being viewed. The object is that theoretical focus the analytical frame. Thus, for example, if a researcher were interested in US resistance to communist expansion as a theoretical focus, then the Korean War might be taken to be the subject, the lens, the case study through which the theoretical focus, the object, could be viewed and explicated.[9] Beyond decisions about case selection and the subject and object of the study, decisions need to be made about purpose, approach and process in the case study. Thomas[10] thus proposes a typology for the case study wherein purposes are first identified (evaluative or exploratory), then approaches are delineated (theory-testing, theory-building or illustrative), then processes are decided upon, with a principal choice being between whether the study is to be single or multiple, and choices also about whether the study is to be retrospective, snapshot or diachronic, and whether it is nested, parallel or sequential. It is thus possible to take many routes through this typology, with, for example, an exploratory, theory-building, multiple, nested study, or an evaluative, theory-testing, single, retrospective study. The typology thus offers many permutations for case study structure. Generalizing from case studies A critical case can be defined as having strategic importance in relation to the general problem. A critical case allows the following type of generalization, If it is valid for this case, it is valid for all


(or many) cases. In its negative form, the generalization would be, "If it is not valid for this case, then it is not valid for any (or only a few) cases." The case study is also effective for generalizing using the type of test that Karl Popper called falsification, which forms part of critical reflexivity.[5] Falsification is one of the most rigorous tests to which a scientific proposition can be subjected: if just one observation does not fit with the proposition, it is considered not valid generally and must therefore be either revised or rejected. Popper himself used the now famous example of "All swans are white," and proposed that just one observation of a single black swan would falsify this proposition and in this way have general significance and stimulate further investigations and theory-building. The case study is well suited for identifying "black swans" because of its in-depth approach: what appears to be "white" often turns out on closer examination to be "black."

Galileo Galilei's rejection of Aristotle's law of gravity was based on a case study selected by information-oriented sampling and not random sampling. The rejection consisted primarily of a conceptual experiment and later on of a practical one. These experiments, with the benefit of hindsight, are self-evident. Nevertheless, Aristotle's incorrect view of gravity dominated scientific inquiry for nearly two thousand years before it was falsified. In his experimental thinking, Galileo reasoned as follows: if two objects with the same weight are released from the same height at the same time, they will hit the ground simultaneously, having fallen at the same speed. If the two objects are then stuck together into one, this object will have double the weight and will, according to the Aristotelian view, therefore fall faster than the two individual objects. This conclusion seemed contradictory to Galileo. The only way to avoid the contradiction was to eliminate weight as a determinant factor for acceleration in free fall. Galileo's experimentalism did not involve a large random sample of trials of objects falling from a wide range of randomly selected heights under varying wind conditions, and so on. Rather, it was a matter of a single experiment, that is, a case study. Galileo's view continued to be subjected to doubt, however, and the Aristotelian view was not finally rejected until half a century later, with the invention of the air pump. The air pump made it possible to conduct the ultimate experiment, known by every pupil, whereby a coin or a piece of lead inside a vacuum tube falls with the same speed as a feather. After this experiment, Aristotle's view could be maintained no longer. What is especially worth noting, however, is that the matter was settled by an individual case due to the clever choice of the extremes of metal and feather. One might call it a critical case: if Galileo's thesis held for these materials, it could be expected to be valid for all or a large range of materials. Random and large samples were at no time part of the picture. By selecting cases strategically in this manner one may arrive at case studies that allow generalization. For more on generalizing from case studies, see [3].

The case study paradox

Case studies have existed as long as recorded history.
Much of what is known about the empirical world has been produced by case study research, and many of the classics in a long range of disciplines are case studies, including in psychology, sociology, anthropology, history, education,


economics, political science, management, geography, biology, and medical science. Half of all articles in the top political science journals use case studies, for instance. But there is a paradox here, as argued by Oxford professor Bent Flyvbjerg. At the same time that case studies are extensively used and have produced canonical works, one may observe that the case study is generally held in low regard, or is simply ignored, within the academy. Statistics on courses offered in universities confirm this. It has been argued that the case study paradox exists because the case study is widely misunderstood as a research method. Flyvbjerg argues that by clearing the misunderstandings about the case study, the case study paradox may be resolved. The debate regarding case study's value as a research method Flyvbjerg (2006) identified five statements regarding the limitations of case study as a research: 1. General, theoretical knowledge is more valuable than concrete, practical knowledge. 2. One cannot generalize on the basis of an individual case and, therefore, the case study cannot contribute to scientific development. 3. The case study is most useful for generating hypotheses, whereas other methods are more suitable for hypotheses testing and theory building. 4. The case study contains a bias toward verification, i.e., a tendency to confirm the researchers preconceived notions. 5. It is often difficult to summarize and develop general propositions and theories on the basis of specific case studies. These statements can be said to represent the cautionary view of case studies in conventional philosophy of science. Flyvbjerg (2006) argued that these statements are too categorical, and argued for the value of phenomenological insights gleamed by closely examining contextual "expert knowledge". History of the case study It is generally believed that the case-study method was first introduced into social science by Frederic Le Play in 1829 as a handmaiden to statistics in his studies of family budgets. (Les Ouvriers Europeens (2nd edition, 1879).[12] The use of case studies for the creation of new theory in social sciences has been further developed by the sociologists Barney Glaser and Anselm Strauss who presented their research method, Grounded theory, in 1967. The popularity of case studies in testing hypotheses has developed only in recent decades. One of the areas in which case studies have been gaining popularity is education and in particular educational evaluation.[13] Case studies have also been used as a teaching method and as part of professional development, especially in business and legal education. The problem-based learning (PBL) movement is such an


example. When used in (non-business) education and professional development, case studies are often referred to as critical incidents. When the Harvard Business School was started, the faculty quickly realized that there were no textbooks suitable to a graduate program in business. Their first solution to this problem was to interview leading practitioners of business and to write detailed accounts of what these managers were doing. Cases are generally written by business school faculty with particular learning objectives in mind and are refined in the classroom before publication. Additional relevant documentation (such as financial statements, time-lines, and short biographies, often referred to in the case as "exhibits"), multimedia supplements (such as video-recordings of interviews with the case protagonist), and a carefully crafted teaching note often accompany cases. 6. a. Analyse the case study and descriptive approach to research. Answer: Descriptive research does not fit neatly into the definition of either quantitative or qualitative research methodologies, but instead it can utilize elements of both, often within the same study. The term descriptive research refers to the type of research question, design, and data analysis that will be applied to a given topic. Descriptive statistics tell what is, while inferential statistics try to determine cause and effect. The type of question asked by the researcher will ultimately determine the type of approach necessary to complete an accurate assessment of the topic at hand. Descriptive studies, primarily concerned with finding out "what is," might be applied to investigate the following questions: Do teachers hold favorable attitudes toward using computers in schools? What kinds of activities that involve technology occur in sixth-grade classrooms and how frequently do they occur? What have been the reactions of school administrators to technological innovations in teaching the social sciences? How have high school computing courses changed over the last 10 years? How do the new multimediated textbooks compare to the print-based textbooks? How are decisions being made about using Channel One in schools, and for those schools that choose to use it, how is Channel One being implemented? What is the best way to provide access to computer equipment in schools? How should instructional designers improve software design to make the software more appealing to students? To what degree are special-education teachers well versed concerning assistive technology? Is there a relationship between experience with multimedia computers and problemsolving skills? How successful is a certain satellite-delivered Spanish course in terms of motivational value and academic achievement? Do teachers actually implement technology in the way they perceive? How many people use the AECT gopher server, and what do they use if for? Descriptive research can be either quantitative or qualitative. It can involve collections of quantitative information that can be tabulated along a continuum in numerical form, such as scores on a test or the number of times a person chooses to use a-certain feature of a multimedia program, or it can describe categories of information such as gender or patterns of interaction when using technology in a group situation. Descriptive research involves gathering data that describe events and then organizes, tabulates, depicts, and describes the data collection (Glass & Hopkins, 1984). 
It often uses visual aids such as graphs and charts to aid the reader in understanding the data


distribution. Because the human mind cannot extract the full import of a large mass of raw data, descriptive statistics are very important in reducing the data to manageable form. When in-depth, narrative descriptions of small numbers of cases are involved, the research uses description as a tool to organize data into patterns that emerge during analysis. Those patterns aid the mind in comprehending a qualitative study and its implications. Most quantitative research falls into two areas: studies that describe events and studies aimed at discovering inferences or causal relationships. Descriptive studies are aimed at finding out "what is," so observational and survey methods are frequently used to collect descriptive data (Borg & Gall, 1989). Studies of this type might describe the current state of multimedia usage in schools or patterns of activity resulting from group work at the computer. An example of this is Cochenour, Hakes, and Neal's (1994) study of trends in compressed video applications with education and the private sector. Descriptive studies report summary data such as measures of central tendency including the mean, median, mode, deviance from the mean, variation, percentage, and correlation between variables. Survey research commonly includes that type of measurement, but often goes beyond the descriptive statistics in order to draw inferences. See, for example, Signer's (1991) survey of computer-assisted instruction and at-risk students, or Nolan, McKinnon, and Soler's (1992) research on achieving equitable access to school computers. Thick, rich descriptions of phenomena can also emerge from qualitative studies, case studies, observational studies, interviews, and portfolio assessments. Robinson's (1994) case study of a televised news program in classrooms and Lee's (1994) case study about identifying values concerning school restructuring are excellent examples of case studies. Descriptive research is unique in the number of variables employed. Like other types of research, descriptive research can include multiple variables for analysis, yet unlike other methods, it requires only one variable (Borg & Gall, 1989). For example, a descriptive study might employ methods of analyzing correlations between multiple variables by using tests such as Pearson's Product Moment correlation, regression, or multiple regression analysis. Good examples of this are the Knupfer and Hayes (1994) study about the effects of the Channel One broadcast on knowledge of current events, Manaev's (1991) study about mass media effectiveness, McKenna's (1993) study of the relationship between attributes of a radio program and it's appeal to listeners, Orey and Nelson's (1994) examination of learner interactions with hypermedia environments, and Shapiro's (1991) study of memory and decision processes. On the other hand, descriptive research might simply report the percentage summary on a single variable. Examples of this are the tally of reference citations in selected instructional design and technology journals by Anglin and Towers (1992); Barry's (1994) investigation of the controversy surrounding advertising and Channel One; Lu, Morlan, Lerchlorlarn, Lee, and Dike's (1993) investigation of the international utilization of media in education (1993); and Pettersson, Metallinos, Muffoletto, Shaw, and Takakuwa's (1993) analysis of the use of verbo-visual information in teaching geography in various countries.
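The summary measures mentioned above (mean, median, mode, variation and correlation) can be computed in a few lines; the sketch below uses Python's standard statistics module (its correlation function requires Python 3.10 or later) on made-up scores, purely as an illustration.

```python
# Descriptive summary measures on invented data; values are illustrative only.
import statistics as st

test_scores = [62, 71, 71, 58, 85, 90, 67]
weekly_computer_hours = [2, 3, 3, 1, 5, 6, 2]

print("mean:", round(st.mean(test_scores), 2))
print("median:", st.median(test_scores))
print("mode:", st.mode(test_scores))
print("standard deviation:", round(st.stdev(test_scores), 2))
print("correlation:", round(st.correlation(test_scores, weekly_computer_hours), 2))
```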


Descriptive statistics utilize data collection and analysis techniques that yield reports concerning the measures of central tendency, variation, and correlation. The combination of its characteristic summary and correlational statistics, along with its focus on specific types of research questions, methods, and outcomes is what distinguishes descriptive research from other research types. Three main purposes of research are to describe, explain, and validate findings. Description emerges following creative exploration, and serves to organize the findings in order to fit them with explanations, and then test or validate those explanations (Krathwohl, 1993). Many research studies call for the description of natural or man-made phenomena such as their form, structure, activity, change over time, relation to other phenomena, and so on. The description often illuminates knowledge that we might not otherwise notice or even encounter. Several important scientific discoveries as well as anthropological information about events outside of our common experiences have resulted from making such descriptions. For example, astronomers use their telescopes to develop descriptions of different parts of the universe, anthropologists describe life events of socially atypical situations or cultures uniquely different from our own, and educational researchers describe activities within classrooms concerning the implementation of technology. This process sometimes results in the discovery of stars and stellar events, new knowledge about value systems or practices of other cultures, or even the reality of classroom life as new technologies are implemented within schools. Educational researchers might use observational, survey, and interview techniques to collect data about group dynamics during computer-based activities. These data could then be used to recommend specific strategies for implementing computers or improving teaching strategies. Two excellent studies concerning the role of collaborative groups were conducted by Webb (1982), and Rysavy and Sales (1991). Noreen Webb's landmark study used descriptive research techniques to investigate collaborative groups as they worked within classrooms. Rysavy and Sales also apply a descriptive approach to study the role of group collaboration for working at computers. The Rysavy and Sales approach did not observe students in classrooms, but reported certain common findings that emerged through a literature search. Descriptive studies have an important role in educational research. They have greatly increased our knowledge about what happens in schools. Some of the important books in education have reported studies of this type: Life in Classrooms, by Philip Jackson; The Good High School, by Sara Lawrence Lightfoot; Teachers and Machines: The Classroom Use of Technology Since 1920, by Larry Cuban; A Place Called School, by John Goodlad; Visual Literacy: A Spectrum of Learning, by D. M. Moore and Dwyer; Computers in Education: Social, Political, and Historical Perspectives, by Muffoletto and Knupfer; and Contemporary Issues in American Distance Education, by M. G. Moore. Henry J. Becker's (1986) series of survey reports concerning the implementation of computers into schools across the United States as well as Nancy Nelson Knupfer's (1988) reports about teacher's opinions and patterns of computer usage also fit partially within the realm of descriptive research. 
Both studies describe categories of data and use statistical analysis to examine correlations between specific variables. Both also go beyond the bounds of descriptive research and conduct further statistical procedures appropriate to their research questions, thus enabling them to make further


recommendations about implementing computing technology in ways to support grassroots change and equitable practices within the schools. Finally, Knupfer's study extended the analysis and conclusions in order to yield suggestions for instructional designers involved with educational computing. The Nature of Descriptive Research The descriptive function of research is heavily dependent on instrumentation for measurement and observation (Borg & Gall, 1989). Researchers may work for many years to perfect such instrumentation so that the resulting measurement will be accurate, reliable, and generalizable. Instruments such as the electron microscope, standardized tests for various purposes, the United States census, Michael Simonson's questionnaires about computer usage, and scores of thoroughly validated questionnaires are examples of some instruments that yield valuable descriptive data. Once the instruments are developed, they can be used to describe phenomena of interest to the researchers. The intent of some descriptive research is to produce statistical information about aspects of education that interests policy makers and educators. The National Center for Education Statistics specializes in this kind of research. Many of its findings are published in an annual volume called Digest of Educational Statistics. The center also administers the National Assessment of Educational Progress (NAEP), which collects descriptive information about how well the nation's youth are doing in various subject areas. A typical NAEP publication is The Reading Report Card, which provides descriptive information about the reading achievement of junior high and high school students during the past 2 decades. On a larger scale, the International Association for the Evaluation of Education Achievement (IEA) has done major descriptive studies comparing the academic achievement levels of students in many different nations, including the United States (Borg & Gall, 1989). Within the United States, huge amounts of information are being gathered continuously by the Office of Technology Assessment, which influences policy concerning technology in education. As a way of offering guidance about the potential of technologies for distance education, that office has published a book called Linking for Learning: A New Course for Education, which offers descriptions of distance education and its potential. There has been an ongoing debate among researchers about the value of quantitative (see 40.1.2) versus qualitative research, and certain remarks have targeted descriptive research as being less pure than traditional experimental, quantitative designs. Rumors abound that young researchers must conduct quantitative research in order to get published in Educational Technology Research and Development and other prestigious journals in the field. One camp argues the benefits of a scientific approach to educational research, thus preferring the experimental, quantitative approach, while the other camp posits the need to recognize the unique human side of educational research questions and thus prefers to use qualitative research methodology. Because descriptive research spans both quantitative and qualitative methodologies, it brings the ability to describe events in greater or less depth as needed, to focus on various elements of different research techniques, and to engage


quantitative statistics to organize information in meaningful ways. The citations within this chapter provide ample evidence that descriptive research can indeed be published in prestigious journals. Descriptive studies can yield rich data that lead to important recommendations. For example, Galloway (1992) bases recommendations for teaching with computer analogies on descriptive data, and Wehrs (1992) draws reasonable conclusions about using expert systems to support academic advising. On the other hand, descriptive research can be misused by those who do not understand its purpose and limitations. For example, one cannot try to draw conclusions that show cause and effect, because that is beyond the bounds of the statistics employed. Borg and Gall (1989) classify the outcomes of educational research into the four categories of description, prediction, improvement, and explanation. They say that descriptive research describes natural or man-made educational phenomena that is of interest to policy makers and educators. Predictions of educational phenomenon seek to determine whether certain students are at risk and if teachers should use different techniques to instruct them. Research about improvement asks whether a certain technique does something to help students learn better and whether certain interventions can improve student learning by applying causal-comparative, correlational, and experimental methods. The final category of explanation posits that research is able to explain a set of phenomena that leads to our ability to describe, predict, and control the phenomena with a high level of certainty and accuracy. This usually takes the form of theories. The methods of collecting data for descriptive research can be employed singly or in various combinations, depending on the research questions at hand. Descriptive research often calls upon quasi-experimental research design (Campbell & Stanley, 1963). Some of the common data collection methods applied to questions within the realm of descriptive research include surveys, interviews, observations, and portfolios. b. Distinguish between research methods & research Methodology. Answer: Research Methods vs Research Design

For those pursuing research in any field of study, both research methods and research design hold great significance. There are many research methods that provide a loose framework or guidelines to conduct a research project. One has to choose a method that suits the requirements of the project and the researcher is comfortable with. On the other hand, research design is the specific framework within which a project is pursued and completed. Many remain confused about the differences between research methods and research design. This article will differentiate between the two and make it easier for research students. It is a fact that despite there being scores of research methods, not every method can perfectly match a particular research project. There are qualitative as well as quantitative research methods. These are generalized outlines that provide a framework and the choice is narrowed down depending upon the area of research that you have chosen. Now that you have selected a particular research method, you need to apply it in the best possible manner to your project. Research design refers to the blue


print that you prepare using the research method chosen and it delineates the steps that you need to take. Research design thus tells what is to be done at what time. Research design tells how the goals of a research project can be accomplished. Key features of any research design are methodology, collection and assignment of samples, collection and analysis of data along with procedures and instruments to be used. If one is not careful enough while choosing a research design and a research method, the results obtained from a research project may not be satisfactory or may be anomalous. In such a situation, because of a flaw in the research design you may have to look for alternative research methods which would necessitate changes in your research design as well.
