
ASSIGNMENT

NAME : RAVINDER KUMAR

ROLL NUMBER : 521045814

COURSE: MBA

SEM : III

SPECIALIZATION : RETAIL OPERATIONS MANAGEMENT

LEARNING CENTER CODE : 02009

LEARNING CENTER NAME : APAR INDIA COLLEGE of Management & Technology

SUBJECT :

Research Methodology

SUBJECT CODE :

MB0050

ASSIGNMENT NO : SET 1

Q.1(a). Differentiate between nominal, ordinal, interval and ratio scales, with an example of each.

Answer: The "levels of measurement", or scales of measure, are expressions that typically refer to the theory of scale types developed by the psychologist Stanley Smith Stevens. Stevens proposed his theory in a 1946 Science article titled "On the theory of scales of measurement". In that article, Stevens claimed that all measurement in science was conducted using four different types of scales that he called "nominal", "ordinal", "interval" and "ratio".

The theory of scale types: Stevens (1946, 1951) proposed that measurements can be classified into four different types of scales. These are shown in the table below as nominal, ordinal, interval and ratio.

Scale type | Permissible statistics | Admissible scale transformation | Mathematical structure
Nominal (also denoted as categorical) | mode, chi-square | one-to-one (equality (=)) | standard set structure (unordered)
Ordinal | median, percentile | monotonic increasing (order (<)) | totally ordered set
Interval | mean, standard deviation, correlation, regression, analysis of variance | positive linear (affine) | affine line
Ratio | all statistics permitted for interval scales plus: geometric mean, harmonic mean, coefficient of variation, logarithms | positive similarities (multiplication) | field
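To illustrate which summary statistics are permissible at each level, here is a minimal Python sketch; the names and sample values used are invented purely for this example, not taken from the assignment.

from statistics import mode, median, mean

# Nominal data: labels only, so the mode is the strongest summary we can report.
first_names = ["Aamir", "Rita", "Aamir", "John", "Rita", "Aamir"]
print("Mode of names:", mode(first_names))

# Ordinal data: ranks carry order but not distance, so the median is appropriate.
satisfaction = [1, 2, 2, 3, 4, 5, 5]   # 1 = "very unsatisfied" ... 5 = "very satisfied"
print("Median satisfaction rank:", median(satisfaction))

# Interval data: differences are meaningful, so the mean (and standard deviation) apply.
temperatures_celsius = [18.5, 21.0, 19.5, 22.5, 20.0]
print("Mean temperature (deg C):", mean(temperatures_celsius))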

Nominal Scale: At the nominal scale, i.e., for a nominal category, one uses labels; for example, rocks can be generally categorized as igneous, sedimentary and metamorphic. For this scale, some valid operations are equivalence and set membership. Nominal measures offer names or labels for certain characteristics. We can use a simple example of a nominal category: first names. Looking at nearby people, we might find one or more of them named Aamir. Aamir is their label, and the set of all first names is a nominal scale. We can only check whether two people have the same name (equivalence) or whether a given name is on a certain list of names (set membership), but it is impossible to say which name is greater or less than another (comparison) or to measure the difference between two names.

Ordinal Scale: Rank-ordering data simply puts the data on an ordinal scale. Ordinal measurements describe order, but not relative size or degree of difference between the items measured. In this scale type, the numbers assigned to objects or events represent the rank order (1st, 2nd, 3rd, etc.) of the entities assessed. A Likert scale is a type of ordinal scale and may also use names with an order such as: "bad", "medium", and "good"; or "very satisfied", "satisfied", "neutral", "unsatisfied", "very unsatisfied". A simple example follows:

Cook | Judge's score x | Score minus 8 (x - 8) | Tripled score (3x) | Cubed score (x^3)
Alice | 10 | 2 | 30 | 1000
Bob | 9 | 1 | 27 | 729
Claire | 8.5 | 0.5 | 25.5 | 614.125
Dana | 8 | 0 | 24 | 512
Edgar | 5 | -3 | 15 | 125

Since x - 8, 3x, and x^3 are all monotonically increasing functions, replacing the ordinal judge's score by any of these alternative scores does not affect the relative ranking of the five people's cooking abilities. Each column of numbers is an equally legitimate ordinal scale for describing their abilities.

Interval Scale: Quantitative attributes are all measurable on interval scales, as any difference between the levels of an attribute can be multiplied by any real number to exceed or equal another difference. A highly familiar example of interval scale measurement is temperature on the Celsius scale. In this particular scale, the unit of measurement is 1/100 of the temperature difference between the freezing and boiling points of water under a pressure of 1 atmosphere. The "zero point" on an interval scale is arbitrary, and negative values can be used. The formal mathematical term is an affine space. The central tendency of a variable measured at the interval level can be represented by its mode, its median, or its arithmetic mean. Statistical dispersion can be measured in most of the usual ways that involve only differences or averaging, such as range, interquartile range, and standard deviation. Since one cannot divide, one cannot define measures that require a ratio. One can define standardized moments, since ratios of differences are meaningful, but one cannot define the coefficient of variation, since the mean is a moment about the origin, unlike the standard deviation, which is (the square root of) a central moment.

Q.1(b). What are the purposes of measurement in social science research?

Answer: Three Purposes of Research. Three of the most influential and common purposes of research are exploration, description and explanation. Exploration involves familiarizing a researcher with a topic. Exploration satisfies the researcher's curiosity and desire for improved understanding, tests the feasibility of undertaking a more extensive study, and helps develop the methods that will be used in a study. Description involves describing situations and events through scientific observation. Scientific descriptions are typically more accurate and precise than casual ones. For example, the U.S. Census uses

descriptive social research in its examination of the characteristics of the U.S. population. Descriptive studies answer questions of what, where, when, and how; explanation involves answering questions of why. For example, an explanatory analysis of the 2002 General Social Survey (GSS) data indicates that 38 percent of men and 30 percent of women said marijuana should be legalized, while 55 percent of liberals and 27 percent of conservatives said the same. Given these statistics, you could start to develop an explanation for attitudes toward marijuana legalization. In addition, further study of gender and political orientation could lead to a deeper explanation of this issue.

Q.2.(a). What are the sources from which one may be able to identify research problems?

Answer: Sources of Problems

Curiosity: One of the oldest and most common sources of research problems is curiosity. Just as your interest in baseball or gardening may stimulate you to investigate the topic in greater depth, so researchers may investigate phenomena that attract their personal interest. For example, a man named Lipset belonged to the International Typographical Union, and his son's well-known study of democratic decision-making processes was specifically concerned with that organization. The younger Lipset was curious because the union was very large and, according to contemporary social thinking, should have succumbed to an oligarchic decision-making process (rule of the many by the few), yet it appeared to be very democratic.

Significant Others: While personal interests or curiosity may be key motivating factors in problem selection, it is a basic assumption in the social sciences that individuals learn from and are influenced by others. Because researchers are usually recruited and trained in universities, the selection of research topics often reflects the influence of teachers or fellow students.

Social Problems: In addition to personal curiosity and the influence of others, concern with social problems has been a major source of social research. One of the first uses of social surveys was to study poverty. Likewise, concerns with the Nazi extermination of 6 million Jews and with discrimination have generated a vast body of psychological and social-psychological research on ethnic, gender, sexual orientation, and racial prejudice.

Q.2.(b). Why is a literature survey important in research?

Answer: Research is carried out in order to inform people of new knowledge or discoveries. However, it cannot be expected that everybody will willingly believe what you are tackling in your research paper. Thus, what you can do to make your research more credible is to support it with other works which have spoken about the same topic as your research. This is where the literature review comes in. There are many reasons why the literature review is regarded as a significant part of any research or dissertation paper. You may ask what makes it so if it is only supposed to contain tidbits of other related works. The literature review is the part of the paper where the researcher is given the opportunity to strengthen the paper, because it cites what other reliable authors have said about the topic. This proves that you are not just writing about any random subject, but that many others have also poured their thoughts into the topic. You may also ask what makes the literature review a necessary part of the paper. This question can be answered by trying to leave the review out of your paper. Obviously, this affects the length of your paper, but that is not the noticeable part. What would most certainly be lacking is that your paper, without the literature review, contains only your own opinions about the facts that you have discovered through your research. How, then, can you convince the readers, in this case the committee who will scrutinize your paper? This is the need that is answered only by the literature review. The mere fact that you reference and cite what more credible people have said about the topic builds a stronger foundation for your paper. With a literature review, you need to establish a clear tie between the works that you have cited and the topic that you are writing about. You should be able to justify the inclusion of a certain work in your review so as to make everything that you have written useful. The more useless points you include in your paper, the more the committee will think that you have not put much thought into your paper. The literature review is also unique from the rest of the paper. While you have to fill most of the paper with your own analysis, in the literature review alone you will write purely about the related works of other people.

Q.3.(a). What are the characteristics of a good research design?

Answer: Characteristics of Research Design. Generally, a good research design minimizes bias and maximizes the reliability of the data collected and analyzed. The design which gives the smallest experimental error is

reported to be the best design in scientific investigation. Similarly, a design which yields maximum information and provides an opportunity for considering different aspects of a problem is considered to be the most appropriate and efficient design.

1. Objectivity: It refers to the findings related to the method of data collection and scoring of the responses. The research design should permit the use of measuring instruments which are fairly objective, in which every observer or judge scoring the performance must give precisely the same report. In other words, the objectivity of the procedure may be judged by the degree of agreement between the final scores assigned to different individuals by more than one independent observer. This ensures the objectivity of the collected data, which shall be capable of analysis and of supporting generalizations.

2. Reliability: Reliability refers to consistency throughout a series of measurements. For example, if a respondent gives a response to a particular item, he is expected to give the same response to that item even if he is asked repeatedly. If he changes his response to the same item, consistency will be lost. So the researcher should frame the items in a questionnaire in such a way that it provides consistency or reliability.

3. Validity: Any measuring device or instrument is said to be valid when it measures what it is expected to measure. For example, an intelligence test conducted for measuring I.Q. should measure only intelligence and nothing else, and the questionnaire shall be framed accordingly.

4. Generalizability: It means how well the data collected from the samples can be utilized for drawing certain generalizations applicable to the larger group from which the sample is drawn. Thus a research design helps an investigator to generalize his findings provided he has taken due care in defining the population, selecting the sample, deriving the appropriate statistical analysis, etc. while preparing the research design.

Q.3.(b). What are the components of a research design?

Answer: Working Design - a preliminary plan for beginning a qualitative research project:
1) subjects to be studied
2) sites to be studied
3) time frame for data collection
4) possible variables to be considered

Purposeful Sampling - a nonrandom sampling design that selects subjects or sites due to specific characteristics or phenomena under study.

Working Hypotheses - hypotheses regarding possible outcomes that guide the research study (also see foreshadowed problems).

Data Collection - forms of data collection vary widely depending on the qualitative research design employed:
o the researcher must be able to gain access to the situation
o the researcher must decide on his/her role in data collection
  - participant-observer or observer only
  - interactive or noninteractive data collection
o the researcher must decide upon the format for data collection
  - interviews
  - observations

Specimen Record - a narrative description of one person in a natural situation as seen by skilled observers over a substantial period of time:
1) record the stream of behaviour
2) divide the stream into units
3) analyze the units

Oral History - interviews conducted with the use of a tape recorder:
1) the entire conversation is recorded
2) allows examination of inflections
3) emphasizes open-ended questions
4) analyzed by listening to the tape rather than transcribing it

Data Analysis and Interpretation
1) Data Analysis - begins soon after data collection begins, to allow verification, refinement, or restatement of working hypotheses, foreshadowed problems, or initial theories.
2) Qualitative data analysis is a series of successive approximations toward an accurate description and interpretation of the phenomena under study.

Coding - the process of data reduction through data organization; it allows the researcher to "see what's there" for purposes of sorting/categorization:
1) Comparison with initial hypotheses/problem statements (support or contradict earlier ideas)
2) Refinement of design/hypotheses
3) Being able to accurately capture information relevant to the research problem
4) Ensuring the information captured is useful in describing and understanding the phenomenon being studied

Coding categories:
1) Are influenced by the purpose and context of the study
2) Are specific to the study
3) May be determined before or after data collection and review
4) Need not be mutually exclusive

Possible Codes:
o Setting/Context Codes - reflect the context or setting in which the phenomena under study occur (e.g. several different settings - high school, vocational school, etc. - included under "school environment")
o Process Codes - focus on the sequence of events and how changes occur during the study (e.g. different ways in which students go about dropping out of school)

Q.4.(a). Distinguish between double sampling and multiphase sampling.

Ans: How a double sampling plan works: Double and multiple sampling plans were invented to give a questionable lot another chance. For example, if in double sampling the results of the first sample are not conclusive with regard to accepting or rejecting, a second sample is taken. Application of double sampling requires that a first sample of size n1 is taken at random from the (large) lot. The number of defectives d1 is then counted and compared with the first-sample acceptance number a1 and rejection number r1: if d1 ≤ a1 the lot is accepted, if d1 ≥ r1 the lot is rejected, and if a1 < d1 < r1 a second sample of size n2 is taken. Denote the number of defectives in sample 2 by d2 and the cumulative total by D2 = d1 + d2. D2 is then compared with the acceptance number a2 and the rejection number r2 of sample 2. In double sampling, r2 = a2 + 1 to ensure a decision on the second sample: if D2 ≤ a2, the lot is accepted; if D2 ≥ r2, the lot is rejected. This decision rule is sketched in code below.
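A minimal Python sketch of the double-sampling decision rule described above; the plan parameters (a1, r1, a2, r2) and the defective counts used in the example calls are hypothetical, chosen only for illustration.

def double_sampling_decision(d1, a1, r1, d2=None, a2=None, r2=None):
    """Return 'accept', 'reject', or 'take second sample' for a double sampling plan."""
    if d1 <= a1:
        return "accept"              # first sample already good enough
    if d1 >= r1:
        return "reject"              # first sample already bad enough
    if d2 is None:
        return "take second sample"  # a1 < d1 < r1: decision deferred to sample 2
    D2 = d1 + d2                     # cumulative defectives over both samples
    return "accept" if D2 <= a2 else "reject"   # r2 = a2 + 1 forces a decision here

# Hypothetical plan: a1 = 1, r1 = 4 on the first sample; a2 = 4, r2 = 5 overall
print(double_sampling_decision(d1=2, a1=1, r1=4))                    # -> take second sample
print(double_sampling_decision(d1=2, a1=1, r1=4, d2=1, a2=4, r2=5))  # -> accept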

Design of a double sampling plan: The parameters required to construct the OC curve are similar to the single-sample case. The two points of interest are (p1, 1 - α) and (p2, β), where p1 is the lot fraction defective at which the probability of acceptance should be 1 - α, and p2 is the lot fraction defective at which the probability of acceptance should be β. As far as the respective sample sizes are concerned, the second sample size must be equal to, or an even multiple of, the first sample size. There exist a variety of tables that assist the user in constructing double and multiple sampling plans. The index to these tables is the p2/p1 ratio, where p2 > p1. One set of tables, taken from the Army Chemical Corps Engineering Agency for α = .05 and β = .10, is given below:

Table for n1 = n2 (acceptance numbers c1 and c2, with approximate values of pn1 for P = .95 and P = .10):

p2/p1 | c1 | c2 | pn1 (P = .95) | pn1 (P = .10)
11.90 | 0 | 1 | 0.21 | 2.50
7.54 | 1 | 2 | 0.52 | 3.92
6.79 | 0 | 2 | 0.43 | 2.96
5.39 | 1 | 3 | 0.76 | 4.11
4.65 | 2 | 4 | 1.16 | 5.39
4.25 | 1 | 4 | 1.04 | 4.42
3.88 | 2 | 5 | 1.43 | 5.55
3.63 | 3 | 6 | 1.87 | 6.78
3.38 | 2 | 6 | 1.72 | 5.82
3.21 | 3 | 7 | 2.15 | 6.91
3.09 | 4 | 8 | 2.62 | 8.10
2.85 | 4 | 9 | 2.90 | 8.26
2.60 | 5 | 11 | 3.68 | 9.56
2.44 | 5 | 12 | 4.00 | 9.77
2.32 | 5 | 13 | 4.35 | 10.08
2.22 | 5 | 14 | 4.70 | 10.45
2.12 | 5 | 16 | 5.39 | 11.41

Tables for n2 = 2n1

Multiple sampling:


Multiple sampling is an extension of double sampling. It involves inspection of 1 to k successive samples as required to reach an ultimate decision. Mil-Std 105D suggests k = 7 is a good number. Multiple sampling plans are usually presented in tabular form:

Procedure for multiple sampling

The procedure commences with taking a random sample of size n1 from a large lot of size N and counting the number of defectives, d1. If d1 ≤ a1, the lot is accepted. If d1 ≥ r1, the lot is rejected. If a1 < d1 < r1, another sample is taken. If subsequent samples are required, the first-sample procedure is repeated sample by sample. For each sample, the total number of defectives found at any stage, say stage i, is

Di = d1 + d2 + ... + di

This is compared with the acceptance number ai and the rejection number ri for that stage until a decision is made. Sometimes acceptance is not allowed at the early stages of multiple sampling; however, rejection can occur at any stage.
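A compact Python sketch of this stage-by-stage rule follows; the acceptance and rejection numbers in the example call are hypothetical and are not taken from Mil-Std 105D.

def multiple_sampling_decision(defectives_per_sample, acceptance_numbers, rejection_numbers):
    """Apply the multiple sampling rule stage by stage.

    At stage i the cumulative defectives Di = d1 + ... + di are compared with
    the acceptance number ai and the rejection number ri for that stage.
    Returns 'accept', 'reject', or 'undecided' if the samples run out.
    """
    D = 0
    for d, a, r in zip(defectives_per_sample, acceptance_numbers, rejection_numbers):
        D += d                 # cumulative defectives Di
        if D <= a:
            return "accept"
        if D >= r:
            return "reject"
    return "undecided"

# Hypothetical 3-stage plan
print(multiple_sampling_decision([1, 1, 0],
                                 acceptance_numbers=[0, 1, 2],
                                 rejection_numbers=[3, 4, 4]))   # -> accept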

Q.4.(b). What is replicated or interpenetrating sampling?

Answer: The experiment should be repeated more than once. Thus, each treatment is applied in many experimental units instead of one. By doing so, the statistical accuracy of the experiment is increased. For example, suppose we are to examine the effect of two varieties of rice. For this purpose we may divide the field into two parts, grow one variety in one part and the other variety in the other part, compare the yields of the two parts, and draw a conclusion on that basis. But if we apply the principle of replication to this experiment, we first divide the field into several parts, grow one variety in half of these parts and the other variety in the remaining parts. We can then collect the yield data for the two varieties and draw a conclusion by comparing them. The result so obtained will be more reliable than the conclusion we would draw without applying the principle of replication. The entire experiment can even be repeated several times for better results. Conceptually, replication does not present any difficulty, but computationally it does. However, it should be remembered that replication is introduced in order to increase the precision of a study, that is to say, to increase the accuracy with which the main effects and interactions can be estimated.
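The gain in precision from replication can be illustrated with a short Python simulation; the yield figures, plot counts and variability used below are invented purely for illustration.

import random
import statistics

random.seed(1)

def simulated_yield(mean_yield):
    """One plot's yield: true variety mean plus random plot-to-plot variation."""
    return random.gauss(mean_yield, 5.0)

def estimated_difference(replicates):
    """Estimate the yield difference between variety A (mean 50) and variety B (mean 47)."""
    a = [simulated_yield(50) for _ in range(replicates)]
    b = [simulated_yield(47) for _ in range(replicates)]
    return statistics.mean(a) - statistics.mean(b)

for n in (1, 5, 25):
    estimates = [estimated_difference(n) for _ in range(1000)]
    print(f"{n:2d} replicates: spread of the estimated difference = {statistics.stdev(estimates):.2f}")

The spread of the estimated difference shrinks roughly in proportion to the square root of the number of replicates, which is the sense in which replication increases precision.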

Q.5.(a). How is secondary data useful to a researcher?

Answer: Secondary data is classified in terms of its source: either internal or external. Internal, or in-house, data is secondary information acquired within the organization where research is being carried out. External secondary data is obtained from outside sources. The two major advantages of using secondary data in market research are time and cost savings.

The secondary research process can be completed rapidly, generally in two to three weeks. Substantial useful secondary data can be collected in a matter of days by a skillful analyst. When secondary data is available, the researcher need only locate the source of the data and extract the required information. Secondary research is generally less expensive than primary research. The bulk of secondary research data gathering does not require the use of expensive, specialized, highly trained personnel. Secondary research expenses are incurred by the originator of the information.

Internal data sources: Internal secondary data is usually an inexpensive information source for the company conducting research, and is the place to start for existing operations. Internally generated sales and pricing data can be used as a research source. This data can be used to define the competitive position of the firm, to evaluate a marketing strategy the firm has used in the past, or to gain a better understanding of the company's best customers.

1. Sales and marketing reports. These can include such things as:

o Type of product/service purchased
o Type of end-user/industry segment
o Method of payment
o Product or product line
o Sales territory
o Salesperson
o Date of purchase
o Amount of purchase
o Price
o Application by product
o Location of end-user

2. Accounting and financial records. These are often an overlooked source of internal secondary information and can be invaluable in the identification, clarification and prediction of certain problems. Accounting records can be used to evaluate the success of various marketing strategies, such as revenues from a direct marketing campaign.

External data sources: There is a wealth of statistical and research data available today. Some sources are:

o Federal government
o Provincial/state governments
o Statistics agencies
o Trade associations
o General business publications
o Magazine and newspaper articles
o Annual reports
o Academic publications
o Library sources
o Computerized bibliographies
o Syndicated services

Q.5.(b). What are the criteria used for evaluation of secondary data?

Answer: When a researcher wants to use secondary data for his research, he should evaluate them before deciding to use them.

1. Data Pertinence: The first consideration in evaluation is to examine the pertinence of the available secondary data to the research problem under study. The following questions should be considered:

a) What are the definitions and classifications employed? Are they consistent?

b) What are the measurements of variables used? What is the degree to which they conform to the requirements of our research?

2. Data Quality: If the researcher is convinced that the available secondary data suit his needs, the next step is to examine the quality of the data. The quality of data refers to their accuracy, reliability and completeness. The accuracy and reliability of the available secondary data depend on the organization which collected them and the purpose for which they were collected. What is the authority and prestige of the organization? Is it well recognized? Is it noted for reliability? Is it capable of collecting reliable data? Does it use trained and well-qualified investigators?

3. Data Completeness: Completeness refers to the actual coverage of the published data. This depends on the methodology and sampling design adopted by the original organization. Is the methodology sound? Is the sample size small or large? Is the sampling method appropriate?

Q.6. What are the differences between observation and interviewing as methods of data collection? Give two specific examples of situations where either observation or interviewing would be more appropriate.

Answer: Interviews

In interviews, information is obtained through inquiry and recorded by enumerators. Structured interviews are performed using survey forms, whereas open interviews are notes taken while talking with respondents. The notes are subsequently structured (interpreted) for further analysis. Open-ended interviews, which need to be interpreted and analysed even during the interview, have to be carried out by well-trained observers and/or enumerators.

Open-ended interviews: Open-ended interviews cover a variety of data-gathering activities, including a number of social science research methods:
o Focus groups
o Panel surveys
o Structured interviews
o The interview approach for sample catch, effort and prices
o The interview approach for boat/gear activities

Direct observations:
o Observers
o At-sea observers
o Observers at landing sites, processing plants and markets

Complete landing of all catch in relation to a vessel's trip (i.e. emptying of holds) is preferred, since records can then be matched against logsheets. However, in some circumstances off-loading in harbours, at the dock or at sea may only be partial, some catch being retained on board until the next off-loading. In this case, records should be maintained of both the catch landed and that retained on board.

ASSIGNMENT

NAME : RAVINDER KUMAR

ROLL NUMBER : 521045814

COURSE: MBA

SEM : III

SPECIALIZATION : RETAIL OPERATIONS MANAGEMENT

LEARNING CENTER CODE : 02009

LEARNING CENTER NAME : APAR INDIA COLLEGE of Management & Technology

SUBJECT :

Research Methodology

SUBJECT CODE :

MB0050

ASSIGNMENT NO : SET 2

Q.1.(a). Explain the general characteristics of observation.

Answer: Every moment we are exposed to some kind of events or occurrences. If we try to frame a definite opinion about the events we come across, we have to observe the instances keenly. This shows that three factors are involved in an observation: there must be some object to be observed, the sense organs to observe the object, and the mind to become aware of it. This process is repeated several times in order to arrive at a conclusion.

Characteristics:
1) Observation is a case of regulated perception of events. Observations are made with the help of the sense organs, so observation is basically perceptual. Perception may be either external or internal. Perception of natural events or occurrences is external perception. To know something directly by introspection, without using the sense organs, is called internal perception; feelings of sorrow, joy, happiness, etc. are internal perceptions. The vast expanse of nature is present before us, and every moment we come across some event of nature. When similar types of events are observed repeatedly, one feels the need to find an explanation for the functioning of nature. That helps us to distinguish random or casual perception from regulated perception.

2) Observation should be systematic and selective. Observation excludes cases of careless and stray perception. When the purpose of observation is decided, we select those instances which have relevance to that purpose. Suppose we want to observe the colour of crows. A prejudiced mind cannot make the observation neutral: if a person is biased, then his observation will not be true or objective. Joyce has pointed out that very often observations are not free from subjective influences. There can be three types of subjective influences on the observer, namely intellectual, physical and moral. For example, germs are not visible to the naked eye, many stars and planets are not visible to us, and a colour-blind man cannot observe colours perfectly.

Q.1.(b). What is the utility of observation in business research?

Answer: If our goal is to find an idea or business opportunity, we may need information on the unmet needs of consumers, the most popular products, the most profitable businesses, new tastes, new fashions, etc. In this case, using the observational technique would mean going to markets or shopping areas and seeing which products or services are the most popular, which products are asked for but not found, and which products could be replaced by others that might have a better reception, etc.

If our goal is to analyze our target market, we may need information about their preferences, tastes, desires, behaviours, habits, etc. In this case, using the observational technique would mean visiting places that our target audience frequents and watching them walk around the area, review products, ask questions, and choose certain products.

If our goal is to analyze the competition, we may need information about their products, processes, personnel, strategies, etc.

If our goal is to evaluate the feasibility of leasing a new location for our business, we may need information on customer traffic, the accessibility of the location, its visibility, etc. In this case, using the observational technique could mean spending as much time as possible around the location, observing customer traffic in the area throughout the day, the presence of local competition and the number of visitors they have, the environment in the area, etc.

The advantage of using the observation technique is that we can obtain accurate information that could not otherwise be obtained, for example information on spontaneous behaviours that occur only in everyday life and in natural settings, or information that people could not or would not be willing to provide for various reasons. Another advantage is that the technique is inexpensive and easy to apply. However, the disadvantages of this technique are the inability to identify the emotions, attitudes or motivations that lead a consumer to perform an act. It is therefore always advisable to use it alongside other research techniques.

Q.2.(a). Briefly explain interviewing techniques in business research.

Answer: Interviewing a person requires good communication skills, presence of mind, and a general sense of logic. Interviews can be of different types, like business interviews, research interviews, media interviews and so on.

Before the Interview: There are a few pointers which one must keep in mind before actually interviewing a candidate. Here are some of the basic things that you need to keep in mind before the interview: Make sure you inform the candidate about the venue and timing of the interview well in advance. It is also considered polite to convey the approximate duration of the interview to the candidate. Before the interview, make sure you have a clear idea of the qualities and technical skills that you are looking for. Plan the structure of the interview: determine whether you want to keep it a rigorous question-and-answer session, or want the candidate to speak his mind about general issues. If you want to keep it a question-and-answer session, make sure you list all the probable questions; even if you want to make it an impromptu affair, it is better to list the general points that you want to address during the interview. In case you already have the candidate's resume, reference letters or any other documents which have been submitted beforehand, make it a point to go through the documents. It will give you an idea about the candidate's educational qualifications, work experience and other useful information, and will help you ask better and more relevant questions. At the end of the interview, tell the candidate the period within which a decision will be communicated.

The Actual Interview: Greet the person with a smile and a professional handshake. Make sure the candidate is comfortable; do not intimidate the candidate. Make sure to mention things from the candidate's resume or other submitted material that you find impressive or problematic. Do not ask personal questions during a job interview. Listen carefully while your candidate speaks. Ask questions about the things that the candidate mentions during the interview.

After the Interview: After the interview, make sure you call up the candidate within the promised time frame. Inform them of the decision without beating around the bush. In case you have rejected the candidate, be polite while conveying the message, and also assure them that if the company needs their services in the future, you will contact them. Remember that interviewing a person is a tough job. Keep all the above points in mind and rest assured that the interview will be smooth sailing.

Q.2.(b). What are the problems encountered in interviews?

Answer: Problems Encountered

As in most surveys, difficulties were encountered during the data collection period, and several of these were similar across countries. One of the common problems encountered by the research teams was the limited time allocated, as the questionnaires were quite long (it took approximately 1-2 hours to complete one questionnaire). For instance, in Thailand, some of the medium and large farms refused to participate without a prior appointment, and some gave vague answers and figures, underestimating values, especially concerning sales and profits (showing business losses over the years), for fear that the information revealed might be used against them as grounds for tax fraud charges. Some were hesitant to share information because their integrators prohibited them from disclosing it (particularly data on capital investment, costs, and returns) to the interviewers. In such cases, interviewers had to look for replacements in order to meet the targeted number of respondents. It was difficult to find a sufficient sample for different types of management, such as independents and contracts, since in some countries one or the other dominates. In Thailand, independent layer producers dominated; in India, there were no contracts for layers. In the Philippines, hog contract growers were concentrated in one or two provinces, large independent commercial broiler farms were difficult to identify and locate because of outdated lists, and small independents were extremely small (up to only 100 birds). In Brazil, contract and commercial broiler growers were prevalent, while in India there was no up-to-date information on the population of poultry farms, making it difficult to select appropriate samples.

Q.3.(a). What are the various steps in processing of data? Answer: 5 Steps To Data Processing

Data is an integral part of all business processes. It is the invisible backbone that supports all the operations and activities within a business. Without access to relevant data, businesses would be completely paralyzed, because quality data helps formulate effective business strategies and fruitful business decisions. Here are the five steps that are included in data processing:

Editing: There is a big difference between data and useful data. While there are huge volumes of data available on the internet, useful data has to be extracted from them. Extracting relevant data is one of the core procedures of data processing. When data has been accumulated from various sources, it is edited in order to discard inappropriate data and retain relevant data.

Coding: Even after the editing process, the available data is not in any specific order. To make it more sensible and usable, it needs to be arranged into a particular system. The method of coding ensures just that and arranges data in a comprehensible format. The process is also known as netting or bucketing.

Data Entry: After the data has been properly arranged and coded, it is entered into the software that performs the eventual cross tabulation. Data entry professionals do this task efficiently.

Validation: After the editing and coding phases comes the validation process. Data validation refers to the process of thoroughly checking the collected data to ensure optimal quality levels. All the accumulated data is double-checked in order to ensure that it contains no inconsistencies and is entirely relevant.

Tabulation: This is the final step in data processing. The final product, i.e. the data, is tabulated and arranged in a systematic format so that it can be further analyzed.

All these processes together make up the complete data processing activity, which ensures that the said data is available for access.
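As a toy illustration of these steps, the Python sketch below edits out incomplete records, codes a categorical response, validates the codes, and tabulates the result; the survey responses and category codes are invented for the example.

from collections import Counter

# Raw responses collected from a hypothetical survey
raw = [
    {"id": 1, "gender": "Male", "answer": "yes"},
    {"id": 2, "gender": "female", "answer": ""},      # incomplete: no answer given
    {"id": 3, "gender": "Female", "answer": "no"},
    {"id": 4, "gender": "male", "answer": "yes"},
]

# Editing: discard records with missing answers
edited = [r for r in raw if r["answer"]]

# Coding: map text categories to numeric codes
gender_codes = {"male": 1, "female": 2}
for r in edited:
    r["gender_code"] = gender_codes[r["gender"].lower()]

# Validation: every coded value must lie in the allowed set
assert all(r["gender_code"] in (1, 2) for r in edited)

# Tabulation: frequency table of answers by gender code
table = Counter((r["gender_code"], r["answer"]) for r in edited)
print(table)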

Q.3(b). How is data editing done at the time of recording of data?

Answer: The first step in the processing of data is the editing of the data instruments. Editing is a process of checking to detect and correct errors and omissions. Data editing happens at two stages: one at the time of recording of the data, and the second at the time of analysis of the data.

Data Editing at the Time of Recording of Data: Document editing and testing of the data at the time of data recording is done with the following questions in mind:
1) Do the filters agree, or are the data inconsistent?
2) Have missing values been set to values which are the same for all research questions?
3) Have variable descriptions been specified?
4) Have labels for variable names and value labels been defined and written?
All editing and cleaning steps are documented, so that the redefinition of variables or later analytical modification requirements can be easily incorporated into the data sets.

Q.4.(a). What are the fundamentals of frequency distribution?

Answer: The fundamental frequency, often referred to simply as the fundamental and abbreviated f0 or F0, is defined as the lowest frequency of a periodic waveform. In terms of a superposition of sinusoids (e.g. a Fourier series), the fundamental frequency is the lowest-frequency sinusoid in the sum. We can show a waveform is periodic by finding some period T for which the following equation is true:

x(t) = x(t + T) = x(t + 2T) = x(t + 3T) = ...

where x(t) is the function of the waveform. This means that for multiples of some period T the value of the signal is always the same. The lowest value of T for which this is true is called the fundamental period (T0), and thus the fundamental frequency (F0) is given by the equation:

F0 = 1 / T0

where F0 is the fundamental frequency and T0 is the fundamental period. The fundamental frequency of a sound wave in a tube with a single CLOSED end can be found using the equation:

F0 = v / (4L)

L can be found using the equation L = λ/4, and the wavelength λ can be found using the equation λ = 4L.

The fundamental frequency of a sound wave in a tube with either both ends OPEN or both ends CLOSED can be found using the equation:

F0 = v / (2L)

L can be found using the equation L = λ/2. The wavelength, which is the distance in the medium between the beginning and end of a cycle, is found using the equation λ = 2L, where:

F0 = fundamental frequency
L = length of the tube
v = velocity of the sound wave
λ = wavelength

At 20 °C (68 °F) the speed of sound in air is 343 m/s (1129 ft/s). This speed is temperature dependent and increases at a rate of 0.6 m/s for each degree Celsius increase in temperature (1.1 ft/s for every increase of 1 °F). The velocity of a sound wave at different temperatures:

v = 343.2 m/s at 20 °C
v = 331.3 m/s at 0 °C
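A small Python sketch of these tube formulas; the tube length and temperature used in the example calls are arbitrary values chosen only for illustration.

def speed_of_sound(temp_celsius):
    """Approximate speed of sound in air (m/s): 331.3 m/s at 0 °C plus 0.6 m/s per °C."""
    return 331.3 + 0.6 * temp_celsius

def fundamental_frequency(tube_length_m, temp_celsius=20.0, closed_one_end=True):
    """F0 = v / (4L) for a tube closed at one end, v / (2L) for open-open (or closed-closed)."""
    v = speed_of_sound(temp_celsius)
    return v / (4 * tube_length_m) if closed_one_end else v / (2 * tube_length_m)

print(fundamental_frequency(0.5))                        # ~171.6 Hz, tube closed at one end
print(fundamental_frequency(0.5, closed_one_end=False))  # ~343.3 Hz, tube open at both ends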

Mechanical systems: Consider a beam, fixed at one end and having a mass attached to the other; this would be a single degree of freedom (SDoF) oscillator. Once set into motion, it will oscillate at its natural frequency. For a single degree of freedom oscillator, a system in which the motion can be described by a single coordinate, the natural frequency depends on two system properties: mass and stiffness. The radian frequency, ωn, can be found using the following equation:

ωn = √(k / m)

where:
k = stiffness of the beam
m = mass of the weight
ωn = radian frequency (radians per second)

From the radian frequency, the natural frequency, fn, can be found by simply dividing ωn by 2π. Without first finding the radian frequency, the natural frequency can be found directly using:

fn = (1 / 2π) √(k / m)

where:
fn = natural frequency in hertz (cycles/second)
k = stiffness of the beam (newtons/metre or N/m)
m = mass at the end (kg)

While doing the modal analysis of structures and mechanical equipment, the frequency of the first mode is called the fundamental frequency.
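The single-degree-of-freedom relations above can be expressed directly in Python; the stiffness and mass values in the example call are arbitrary illustrative numbers.

import math

def natural_frequency(stiffness_n_per_m, mass_kg):
    """Return (radian frequency omega_n in rad/s, natural frequency f_n in Hz)."""
    omega_n = math.sqrt(stiffness_n_per_m / mass_kg)   # omega_n = sqrt(k / m)
    f_n = omega_n / (2 * math.pi)                      # f_n = omega_n / (2*pi)
    return omega_n, f_n

omega_n, f_n = natural_frequency(stiffness_n_per_m=2000.0, mass_kg=5.0)
print(f"omega_n = {omega_n:.1f} rad/s, f_n = {f_n:.2f} Hz")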

Q.4.(b). What are the types and general rules for graphical representation of data? Answer: General Rules for Drawing Graphs, Diagrams and Maps 1. Selection of a Suitable Graphical Method Each characteristic of the data can only be suitably represented by an appropriate graphical method. For example,

To show data related to temperature, or the growth of population between different periods in time, line graphs are used. Similarly, bar diagrams are used for showing rainfall or the production of commodities. The population distribution, both human and livestock, or the distribution of crop-producing areas, is shown by dot maps. Population density can be shown by choropleth maps. Thus, it is necessary and important to select a suitable graphical method to represent the data.

2. Selection of a Suitable Scale: Each diagram or map is drawn to a scale which is used to measure the data. The scale must cover the entire data that is to be represented. The scale should neither be too large nor too small.
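As an illustrative sketch of matching graph type to data, here is a short example using the matplotlib plotting library; the population and rainfall figures are invented for the example.

import matplotlib.pyplot as plt

years = [1991, 2001, 2011, 2021]
population_millions = [846, 1029, 1211, 1380]    # change over time -> line graph
months = ["Jun", "Jul", "Aug", "Sep"]
rainfall_mm = [120, 310, 280, 150]               # amounts by category -> bar diagram

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(years, population_millions, marker="o")  # growth of population: line graph
ax1.set_title("Population growth (line graph)")
ax2.bar(months, rainfall_mm)                      # rainfall by month: bar diagram
ax2.set_title("Monthly rainfall (bar diagram)")
plt.tight_layout()
plt.show()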

Q.5. Strictly speaking, would case studies be considered as scientific research? Why or why not?

Answer: A case study is an intensive analysis of an individual unit (e.g., a person, group, or event) stressing developmental factors in relation to context. The case study is common in the social sciences and the life sciences. Case studies may be descriptive or explanatory. The latter type is used to explore causation in order to find underlying principles. They may be prospective, in which criteria are established and cases fitting the criteria are included as they become available, or retrospective, in which criteria are established for selecting cases from historical records for inclusion in the study.

Case selection and structure of the case study: An average, or typical, case is often not the richest in information. In clarifying lines of history and causation it is more useful to select subjects that offer an interesting, unusual or particularly revealing set of circumstances. A case selection that is based on representativeness will seldom be able to produce these kinds of insights. When selecting a subject for a case study, researchers will therefore use information-oriented sampling, as opposed to random sampling. Outlier cases (that is, those which are extreme, deviant or atypical) reveal more information than the putatively representative case. Alternatively, a case may be selected as a key case, chosen because of the inherent interest of the

case or the circumstances surrounding it. Or it may be chosen because of researchers' in-depth local knowledge; where researchers have this local knowledge they are in a position to "soak and poke", as Fenno puts it, and thereby to offer reasoned lines of explanation based on this rich knowledge of setting and circumstances. Three types of cases may thus be distinguished:
1. Key cases
2. Outlier cases
3. Local knowledge cases

Generalizing from case studies: A critical case can be defined as having strategic importance in relation to the general problem. A critical case allows the following type of generalization: "If it is valid for this case, it is valid for all (or many) cases." In its negative form, the generalization would be: "If it is not valid for this case, then it is not valid for any (or only few) cases." A classic example of a critical case is Galileo's rejection of Aristotle's law of gravity. Galileo's view continued to be subjected to doubt, however, and the Aristotelian view was not finally rejected until half a century later, with the invention of the air pump. The air pump made it possible to conduct the ultimate experiment, known by every pupil, whereby a coin or a piece of lead inside a vacuum tube falls with the same speed as a feather. After this experiment, Aristotle's view could be maintained no longer. What is especially worth noting, however, is that the matter was settled by an individual case due to the clever choice of the extremes of metal and feather.

The Case Study Paradox: Case studies have existed as long as recorded history. Much of what is known about the empirical world has been produced by case study research, and many of the classics in a long range of disciplines are case studies, including in psychology, sociology, anthropology, history, education, economics, political science, management, geography, biology, and medical science. Half of all articles in the top political science journals use case studies, for instance. But there is a paradox here, as argued by Oxford professor Bent Flyvbjerg. At the same time that case studies are extensively used and have produced canonical works, one may observe that the case study is generally held in low regard, or is simply ignored, within the academy. Statistics on courses offered in universities confirm this. It has been argued that the case

study paradox exists because the case study is widely misunderstood as a research method. Flyvbjerg argues that by clearing up the misunderstandings about the case study, the case study paradox may be resolved.

Q.6. (a). Analyse the case study and descriptive approach to research.

Answer: Descriptive research does not fit neatly into the definition of either quantitative or qualitative research methodologies, but instead it can utilize elements of both, often within the same study. The term descriptive research refers to the type of research question, design, and data analysis that will be applied to a given topic. Descriptive statistics tell what is, while inferential statistics try to determine cause and effect. Descriptive research can be either quantitative or qualitative. It can involve collections of quantitative information that can be tabulated along a continuum in numerical form, such as scores on a test or the number of times a person chooses to use a certain feature of a multimedia program, or it can describe categories of information such as gender or patterns of interaction when using technology in a group situation. On the other hand, descriptive research might simply report the percentage summary on a single variable. Examples of this are the tally of reference citations in selected instructional design and technology journals by Anglin and Towers (1992); Barry's (1994) investigation of the controversy surrounding advertising and Channel One; Lu, Morlan, Lerchlorlarn, Lee, and Dike's (1993) investigation of the international utilization of media in education; and Pettersson, Metallinos, Muffoletto, Shaw, and Takakuwa's (1993) analysis of the use of verbo-visual information in teaching geography in various countries.

The Nature of Descriptive Research: The descriptive function of research is heavily dependent on instrumentation for measurement and observation (Borg & Gall, 1989). Researchers may work for many years to perfect such instrumentation so that the resulting measurement will be accurate, reliable, and generalizable. A typical NAEP publication is The Reading Report Card, which provides descriptive information about the reading achievement of junior high and high school students during the past two decades.

The methods of collecting data for descriptive research can be employed singly or in various combinations, depending on the research questions at hand. Descriptive research often calls upon quasi-experimental research design (Campbell & Stanley, 1963). Some of the common data collection methods applied to questions within the realm of descriptive research include surveys, interviews, observations, and portfolios.

Q.6.(b). Distinguish between research methods & research Methodology. Answer: Research Methods vs Research Design

For those pursuing research in any field of study, both research methods and research design hold great significance. There are many research methods that provide a loose framework or guidelines for conducting a research project. One has to choose a method that suits the requirements of the project and that the researcher is comfortable with. On the other hand, research design is the specific framework within which a project is pursued and completed. Many remain confused about the differences between research methods and research design. Despite there being scores of research methods, not every method can perfectly match a particular research project. There are qualitative as well as quantitative research methods. These are generalized outlines that provide a framework, and the choice is narrowed down depending upon the area of research that you have chosen. Once you have selected a particular research method, you need to apply it in the best possible manner to your project. Research design refers to the blueprint that you prepare using the research method chosen, and it delineates the steps that you need to take. Research design thus tells what is to be done at what time; it tells how the goals of a research project can be accomplished. Key features of any research design are the methodology, the collection and assignment of samples, the collection and analysis of data, and the procedures and instruments to be used. If one is not careful enough while choosing a research design and a research method, the results obtained from a research project may not be satisfactory or may be anomalous. In such a situation, because of a flaw in the research design, you may have to look for alternative research methods, which would necessitate changes in your research design as well.
