
Management Programme

SOLVED ASSIGNMENT
2010

QUANTITATIVE TECHNIQUES IN MANAGEMENT

Amity University

Spl. Note:
Unauthorized copying, selling and redistribution of the content are strictly prohibited.
This material is provided for reference only

QUANTITATIVE TECHNIQUES IN MANAGEMENT


Assignment A

Question 1: How has quantitative analysis changed the current scenario in the management world today?
Answer:
Quantitative analysis requires the representation of the problem using a mathematical model. Mathematical modeling is a critical part of the quantitative approach to decision making. Quantitative factors can be measured in terms of money or quantitative units. Examples are incremental revenue, added cost, and initial outlay.

Qualitative factors in decision making are the factors relevant to a decision that are difficult to measure in terms of money. Qualitative factors may include: (1) effect on employee morale, schedule and other internal elements; (2) relationship with and commitments to suppliers; (3) effect on present and future customers; and (4) long-term future effect on profitability. In some decision-making situations, qualitative aspects are more important than the immediate financial benefit from a decision.

Different Statistical Techniques


Measures of Central Tendency: For a proper understanding of quantitative data, they should be classified and converted into a frequency distribution. This type of condensation reduces the bulk of the data and gives a clear picture of their structure. If you want to know any specific characteristic of the given data, or if the frequency distribution of one set of data is to be compared with another, then the frequency distribution itself must be summarized and condensed in a manner that helps us make useful inferences about the data and provides a yardstick for comparing different sets of data.
Measures of Dispersion: Measures of dispersion tell you the extent to which the values differ from the mean, median or mode. The commonly used measures of dispersion are the range, mean deviation and standard deviation.
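As an illustration, here is a minimal Python sketch (standard library only; the data are hypothetical) that computes these three measures of dispersion:

```python
import statistics

data = [12, 15, 11, 18, 14, 20, 13]  # hypothetical observations

mean = statistics.mean(data)
value_range = max(data) - min(data)                            # range
mean_deviation = sum(abs(x - mean) for x in data) / len(data)  # mean (absolute) deviation
std_deviation = statistics.pstdev(data)                        # population standard deviation

print(value_range, round(mean_deviation, 2), round(std_deviation, 2))
```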
Correlation: The correlation coefficient measures the degree to which change in one variable (the dependent variable) is associated with change in the other (the independent one). For example, as a marketing manager, you would like to know if there is any relation between the amount of money you spend on advertising and the sales you achieve. Here, sales are the dependent variable and the advertising budget is the independent variable. The correlation coefficient, in this case, would tell you the extent of the relationship between these two variables: whether the relationship is directly proportional (i.e. an increase or decrease in advertising is associated with an increase or decrease in sales), inverse (i.e. increased advertising is associated with decreased sales and vice versa), or whether there is no relationship between the two variables.
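For instance, a short Python sketch of this advertising-versus-sales check (the monthly figures are invented for illustration; statistics.correlation requires Python 3.10+):

```python
import statistics

# Hypothetical monthly figures: advertising spend (Rs lakh) and sales (Rs lakh)
advertising = [10, 12, 15, 18, 20, 25]
sales = [110, 118, 125, 140, 148, 165]

r = statistics.correlation(advertising, sales)  # Pearson's correlation coefficient
print(round(r, 3))  # a value near +1 indicates a strong direct relationship
```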
Regression Analysis: Regression analysis includes any technique for modeling and analyzing several variables when the focus is on the relationship between a dependent variable and one or more independent variables. Using this technique you can predict the dependent variable on the basis of the independent variables. In 1970, NCAER (National Council of Applied Economic Research) predicted the annual stock of scooters using a regression model in which real personal disposable income and the relative weighted price index of scooters were used as independent variables.
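A minimal sketch of the idea in Python (the income and scooter figures below are invented for illustration, not NCAER's data; statistics.linear_regression requires Python 3.10+):

```python
import statistics

# Hypothetical data: disposable income index (X) and scooter stock in thousands (Y)
income_index = [100, 110, 120, 135, 150]
scooter_stock = [200, 224, 246, 276, 310]

# Least-squares fit of Y = intercept + slope * X
slope, intercept = statistics.linear_regression(income_index, scooter_stock)
print(f"Y = {intercept:.1f} + {slope:.2f}X")

# Predict the dependent variable for a new value of the independent variable
print(round(intercept + slope * 160, 1))
```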
Time Series Analysis: With time series analysis, you can isolate and measure the separate effects of forces such as trend, seasonal variation, cyclical movement and irregular fluctuation on a variable. Examples of such changes can be seen if you start measuring the increase in the cost of living, the growth of population over a period of time, the growth of agricultural food production in India over the last fifteen years, the seasonal requirement of items, the impact of floods, strikes, wars and so on.
Index Numbers: An index number is an economic data figure reflecting price or quantity compared with a standard or base value. The base usually equals 100, and the index number is usually expressed as 100 times the ratio to the base value. For example, if a commodity costs twice as much in 1970 as it did in 1960, its index number would be 200 relative to 1960. Index numbers are used especially to compare business activity, the cost of living, and employment. They enable economists to reduce unwieldy business data into easily understood terms.
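The computation itself is a one-line ratio; a small Python sketch of the 1960/1970 example from the text:

```python
base_price = 50      # hypothetical 1960 price (base year, index = 100)
current_price = 100  # hypothetical 1970 price (twice the base price)

index_number = (current_price / base_price) * 100
print(index_number)  # 200.0, i.e. the index is 200 relative to 1960
```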
Sampling and Statistical Inference: In many cases, due to shortage of time, cost, or non-availability of data, only a limited part or section of the universe (or population) is examined to (a) get information about the universe as clearly and precisely as possible, and (b) determine the reliability of the estimates. This small part or section selected from the universe is called the sample, and the process of selecting such a section (or part) is called sampling.
Example: Site selection process (quantitative and qualitative
factors)
While quantitative factors have been and will continue to be very
important in the site selection process, qualitative factors are also critical
in order to ensure that the company makes the best decision. What are
the most important quantitative and qualitative factors evaluated by site
selection advisors and companies when making a decision regarding the
location of a new or expanded operation? The list will vary depending on
type of facility (i.e. manufacturing, logistics, research & technology,
office), but most factors apply to all forms of projects. Below is a
summary of the most important quantitative and qualitative factors
considered by companies.
Quantitative Factors
1. Property Tax Rates
2. Corporate Income Tax Rates
3. Sales Tax Rates
4. Real Estate Costs
5. Utility Rates
6. Average Wage/Salary Levels
7. Construction Costs
8. Workers Compensation Rates
9. Unemployment Compensation Rates
10. Personal Income Tax Rates
11. Industry Sector Labor Pool Size
12. Infrastructure Development Costs
13. Education Achievement Levels
14. Crime Statistics
15. Frequency of Natural Disasters
16. Cost of Living Index
17. Number of Commercial Flights to Key Markets
18. Proximity to Major Key Geographic Markets
19. Unionization Rate/Right to Work versus Non-Right to Work State
20. Population of Geographic Area

Qualitative Factors
1. Level of Collaboration with Government, Educational and Utility Officials
2. Sports, Recreational and Cultural Amenities
3. Confidence in Ability of All Parties to Meet Company's Deadlines
4. Political Stability of Location
5. Climate
6. Availability of Quality Healthcare
7. Chemistry of Project Team with Local and State Officials
8. Perception of Quality of Professional Services Firms to Meet the Company's Needs
9. Predictability of Long-term Operational Costs
10. Ability to Complete Real Estate Due Diligence Process Quickly
Another important part of the site selection evaluation process relates to
the weighting of the key quantitative and qualitative factors. Depending
on the type of project, factors will be weighted differently. As an example,
for a new manufacturing facility project, issues such as utility rates, real
estate costs, property tax rates, collaboration with governmental entities,
and average hourly wage rates may be weighted more heavily. By contrast, for a new office facility, factors such as real estate costs, number of commercial flights, crime statistics, climate and industry sector labor pool size may be more important.
Every project is unique and must be evaluated based upon its own
individual set of circumstances.

Question 2: What are sampling techniques? Briefly explain the cluster sampling technique.
Answer:

A sample is a group of units selected from a larger group (the population). By studying the sample, one hopes to draw valid conclusions about the larger group.

A sample is generally selected for study because the population is too large
to study in its entirety. The sample should be representative of the general
population. This is often best achieved by random sampling. Also, before
collecting the sample, it is important that one carefully and completely
defines the population, including a description of the members to be
included.
A common problem in business statistical decision-making arises when we
need information about a collection called a population but find that the cost
of obtaining the information is prohibitive. For instance, suppose we need to
know the average shelf life of current inventory. If the inventory is large,
the cost of checking records for each item might be high enough to cancel
the benefit of having the information. On the other hand, a hunch about the average shelf life might not be good enough for decision-making purposes.
This means we must arrive at a compromise that involves selecting a small
number of items and calculating an average shelf life as an estimate of the
average shelf life of all items in inventory. This is a compromise, since the
measurements for a sample from the inventory will produce only an
estimate of the value we want, but at substantial savings. What we would
like to know is how "good" the estimate is and how much more will it cost
to make it "better". Information of this type is intimately related to sampling
techniques.
Cluster sampling can be used whenever the population is homogeneous but can be partitioned. In many applications the partitioning is a result of physical distance. For instance, in the insurance industry, there are small "clusters" of employees in field offices scattered about the country. In such a case, a simple random sampling of employee work habits might require travel to many of the "clusters" or field offices in order to get the data. Totally sampling each one of a small number of clusters chosen at random can eliminate much of the cost associated with the data requirements of management.
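As a rough illustration of the procedure, a Python sketch (the office names and employee lists are hypothetical) that randomly chooses a few clusters and then samples every unit inside them:

```python
import random

# Hypothetical population partitioned into field-office "clusters"
offices = {
    "Mumbai": [f"M{i}" for i in range(20)],
    "Delhi": [f"D{i}" for i in range(20)],
    "Chennai": [f"C{i}" for i in range(20)],
    "Kolkata": [f"K{i}" for i in range(20)],
}

# Step 1: choose a small number of clusters at random
chosen_offices = random.sample(list(offices), k=2)

# Step 2: survey every employee within each chosen cluster
sample = [emp for office in chosen_offices for emp in offices[office]]
print(chosen_offices, len(sample))
```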
Question 3: What is the significance of Regression Analysis? How does it
help a manager in the decision making process?
Answer:
Regression analysis is a powerful technique for studying the relationship between a dependent variable (i.e., output, performance measure) and independent variables (i.e., inputs, factors, decision variables). Summarizing relationships among the variables by the most appropriate equation (i.e., modeling) allows us to predict or identify the most influential factors and study their impact on the output for any changes in their current values.
Unlike deterministic decision-making processes, such as linear optimization by solving systems of equations or parametric systems of equations, in decision making under uncertainty the variables are often more numerous and more difficult to measure and control. However, the steps are the same. They are:
1. Simplification
2. Building a decision model
3. Testing the model
4. Using the model to find the solution

A good decision model:
- It is a simplified representation of the actual situation.
- It need not be complete or exact in all respects.
- It concentrates on the most essential relationships and ignores the less essential ones.
- It is more easily understood than the empirical (i.e., observed) situation, and hence permits the problem to be solved more readily with minimum time and effort.
- It can be used again and again for similar problems or can be modified.
Fortunately the probabilistic and statistical methods for analysis and decision making
under uncertainty are more numerous and powerful today than ever before. The
computer makes possible many practical applications. A few examples of business
applications are the following:

- An auditor can use random sampling techniques to audit the accounts receivable for clients.
- A plant manager can use statistical quality control techniques to assure the quality of his production with a minimum of testing or inspection.
- A financial analyst may use regression and correlation to help understand the relationship of a financial ratio to a set of other variables in business.
- A market researcher may use tests of significance to accept or reject hypotheses about a group of buyers to which the firm wishes to sell a particular product.
- A sales manager may use statistical techniques to forecast sales for the coming year.

Question 4: Explain the following terms in detail (give examples where necessary):
(a.) Arithmetic mean
(b.) Harmonic mean
(c.) Geometric mean
(d.) Median
(e.) Mode
Answer:

(a.) Arithmetic Mean:
The arithmetic mean (or the average, simple mean) is computed by summing all numbers in an array of numbers (xi) and then dividing by the number of observations (n) in the array:
Mean = (Σ xi) / n, where the sum is over all i's.

The mean uses all of the observations, and each observation affects the mean. Even though the mean is sensitive to extreme values (extremely large or small data can pull the mean toward the extreme data), it is still the most widely used measure of location. This is because the mean has valuable mathematical properties that make it convenient for use with inferential statistical analysis. For example, the sum of the deviations of the numbers in a set of data from the mean is zero, and the sum of the squared deviations of the numbers in a set of data from the mean is the minimum value.
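Both properties are easy to verify numerically; a small Python sketch with made-up data:

```python
data = [4, 8, 15, 16, 23, 42]  # hypothetical observations
mean = sum(data) / len(data)

# Property 1: deviations from the mean sum to zero (up to floating-point rounding)
print(sum(x - mean for x in data))

# Property 2: the sum of squared deviations is smallest about the mean
def sse(center):
    return sum((x - center) ** 2 for x in data)

print(sse(mean) < sse(mean - 1), sse(mean) < sse(mean + 1))  # True True
```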

(b.) Harmonic Mean:
The harmonic mean (H) is another specialized average, which is useful in averaging variables expressed as a rate per unit of time, such as miles per hour or number of units produced per day. The harmonic mean of n non-zero numerical values x(i) is: H = n / Σ(1/x(i)).
An Application: Suppose 4 machines in a machine shop are used to produce the same part. However, the four machines take 2.5, 2.0, 1.5, and 6.0 minutes, respectively, to make one part. What is the average rate of speed?
The harmonic mean is: H = 4 / [(1/2.5) + (1/2.0) + (1/1.5) + (1/6.0)] = 2.31 minutes.
If all machines work for one hour, how many parts will be produced? Since four machines running for one hour represent 240 minutes of operating time, 240 / 2.31 ≈ 104 parts will be produced.

(c.) The Geometric Mean:
The geometric mean (G) of n non-negative numerical values is the nth root of the product of the n values.
If some values are very large in magnitude and others are small, then the geometric mean is a better representative of the data than the simple average. In a "geometric series", the most meaningful average is the geometric mean (G). The arithmetic mean is very biased toward the larger numbers in the series.
An Application: Suppose sales of a certain item increase to 110% in the first year and to 150% of that in the second year. For simplicity, assume you sold 100 items initially. Then the number sold in the first year is 110 and the number sold in the second is 150% × 110 = 165. The arithmetic average of 110% and 150% is 130%, so we would incorrectly estimate that the number sold in the first year is 130 and the number in the second year is 169. The geometric mean of 110% and 150% is G = (1.65)^(1/2), so we would correctly estimate that we would sell 100 × G² = 165 items in the second year.
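A quick numeric check of this example in Python:

```python
import math

growth_factors = [1.10, 1.50]  # year-on-year growth from the example
g = math.prod(growth_factors) ** (1 / len(growth_factors))  # geometric mean

print(round(100 * g ** 2))     # 165 items -- the correct figure
print(round(100 * 1.30 ** 2))  # 169 items -- the arithmetic-mean overestimate
```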
(d.) Median:
The median is the middle value in an ordered array of observations. If there is an even number of observations in the array, the median is the average of the two middle numbers. If there is an odd number of data in the array, the median is the middle number.
The median is often used to summarize the distribution of an outcome. If the distribution is skewed, the median and the interquartile range (IQR) may be better than other measures to indicate where the observed data are concentrated.
Generally, the median provides a better measure of location than the mean when there are some extremely large or small observations, i.e., when the data are skewed to the right or to the left. For this reason, median income is used as the measure of location for U.S. household income. Note that if the median is less than the mean, the data set is skewed to the right. If the median is greater than the mean, the data set is skewed to the left. For a normal population, the sample median is distributed normally with mean m equal to the population mean, and with a standard error equal to √(π/2) times the standard error of the mean.
The mean has two distinct advantages over the median. It is more stable, and one can compute the mean based on two samples by combining the two means.
(e.) Mode:
The mode is the most frequently occurring value in a set of observations.
Why use the mode? The classic example is the shirt/shoe manufacturer who
wants to decide what sizes to introduce. Data may have two modes. In this
case, we say the data are bimodal, and sets of observations with more than
two modes are referred to as multimodal. Note that the mode is not a helpful
measure of location, because there can be more than one mode or even no
mode.
When the mean and the median are known, it is possible to estimate the mode of a unimodal distribution using the other two averages as follows:
Mode ≈ 3(Median) − 2(Mean)
This estimate is applicable to both grouped and ungrouped data sets.
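A quick sanity check of this empirical formula in Python, on a small made-up unimodal data set:

```python
import statistics

data = [1, 2, 2, 3, 3, 3, 4, 4, 5]  # hypothetical unimodal data

estimate = 3 * statistics.median(data) - 2 * statistics.mean(data)
print(statistics.mode(data), estimate)  # actual mode 3, estimated mode 3.0
```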

Question 5: Explain the classical approach to the probability theory. Also explain the limitation of the classical definition of probability.
Answer:

The classical approach to probability is to count the number of favorable outcomes and the number of total outcomes (outcomes are assumed to be mutually exclusive and equiprobable), and express the probability as a ratio of these two numbers. Here, "favorable" refers not to any subjective value given to the outcomes, but is rather the classical terminology used to indicate that an outcome belongs to a given event of interest. What is meant by this will be made clear by an example, and formalized with the introduction of axiomatic probability theory.

Classical definition of probability
If the number of outcomes belonging to an event E is NE, and the total number of outcomes is N, then the probability of event E is defined as:
P(E) = NE / N
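A tiny Python sketch of the classical definition, using a single roll of a fair die with E = "an even number shows up":

```python
from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]                    # N = 6 equiprobable outcomes
favorable = [x for x in outcomes if x % 2 == 0]  # N_E = 3 outcomes in event E

p = Fraction(len(favorable), len(outcomes))
print(p)  # 1/2
```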

Limitation of the classical definition of probability
There are basically four types of probabilities, each with its limitations. None of these approaches to probability is wrong, per se, but some are more useful or more general than others.
In everyday speech, we express our beliefs about likelihoods of events using the
same terminology as in probability theory. Often, this has nothing to do with any
formal definition of probability, rather it is an intuitive idea guided by our experience,
and in some cases statistics.
Probability can also be expressed in vague terms. For example, someone might say it
will probably rain tomorrow. This is subjective, but implies that the speaker believes
the probability is greater than 50%.
Subjective probabilities have been extensively studied, especially with regard to gambling and securities markets. While this type of probability is important, it is not the main subject here. A good reference is "Degrees of Belief" by Steven Vick (2002).
There are two standard approaches to conceptually interpreting probabilities: the long-run (or relative frequency) approach and the subjective belief (or confidence) approach. In the Frequency Theory of Probability, probability is the limit of the relative frequency with which an event occurs in repeated trials (note that the trials must be independent).
Frequentists talk about probabilities only when dealing with experiments that are
random and well-defined. The probability of a random event denotes the relative
frequency of occurrence of an experiment's outcome, when repeating the
experiment. Frequentists consider probability to be the relative frequency "in the
long run" of outcomes.
Physical probabilities, which are also called objective or frequency probabilities, are
associated with random physical systems such as roulette wheels, rolling dice and
radioactive atoms. In such systems, a given type of event (such as the dice yielding
a six) tends to occur at a persistent rate, or 'relative frequency', in a long run of
trials. Physical probabilities either explain, or are invoked to explain, these stable
frequencies. Thus talk about physical probability makes sense only when dealing with
well defined random experiments. The two main kinds of theory of physical
probability are frequentist accounts (such as Venn) and propensity accounts.
Relative frequencies are always between 0% (the event essentially never happens)
and 100% (the event essentially always happens), so in this theory as well,
probabilities are between 0% and 100%. According to the Frequency Theory of
Probability, what it means to say that "the probability that A occurs is p%" is that if

you repeat the experiment over and over again, independently and under essentially
identical conditions, the percentage of the time that A occurs will converge to p. For
example, under the Frequency Theory, to say that the chance that a coin lands
heads is 50% means that if you toss the coin over and over again, independently,
the ratio of the number of times the coin lands heads to the total number of tosses
approaches a limiting value of 50% as the number of tosses grows. Because the ratio
of heads to tosses is always between 0% and 100%, when the probability exists it
must be between 0% and 100%.
In the Subjective Theory of Probability, probability measures the speaker's "degree
of belief" that the event will occur, on a scale of 0% (complete disbelief that the
event will happen) to 100% (certainty that the event will happen). According to the
Subjective Theory, what it means for me to say that "the probability that A occurs is
2/3" is that I believe that A will happen twice as strongly as I believe that A will not
happen. The Subjective Theory is particularly useful in assigning meaning to the
probability of events that in principle can occur only once. For example, how might
one assign meaning to a statement like "there is a 25% chance of an earthquake on
the San Andreas fault with magnitude 8 or larger before 2050?" (See Freedman and
Stark, 2003, for more discussion of theories of probability and their application to
earthquakes.) It is very hard to use either the Theory of Equally Likely Outcomes or
the Frequency Theory to make sense of the assertion.

QUANTITATIVE TECHNIQUES IN MANAGEMENT

Assignment B
Question 1: Write a note on decision making in management. How will one take decisions under risk and uncertainty?
Answer:

Decision-making is a crucial part of good business. The question then is: how is a good decision made?
One part of the answer is good information, and experience in interpreting information. Consultation, i.e. seeking the views and expertise of other people, also helps, as does the ability to admit one was wrong and change one's mind. There are also aids to decision-making: various techniques which help to make information clearer and better analysed, and to add numerical and objective precision to decision-making (where appropriate) to reduce the amount of subjectivity.
Managers can be trained to make better decisions. They also need a supportive environment where they won't be unfairly criticised for making wrong decisions (as we all do sometimes) and will receive proper support from their colleagues and superiors. A climate of criticism and fear stifles risk-taking and creativity; managers will respond by playing it safe to minimise the risk of criticism, which diminishes the business's effectiveness in responding to market changes. It may also mean managers spend too much time trying to pass the blame around rather than getting on with running the business.
Decision-making increasingly happens at all levels of a business. The Board
of Directors may make the grand strategic decisions about investment and
direction of future growth, and managers may make the more tactical
decisions about how their own department may contribute most effectively to
the overall business objectives. But quite ordinary employees are
increasingly expected to make decisions about the conduct of their own
tasks, responses to customers and improvements to business practice. This
needs careful recruitment and selection, good training, and enlightened
management.
Types of Business Decisions
1. Programmed Decisions. These are standard decisions which always follow the same routine. As such, they can be written down into a series of fixed steps which anyone can follow. They could even be written as a computer program.

2. Non-Programmed Decisions. These are non-standard and non-routine. Each decision is not quite the same as any previous decision.
3. Strategic Decisions. These affect the long-term direction of the business
eg whether to take over Company A or Company B
4. Tactical Decisions. These are medium-term decisions about how to
implement strategy eg what kind of marketing to have, or how many extra
staff to recruit
5. Operational Decisions. These are short-term decisions (also called
administrative decisions) about how to implement the tactics eg which firm
to use to make deliveries.
Figure 1: Levels of Decision-Making

Figure 2: The Decision-Making Process

The model in Figure 2 above is a normative model, because it illustrates how a good decision ought to be made. Business Studies also uses positive models, which simply aim to illustrate how decisions are, in fact, made in businesses without commenting on whether they are good or bad.
Linear programming models help to explore maximising or minimising an objective subject to constraints, e.g. one can program a computer with information that establishes parameters for minimising costs subject to certain situations and information about those situations.
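As a sketch of what such a model looks like in practice, here is a made-up cost-minimisation problem solved with SciPy's linprog (the numbers are purely illustrative):

```python
from scipy.optimize import linprog

# Minimise cost 3x + 2y subject to x + y >= 10 (demand) and x <= 6 (capacity).
# linprog expects "<=" constraints, so x + y >= 10 becomes -x - y <= -10.
result = linprog(
    c=[3, 2],                       # cost coefficients to minimise
    A_ub=[[-1, -1], [1, 0]],        # constraint coefficients
    b_ub=[-10, 6],                  # constraint right-hand sides
    bounds=[(0, None), (0, None)],  # x, y >= 0
)
print(result.x, result.fun)  # optimal production plan and its minimum cost
```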
Spreadsheets are widely used for "what if" simulations. A very large spreadsheet can be used to hold all the known information about, say, pricing and the effects of pricing on profits. Different pricing assumptions can be fed into the spreadsheet, modelling different pricing strategies. This is a lot quicker and an awful lot cheaper than actually changing prices to see what happens. On the other hand, a spreadsheet is only as good as the information put into it, and no spreadsheet can fully reflect the real world. But it is very useful management information to know what might happen to profits if, say, a skimming strategy or a penetration strategy were used for pricing.
The computer does not take decisions; managers do. But it helps managers
to have quick and reliable quantitative information about the business as it is
and the business as it might be in different sets of circumstances. There is,
however, a lot of research into expert systems which aim to replicate the
way real people (doctors, lawyers, managers, and the like) take decisions.
The aim is that computers can, one day, take decisions, or at least
programmed decisions (see above). For example, an expedition could carry
an expert medical system on a lap-top to deal with any medical emergencies
even though the nearest doctor is thousands of miles away. Already it is

possible, in the US, to put a credit card into a hole-in-the-wall machine and get basic legal advice about standard legal problems.
Constraints on Decision-Making
Internal Constraints
These are constraints that come from within the business itself.
- Availability of finance. Certain decisions will be rejected because they
cost too much
- Existing Business Policy. It is not always practical to re-write business
policy to accommodate one decision
- People's abilities and feelings. A decision cannot be taken if it assumes higher skills than employees actually have, or if the decision is so unpopular that no-one will work properly on it.
External Constraints
These come from the business environment outside the business.
- National & EU legislation
- Competitors' behaviour, and their likely response to decisions your business makes
- Lack of technology
- Economic climate
Quality of Decision-Making
Some managers and businesses make better decisions than others. Good decision-making comes from:
1. Training of managers in decision-making skills. See Developing Managers.
2. Good information in the first place.
3. Management skills in analysing information and handling its shortcomings.
4. Experience and natural ability in decision-making.

5. Risk and attitudes to risk.


6. Human factors. People are people. Emotional responses come
before rational responses, and it is very difficult to get people to make
rational decisions about things they feel very strongly about. Rivalries
and vested interests also come into it. People simply take different
views on the same facts, and people also simply make mistakes.
Question 2: The Mumbai Cricket Club, a professional club for cricketers, has the player who led the league in batting average for many years. Over the past ten years, Amod Kambali has achieved a mean batting average of 54.50 runs with a standard deviation of 5.5 runs. This year Amod played 25 matches and achieved an average of only 48.80 runs. Amod is negotiating his contract with the club for the next year, and the salary he will be able to obtain is highly dependent upon his ability to convince the team's owner that his batting average this year was not significantly worse than in previous years. The selection committee of the club is willing to use a 0.01 significance level.
You are required to find out whether Amod's salary will be cut next year.
Answer:

Null Hypothesis, H0: Amod's batting average this year (48.80) is not significantly different from his all-time batting average of 54.50.
Alternative Hypothesis, Ha: Amod's batting average this year (48.80) is significantly lower than his all-time batting average of 54.50.
α = 0.01

t = (48.80 - 54.50) / (5.5 / √25) = -5.70 / 1.10 = -5.1818

The critical value of t at α = 0.01 (one-tailed) with df = 24 is -2.492.


Conclusion: Since the computed t falls below the critical value, reject H0 and accept Ha (Amod's batting average this year is significantly lower than his all-time batting average). Amod's salary will most likely be cut next year.
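The test statistic above can be reproduced with a few lines of Python:

```python
import math

mu, x_bar, s, n = 54.50, 48.80, 5.5, 25
t = (x_bar - mu) / (s / math.sqrt(n))
print(round(t, 4))  # -5.1818

critical_t = -2.492  # one-tailed, alpha = 0.01, df = 24
print("Reject H0" if t < critical_t else "Fail to reject H0")
```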

Question 3: The salaries paid to the managers of a company had a mean of Rs. 20,000 with a standard deviation of Rs. 3,000. What will be the mean and standard deviation if all the salaries are increased by
1) 10%
2) 10% of the existing mean
3) Which policy would you recommend if the management does not want to have increased disparities of wages?
Answer:

1) 10%
Both the mean and the standard deviation will simply increase by 10%, to Rs. 22,000 and Rs. 3,300 respectively, because multiplying every salary by a constant scales both the mean and the standard deviation.
2) 10% of the existing mean
Only the mean will increase, to Rs. 22,000; the standard deviation will remain Rs. 3,000, because adding the same flat amount (10% of Rs. 20,000 = Rs. 2,000) to every salary shifts the whole distribution without changing its spread.
3) Which policy would you recommend if the management does not want to have increased disparities of wages?
Increasing the salaries by 10% of the existing mean does not increase the disparity of wages and is therefore recommended.
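The contrast between the two policies is easy to verify numerically; a Python sketch with a hypothetical salary list whose mean is Rs. 20,000:

```python
import statistics

salaries = [17000, 19000, 20000, 21000, 23000]  # hypothetical, mean = 20,000

def describe(values):
    return round(statistics.mean(values)), round(statistics.pstdev(values))

percent_raise = [s * 1.10 for s in salaries]  # policy 1: +10% of each salary
flat_raise = [s + 2000 for s in salaries]     # policy 2: +10% of the mean

print(describe(salaries))       # original mean and spread
print(describe(percent_raise))  # both mean and spread grow by 10%
print(describe(flat_raise))     # mean grows; spread is unchanged
```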

Case study
Please read the case study given below and answer questions given at the end.
Kushal Arora, a second year MBA student, is doing a study of companies going public for the first
time. He is curious to see whether or not there is a significant relationship between the size of
the offering (in crores of rupees) and the price per share after the issue. The data are given
below:
Size (in crores of rupees)    108    39    68.40    51    10.40    4.40
Price (in rupees)             12     13    19       12    6.50     4

Question
You are required to calculate the coefficient of correlation for the above data set and comment on what conclusion Kushal should draw from the sample.
Answer:
N        X (Price)    Y (Size)    XY        X²        Y²
1        12           108         1296      144       11664
2        13           39          507       169       1521
3        19           68.4        1299.6    361       4678.56
4        12           51          612       144       2601
5        6.5          10.4        67.6      42.25     108.16
6        4            4.4         17.6      16        19.36
Totals   66.5         281.2       3799.8    876.25    20592.08

r = [6(3799.8) - (66.5)(281.2)] / √{[6(876.25) - (66.5)²][6(20592.08) - (281.2)²]} = 0.67

Conclusion: There is a moderately strong positive correlation (r = 0.67) between the size of the offering and the price per share, so Kushal should conclude that larger offerings tend to be associated with higher prices per share after the issue.
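The hand computation can be checked in Python (statistics.correlation requires Python 3.10+):

```python
import statistics

price = [12, 13, 19, 12, 6.5, 4]       # X: price per share (Rs)
size = [108, 39, 68.4, 51, 10.4, 4.4]  # Y: size of offering (Rs crore)

r = statistics.correlation(price, size)
print(round(r, 2))  # 0.67
```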

Assignment C
(Objective Questions)

Answer all questions.
Tick mark (✓) the most appropriate answer.
1. Which of the following is not correct about construction of bar charts?
a. All bars should rise from the same base line

b. Width of the bar should be proportional to the data represented


c. The bars should be arranged from the left to right
d. Length of the bars should be proportional to the data represented
2. Which of the following is not true about mean absolute deviation :-

a. Mean deviation is obtained by calculating the absolute deviations of each observation from the mean
b. Mean deviation is a more comprehensive measure compared to range
c. It is conducive to further algebraic treatment
d. It cannot be computed for distributions with open end classes.
3. The value index number measures the--
a. Changes in prices of a basket of commodities from one period to another
b. Changes in quantities consumed of a basket of commodities over a period of time

c. Change in the total monetary value of a basket of commodities over a period of time
d. Change in the retail prices of various commodities
4. A market researcher wants to find out the buying behavior of the typical household during the
weekends. He divides the city into various localities such that each locality represents the city
population in terms of age group, gender and social status. Then he randomly selects five
localities and surveys each household. Which of the following sampling techniques best
describes the method used by the researcher:-

a. Cluster sampling
b. Systematic sampling
c. Stratified sampling
d. Convenience sampling.
5. If every item in the data set is increased by the same quantity then the standard deviation of
the data set--

(a) Remains the same


(b) Increases by the same quantity by which every data item is increased
(c) Decreases by the same quantity by which every data item is increased
(d) Increases by the square root of the same quantity by which every data item is
increased
6. Which of the following is true with regard to the linear equation Y − a − bX = 0, where X is the independent variable and Y is the dependent variable:-
(a) The slope of the straight line is a
(b) The Y-intercept of the straight line is 0
(c) The Y-intercept of the straight line is b

(d) The slope and the Y-intercept remain constant for all combinations of X
and Y values which satisfy the equation
7. Which of the following quantitative methods is not used by managers to take decisions:-

(a) Linear programming

b) Time series
c) Regression analysis
d) Hypothesis testing
8. In the graphical method of solving linear programming problems, the feasible region is the set of all points--
a) Which do not satisfy any of the constraints
b) Which satisfy exactly one of the constraints

c) Which satisfy all the constraints?


d) At which the objective function has the same value
9. Which of the following is false in regard to a histogram :-
a) The class intervals are represented by the bases of the rectangles
b) The frequencies are represented by the heights of the rectangles
c) If the class intervals are of equal width then the bases of the rectangles will be equal in length

d) The tallest rectangle in a histogram represents the class interval with the
lowest frequency
10. Which of the following measures is not affected by the presence of extreme values in a dataset :-
a) Range
b) Arithmetic mean
c) Standard deviation

d) Median
11.
(1/2)x + (1/3)y - (1/3)z = -1
(1/3)x + (1/2)y - (1/6)z = 4
(1/6)x - (5/6)y + (1/2)z = 3
The value of x in the above simultaneous equations would be--
a) 3

b) 6
c) 9
d) 12
12. The following details are available with regard to a data set: Σx = 33, Σx² = 199, n = 6. If each observation in the data set is multiplied by 2, then the standard deviation of the resulting values will be equal to:

a) √(35/3)
b) 35/3
c) 3
d) 25
13. The following data pertains to three commodities:-

Commodity    Price in 2004 (Rs./kg)    Price in 1994 (Rs./kg)
Rice         11.50                     9.50
Wheat        13.50                     8.50
Pulses       26                        20

The base year is 1994. The unweighted aggregate price index for the year 2004 is approximately (Σp2004 / Σp1994) × 100 = (51 / 38) × 100 ≈ 134.

14. If the regression equation is the perfect estimator of the dependent variable then which of the
following is false?
a) The standard error of estimate is zero

b) The coefficient of correlation is zero


c) The coefficient of determination is 1.00
d) All the data points fall on the regression line
15. If the regression equation is a perfect estimator of the dependent variable then which of the
following is false :a) The standard error of estimate is zero

b) The coefficient of correlation is zero


c) The coefficient of determination is 1.00
d) All the data points fall on the regression line
16. Which of the following represents the proportion of variation in the dependent variable that is
explained by the regression line :-

a) Coefficient of determination
b) Coefficient of correlation
c) Coefficient of variation
d) Standard error of estimate
17. If the coefficient of correlation between the two variables lies between -1 and 0, then the
covariance between them is-a) Positive

b) Negative
c) Zero
d) Equal in magnitude to the variances of both the variables
18. If bYX is the slope coefficient of the regression line of Y on X, and bXY is the slope coefficient of the regression line of X on Y, then which of the following is true :-
a) bYX is positive implies that bXY is positive

b) bYX is positive implies that bXY is negative


c) bYX and bXY are reciprocals
d) The product of bYX and bXY is zero
19. A graphical method of representing states of nature and courses of action involved in decision
making is referred to as--

a) Decision tree
b) Histogram
c) Scatter diagram
d) Frequency distribution
20. If the probability of occurrence of one event is not affected by the occurrence of another event and vice versa, then the two events are said to be--
a) Collectively exhaustive

b) Independent
c) Dependent
d) Mutually exclusive

21. Bayes' theorem helps the statistician to calculate--
a) Dispersion
b) Subjective probability

c) Posterior probability
d) Classical probability
22. In a binomial distribution, the probability of getting zero or more successes is equal to--
a) 0

b) 1
c) The probability of getting zero success
d) The probability of getting successes in all trials
23. Which of the following measures represents the scatter of the values in a data set :-
a) Arithmetic mean
b) Geometric mean

c) Standard deviation
d) Median
24. As the sample size increases--
a) The variation of the sample mean from the population mean becomes larger

b) The variation of the sample mean from the population mean becomes
smaller
c) The variance of the sample becomes less than the variance of the population
d) The standard deviation of the sample becomes more than the standard deviation of
the population.
25. In the graphical method of solving linear programming problems, if there is a unique optimal solution, then the optimal solution--
a) Is always found at the center of the feasible region
b) Is always at the origin
c) Lies outside the feasible region

d) Is located at one of the corner points of the feasible region


26. A multiple regression equation has--
a) Multiple dependent variables
b) One independent variable

c) One dependent variable


d) A standard error of estimate equal to zero

27. Which of the following conditions indicates the existence of multiple optimal solutions when a linear programming problem is solved by the graphical method :-
a) One of the constraints is parallel to the horizontal axis
b) The objective function is parallel to the vertical axis

c) The objective function is parallel to one of the edges of the feasible


region which is in the direction of optimal movement of the objective
function
d) If two or more constraints are parallel to each other
28. Three persons enter a railway carriage in which 8 seats are available. In how many ways can they seat themselves?
a) 24
a) 24

b) 336
c) 40
d) 56

29. In which of the following is the simple harmonic mean appropriate:-
a) A set of ratios using the numerators of the ratio data as weights
b) A set of ratios using the denominators of the ratio data as weights

c) A set of ratios which have been calculated with the same numerators
d) A set of ratios which have been calculated with the same denominators

30. Which of the following statements is not true about standard deviation?
a) Combined standard deviation of two or more groups can be calculated

b) The sum of the squares of the deviations of items of any series from a
value other than the arithmetic mean would always be smaller
c) Standard deviation is independent of any change of origin
d) Standard deviation is dependent on the change of scale
31. Which of the following is/are true with respect to geometric mean :-
(I) Geometric mean cannot be calculated if any of the values in the set is zero.
(II) Geometric mean is appropriate for averaging ratios of change, for averages of proportions, etc.
(III) Geometric mean is considered the most suitable average for index numbers.
(i) Only (I) above
(ii) Only (II) above

(iii) All (I), (II) and (III) above
32. The probability of getting two heads from three tosses of a fair coin is--
a) 1/8
b) 1/4

c) 3/8
d) 1/2
33. If A and B are two mutually exclusive events and P(A) = 2/3, then the probability of events A
and B happening together is--

a) 0
b) 1/3
c) 2/3
d) 1/2
34. Which of the following can be directly used as the test statistic in hypothesis tests on the basis of a non-standardized scale :-
(a) The sample mean, when the test involves the population mean.
(b) The difference between two sample means, when the tests involve the difference between two population means.
(c) The sample proportion, when the test is about the population proportion.
(i) Only (a) above
(ii) Only (b) above
(iii) Only (c) above

(iv) All (a), (b), (c) above


35. A box contains 60 ball point pens, out of which 10 are defective. 8 pens are randomly picked from the box. The probability distribution of the number of defective pens among those picked will be--
a) A discrete uniform distribution

b) A binomial distribution
c) A hypergeometric distribution
d) A Chi- square distribution
36. If we consider the process of selecting a simple random sample as an experiment then which
of the following can be treated as random variable(s)?
(a) Sample mean
(b) Sample standard deviation
(c) Sample range
(d) Sample median
(i) Only (a) above
(ii) Only (b) above

(iii) All (a), (b), (c), (d) above


(iv) Only (d) above
37. The covariance of a random variable with itself is always--

a) A positive quantity
b) A negative quantity
c) 0
d) Less than its expected value
38. A man has 6 friends. In how many ways can he invite one or more of them to a party?

a) 63
b) 64
c) 119
d) 120
39. Find x, if log x / log 2 = log 36 / log 4
a) 0
b) 2
c) 4

d) 6
40. The empirical relationship between range (R) and mean deviation (M.D.) is--
a) 2R = 15 M.D.

b) 3R = 17 M.D.
c) R = 17 M.D.
d) 3R = M.D.
