

ISOM 201 - DATA AND DECISION ANALYSIS



CLASS NOTES AND SUPPLEMENT TO TEXT

Prof. Stephen McDonald

2011

Table of Contents


Introduction to the Course
Chapter 1 - Introduction to Quantitative Analysis
Chapter 2 - Probability and Statistics
Chapter 3 - Decision Analysis
Chapter 4 - Linear Regression
Chapter 5 - Forecasting
Chapter 7 - Linear Programming
Chapter 8 - Linear Programming Applications
Chapter 15 - Simulation
Appendix A - Activating the Excel Add-ins

Introduction to the course

ISOM201 falls squarely in the middle of a critical sequence of courses which every Sawyer
Business School student must successfully complete. The sequence is:

Math 130 (Math 134 for accounting or finance majors)
ISOM120
Stats250 (Stats 240 is also acceptable)
ISOM 201
ISOM 319
SIB429 (formerly MGT429)

The sequence is intended to build a strong quantitative, analytical and strategic skill set in all our
students.

ISOM201 may be one of the first courses in your program of study that requires recall of prior
course material and synthesis of that material into business problem solving. The three critical
areas of prior coursework are:
1. Fundamental math and algebra skills
2. Proficiency with Microsoft Excel
3. The ability to recall and apply major concepts from statistics
Students who have traditionally had trouble in these areas or who may have earned good
grades in these courses without actually retaining much of the material presented will struggle
in this course. It is important to understand that while this course uses these important skills, it
cannot re-teach them. The emphasis of ISOM201 is to apply these skills to business problems.

Students with a strong foundation in the prior coursework will be able to learn how to assess
business problems in a logical and orderly fashion, develop quantitative models to examine
alternative solutions and begin to think more critically about the different alternatives business
decision makers face.

Doing well in this course may require a greater time commitment than you have become
accustomed to in your previous studies. There are very few shortcuts to learning this material.
For most students, success in this course is a direct reflection of the time that they are willing to
commit to the course and their persistence in overcoming their mistakes and frustrations.
Mastering the material demands repetition of many different problems to build comfort, speed
and accuracy.

Using the Class Notes

The following class notes are a supplement to the required text, not a substitute. They are a
dynamic work in progress and change every semester. Students should use these in conjunction
with the text, classroom lectures and homework problems to create their own individual
approach to mastering the material.


Chapter 1 Introduction to Quantitative Analysis

Chapter 1 provides a broad overview of Quantitative Analysis. The concepts covered in this
chapter will be used throughout the course. All of the specific applications of quantitative
analysis we study this semester will involve:

- Application of the quantitative analysis approach to problem solving
- Model building, solution and evaluation
- Computer solutions
- Business use of the models and techniques
- Understanding of the specific limitations, strengths and weaknesses of each modeling
technique

One of the more important concepts we will apply throughout this course is the quantitative
analysis approach, described in section 1.3 of your text. We've all seen different versions of this
logical, systematic and consistent process as far back as the very first science courses we ever
took. The principles here are exactly the same as the principles we learned then.

Developing your ability to recognize and solve business problems in a
structured, logical manner will increase your value in any field you
choose.

Prior to applying the steps outlined in this section, the analysis of any business problem begins
with Observation. This requires much more than casual looking. It demands a level of
concentration and scrutiny that is the product of a trained mind. Observation of a business
system, process or problem is critical to the identification and anticipation of problems.

Chapter 1 is a very light warm-up and review of simple quantitative analysis concepts and
techniques. This material should not be very challenging. If you find that you are having trouble
understanding the concepts of break-even, contribution margin or sensitivity analysis, you might
want to go back to earlier coursework and refresh your memory. I know you've done this
before!

If the algebra is troubling or your Excel skills are weak, now is the time to
get to work. Do MORE problems than I have assigned and solve every
problem with algebra AND Excel. There is no magic to these skills. They
are acquired and improved through repetition. You may also want to
consider contacting the Ballotti Learning Center now to sign up for study
groups or tutoring services.


1.3 The quantitative analysis approach

1) Defining the Problem. This step follows from our observation of the business situation.
The definition of a problem should tell managers what objectives of the firm are not
being achieved. Problem Definition should not extend into speculation regarding the
causes of the problem or potential solutions.

2) Developing a Model. Model Building is the technique that we will focus on in this
course. Models are used to represent existing problems or situations. A well-
constructed model will permit us to see the relationships that exist among the variables
in the problem.

3) Acquiring Input Data. The accuracy and quality of the data we collect is critical to the
success of the model. The techniques learned in statistics will serve us well in collecting
fair, random, representative and objective data.

4) Developing a solution. Once the appropriate model is constructed, the model must be
manipulated in order to find the optimal or best possible solution to the problem. Often
this will involve some trial and error, or other techniques which MS Excel can greatly
simplify.

5) Testing the solution. Finding a single solution is not the end, but a new beginning.
After a working model has been solved, it needs to be tested to make sure that the
goals of the model are consistently applied across a broad array of input data and
results. The process of analyzing the results for different input data requires both a
thorough knowledge of the operating mechanics of the model and a strong level of
common sense to assure that the model is correctly constructed.

6) Analyzing the results. The solution to the model suggests certain management actions
and the implication of those actions must be fully understood before implementing the
solution. Additionally, since the solution is often based on a single set of data, the
behavior of the model and the actions suggested for management must be evaluated
for differing input data. The process of analyzing the results as the input data is
changed, or as business conditions change, is called Sensitivity Analysis.

7) Implementing the results. The final step, and the only real proving ground, is
implementation. This can be much more difficult than an organization might imagine,
as it brings in all the subjective, organizational and emotional issues that managers must
face when implementing change in an organization. The ability to skillfully develop and
implement positive changes in organizations requires excellent quantitative and people
skills and is a critical skill that separates top quality managers and executives from the
middle of the pack.



1.4 Model Development

Many of the mathematical models we construct will be solved in three ways:
1. Graphically (using handwritten visual estimates or MS Excel)
2. Algebraically (using formulas provided in the text)
3. With MS Excel
a. Building our own spreadsheets
b. Using MS Excel Add-ins such as Data Analysis Toolpak and Solver
c. Using MS Excel Graphs and their advanced options

Our models will be generally categorized by whether they involve risk or chance. Models that
do not contain an element of risk are known as deterministic models. These rely on the
assumption that the values in the model are known with certainty. If risk is present, we have a
probabilistic model and we will generally use probability to represent the likelihood that
different events may occur in our model.


Cost/Volume/Profit and Break-Even Analysis

The first model presented in the text is Break-Even Analysis. I consider this the Fundamental
Equation of Business because it forces us to focus on profit and the three basic variables that
cause a firm to earn profit or incur losses.

Profit = Revenue - Cost

Now let's add some algebra to make the model a little more useful:

Revenue (R) is defined as the selling price per unit (s) multiplied by the number of units sold (X).
R = sX

Cost (C) is comprised of fixed costs (f) and variable costs (v). Fixed costs remain constant and
are independent of the volume of sales or production. Variable costs depend on the number of
items produced (X). C = f + vX

Profit (P) is determined by: P = sX - (f + vX). Simplifying with a little more algebra, we derive:

P = sX - (f + vX)  →  P = sX - f - vX  →  P = X(s - v) - f

This formula identifies two key concepts:

1. Contribution Margin per unit = (s - v)

2. Break-even volume (solve for X with P = 0): X = f / (s - v)

Break even Example - Problem 1-15

Given: Variable cost is $20 per unit.
Selling price is $50 per unit.
Fixed cost is $150.

Find: Number of units required to break even

Long method:

P = sX - (f + vX)
0 = 50X - (150 + 20X)
0 = 50X - 150 - 20X
0 = 30X - 150
30X = 150
X = 5

Shorter:

X = f / (s - v) = 150 / (50 - 20) = 150 / 30 = 5

5 units must be sold to break even. At that sales level, the income
statement is:

Sales             $250
Variable costs     100
Fixed costs        150
Total costs        250
Net Income        $  0
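If you want to check this kind of calculation with a few lines of code, here is a minimal Python sketch (the function and variable names are mine, not from the text):

```python
def break_even_units(fixed_cost, price, variable_cost):
    """Units required for profit = 0, i.e. X = f / (s - v)."""
    return fixed_cost / (price - variable_cost)

# Problem 1-15: f = $150, s = $50 per unit, v = $20 per unit
print(break_even_units(150, 50, 20))  # 5.0 units
```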
Chapter 2 Probability Concepts and Applications

Disclaimer: It is impossible to overemphasize how incomplete this review is, how
important a firm command of statistics is for this course and for your future
career and how strongly I believe you should find a good resource to make sure
your retention of this material is fairly complete. In addition, certain liberties have
been taken with proper statistical procedures and the precise interpretation of
statistical tests in order to illustrate the application of the core concepts of the
tests to this course. While these liberties may be acceptable in general
management practice, they do not constitute a complete or completely correct
presentation of the concepts.

What follows is the bare minimum necessary for survival in ISOM201 and nothing
more.

Chapter 2 in our text presents a good review of many of the key concepts from your statistics
course. If other aspects of your statistics course need some refreshing, I recommend the
following book:
Schaum's Easy Outlines
Business Statistics
Leonard J. Kazmier, Ph. D.
ISBN 0-07-139876-7

I generally carry this around with me and have found it to be a quick and effective guide to the
essential statistical concepts that all business professionals should know.

Topics:
- Basic Probability Rules
- Expected value of a Discrete Random Variable
- Conditional, Total and Bayesian Probabilities
- The Normal Distribution
- Confidence Interval Estimation
- Hypothesis Testing (including ANOVA)



Basic Probability Rules for your review:

All probability values must be between 0.00 and 1.00, also expressed as 0% to 100%.

The sum of the probabilities for all possible outcomes of an experiment equals 1.00.

Two events are mutually exclusive if only one of the events can occur on any one trial.

An outcome set is defined as collectively exhaustive if the list of outcomes includes every
possible outcome.

The Rule of Complements - The probability of event A happening is equal to 1 minus the
probability of event A not happening.
P(A) = 1 - P(A')


The Union Rule - The probability that event A or event B or both event A and event B will
happen is equal to the probability that event A will happen, plus the probability that event B will
happen, minus the probability that both event A and event B will happen.

P(A ∪ B) = P(A or B) = P(A) + P(B) - P(A and B)


The Intersection Rule for Independent Events - If events A and B are independent, then the
probability of both event A and Event B happening is equal to the product of the probability of
event A and event B.

P(A ∩ B) = P(AB) = P(A and B) = P(A) * P(B)




Expected Value of a Discrete Random Variable

A probability distribution is a set of possible events (Xi) that comprises all possible outcomes. The sum
of the probabilities in a probability distribution must be 1.00 and the expected value of the distribution
is equal to:

E(X) = Σ [Xi * P(Xi)]  (summed over i = 1 to n)

Example: Demand for white latex paint at Diversity Paint and Supply has always been between 0 and 4
gallons per day, with the probability of each outcome as follows:

Gallons Probability
0 0.20
1 0.40
2 0.25
3 0.10
4 0.05





E(X) = Σ Xi * P(Xi)
     = 0*(.20) + 1*(.40) + 2*(.25) + 3*(.10) + 4*(.05)
     = 0 + .4 + .5 + .3 + .2
E(X) = 1.4

This calculation can easily be done in Excel using the =SUMPRODUCT function.
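The same SUMPRODUCT logic is a one-liner outside Excel as well. A minimal Python sketch (variable names are mine):

```python
gallons = [0, 1, 2, 3, 4]
probs   = [0.20, 0.40, 0.25, 0.10, 0.05]

# Expected value: sum of value * probability, like Excel's =SUMPRODUCT
expected_demand = sum(x * p for x, p in zip(gallons, probs))
print(expected_demand)  # 1.4
```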
Conditional Probability

Conditional probabilities are important to decision trees because these decision making
situations often have a sequence of probabilistic events in which the posterior (later)
probabilities change depending on the prior (earlier) outcomes.
Conditional Probability Rule -

P(A|B) = P(AB) / P(B)    or    P(AB) = P(A|B) * P(B)

The first formula above should be read as, "The probability that event A will happen, given that
event B has happened, is equal to the probability that both events A and B will happen divided
by the probability that event B will happen."

Example: A consulting firm is bidding for two projects. The company executives estimate that the
probability of winning the bid for project A is 0.45. The executives also believe that if the company wins
the bid for project A, then there is a 0.90 probability that they will also win the bid for project B. What is
the probability the company will win both projects?

Given: Probability of A = P(A)=0.45
Probability of B, given A = P(B|A)=0.90

Equation: P(AB) = P(B|A)*P(A)
P(AB) = 0.90 * 0.45
P(AB) = 0.405

Answer: There is a 0.405 chance the company will win both projects.



The Law of Total Probability

This law is an extension of the conditional probability rule and is used to calculate the total probability of
an event when only conditional probabilities are known:

P(A) = P(A|B) * P(B) + P(A|B') * P(B')


This should be read as, The probability of event A is equal to the Probability of A given B, times the
probability of B, plus the probability of A given not B, times the probability of not B.

Example:

A company is considering the opportunity to conduct product test marketing before deciding to
introduce a new product. Based on historical results from similar product introductions, the company
estimates that there is a 60% probability that the product will be successful. An analysis of prior test
marketing result had also indicated that 80% of all successful products had positive test marketing
results and that 50% of all unsuccessful products also had positive test market results. Using this
information, the company wants to know the probability that the test marketing results will be positive.

Probabilities:
Test marketing results = Event A
Product introduction success = Event B


We want to know the probability that event A will happen, and all we are given are the following facts:

P(B) = .6    P(B') = .4
P(A|B) = .8
P(A|B') = .5

Using these probabilities, we can calculate the probability of event A happening as follows:

P(A) = P(A|B) * P(B) + P(A|B') * P(B')
P(A) = (.8 * .6) + (.5 * .4)
P(A) = .48 + .20
P(A) = .68

Note that the probability of event A differs depending on the outcome of event B.
This tells us that these events are conditionally dependent.
Bayes' Theorem - The basic principle of Bayesian analysis is that additional information, if available,
can sometimes help a decision maker improve the marginal probabilities of the occurrence of an event.
The altered probabilities are referred to as revised probabilities.

In general, given two events, A and B, that are conditionally dependent, Bayes' rule can be written as:

P(A|B) = [P(B|A) * P(A)] / [P(B|A) * P(A) + P(B|A') * P(A')]


Bayes' Theorem allows us to flip the conditional probabilities and solve for
P(A|B) when we are given its opposite, P(B|A).

Let's consider two possible events, A and B. Event A represents the probability that market demand for
a product will be high. Without any market research, this is estimated to be 60%. Event B represents
the probability that market research will predict high demand. The market research firm has
provided us with their historical track record of market predictions and it shows that they predict high
demand 90% of the time, given that demand later turned out to actually be high. What this means is
that they are right 90% of the time when demand is high. Their track record also shows that they
predict high demand 20% of the time, given that demand later turned out to be low. This means that
they incorrectly issue high demand predictions 20% of the time. Notice that these are not
complements. They represent pieces of two different conditional probabilities.

Here's what we were given:

P(A) = .60    P(A') = .40
P(B|A) = .90
P(B|A') = .20


We would like to know the probability of having high demand for the product, given that we conducted
market research and it predicted high demand. Common sense would tell us that the probability of high
demand should be better for a product that already has a reliable market research prediction of high
demand than it would be for a product with no market research. We use Bayes' theorem to produce
the following:

P(A|B) = [P(B|A) * P(A)] / [P(B|A) * P(A) + P(B|A') * P(A')]
       = (.9 * .6) / [(.9 * .6) + (.2 * .4)]
       = .54 / (.54 + .08)
       = .54 / .62
       = .871


There is an 87.1% chance that demand will be high, given that the market research predicted that
demand would be high. This is substantially better than the base, 60% probability of high demand.
Contingency Tables

Contingency tables are very useful tools for solving Bayesian probability problems if you understand
how they work.

Lets look at the previous example using contingency tables.


Here's what we were given, including the complements:

P(A) = .60        P(A') = .40
P(B|A) = .90      P(B'|A) = .10
P(B|A') = .20     P(B'|A') = .80

From this, the Law of Conditional Probability permits the following calculations:

P(AB)   = P(B|A)   * P(A)  = .90 * .60 = .54
P(AB')  = P(B'|A)  * P(A)  = .10 * .60 = .06
P(A'B)  = P(B|A')  * P(A') = .20 * .40 = .08
P(A'B') = P(B'|A') * P(A') = .80 * .40 = .32


The contingency table would look like this:

               A         A'       Total
B            P(AB)     P(A'B)     P(B)
B'           P(AB')    P(A'B')    P(B')
Total        P(A)      P(A')      1.00

               A         A'       Total
B             0.54      0.08      0.62
B'            0.06      0.32      0.38
Total         0.60      0.40      1.00

From the table the following calculations can be made:

The total probability of B = P(B) = 0.54 + 0.08 = 0.62

Posterior (revised) probabilities:

P(A|B)  = 0.54 / 0.62 = 0.871      P(A'|B)  = 0.08 / 0.62 = 0.129
P(A|B') = 0.06 / 0.38 = 0.158      P(A'|B') = 0.32 / 0.38 = 0.842
The Normal Distribution
Many data sets tend to follow a normal, or relatively normal distribution. The classic normal
distribution is symmetrical (the mean = the median) and bell-shaped. The empirical rule can be
applied to this distribution and tells us about the probability of a data point being close to or far
from the mean. You should remember this picture:


The Empirical Rule:

- 68% of the population lies +/- 1 standard deviation from the mean.
- 95% of the population lies +/- 2 standard deviations from the mean.
- 99.7% of the population lies +/- 3 standard deviations from the mean.
Example 1: 100 students take a midterm exam. The results are normally distributed with a
mean of 75 and a standard deviation of 9. What range of exam scores should include 95% of
the students' scores?

- It can be inferred that 95% of the students scored between 57 and 93.
o 57 = 75 - 2*9
o 93 = 75 + 2*9
Example 2: What is the probability that a student scored between 65 and 85 on the exam?

Table 2.9, located on page 48 of the text, contains the areas under the curve for the standard
normal distribution. This table can be used to calculate the probabilities associated with the
following example:




Z = (X - μ) / σ

Z = (65 - 75) / 9 = -10/9 = -1.11    and    Z = (85 - 75) / 9 = 10/9 = 1.11

For Z = 1.11:   P(X < 85) = P(Z < 1.11) = .84375
For Z = -1.11:  P(X < 65) = 1 - P(Z < 1.11) = 1 - .84375 = .15625

P(65 < X < 85) = .84375 - .15625 = .6875
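If you would like to double-check a table lookup, Python's standard library can compute the same areas. Note that this sketch uses the exact normal CDF, so its result will differ slightly from the rounded table values used above:

```python
from statistics import NormalDist

exam = NormalDist(mu=75, sigma=9)

# P(65 < X < 85) = P(X < 85) - P(X < 65)
p = exam.cdf(85) - exam.cdf(65)
print(round(p, 4))  # about 0.7335 with the exact CDF
```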
Confidence Interval Estimation

Continuing with the data above, let's assume a random sample of 10 exams was selected to
get a rough feel for how students performed. The sample mean was 78 and the sample
standard deviation was 11.

Note that the sample mean and standard deviation do not match the population
mean and standard deviation. Sample statistics are only estimators of the true
population parameters. Oftentimes, we do not know the true population
parameters and must use sample statistics to infer the values of these
parameters.

How should we present our estimate of the true mean exam score based solely on the sample
statistics? We could use the sample mean and conclude that the population mean will be
exactly 78. Does this make sense?

It shouldn't make sense because we are making a sample statistic sound much too precise
when we should know it is only an estimator of the population mean.

A confidence interval is a much better way of presenting our estimate of the population mean. A
simple 95% confidence interval can be constructed in two steps:

1. Calculate the standard error of the estimate by dividing the standard deviation by
   the square root of the sample size. In this case: 11 / √10 = 3.48.

2. Build the confidence interval +/- 2 standard errors from the sample mean:

   78 +/- 2(3.48) = (71, 85)

It is reasonable to state that based on the sample data we are 95% confident the
population mean score will be between 71 and 85.

Does this range seem too wide to be useful? We often run into this problem when
working with small sample sizes. Let's see what happens to the range when we have a
larger sample size, but the same sample mean and standard deviation. With a sample
size of 50 instead of 10, we calculate the following standard error and confidence
interval:

Standard error = 11 / √50 = 1.56

78 +/- 2(1.56) = (74.9, 81.1)

Confidence intervals are important to our study of regression, as they permit us to make
sensible and logical statements about the potential value of the dependent variable as projected
by the regression equation.
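A rough-interval calculator in Python. This mirrors the simplified ±2-standard-error rule above, not a formal t-interval; the names are mine:

```python
import math

def rough_95_interval(sample_mean, sample_sd, n):
    """Simplified 95% CI: mean +/- 2 standard errors."""
    se = sample_sd / math.sqrt(n)
    return sample_mean - 2 * se, sample_mean + 2 * se

print(rough_95_interval(78, 11, 10))  # about (71.0, 85.0)
print(rough_95_interval(78, 11, 50))  # about (74.9, 81.1)
```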
Hypothesis Testing - (including ANOVA)

Let's see if we can simplify the often confusing topic of hypothesis testing. Here's the basic
process for all hypothesis tests:
- A statement is made that needs to be tested. This is generally the alternative
hypothesis (H1) because an overwhelming amount of evidence is required to support a
claim.
- The opposite, or all other conditions, then becomes the null hypothesis (H0). This
hypothesis will either be rejected, or fail to be rejected based on the sample evidence.
- We set the guidelines for the test:
o The significance level
5% = 95% confidence (most commonly used)
1% = 99% confidence (stricter test, harder to reject H0)
10% = 90% confidence (looser test, easier to reject H0)
o The distribution to be used for the test statistic
z = normal distribution
t = Student's t distribution
Also F, χ² (chi-square) and others
- Conduct the test
- Evaluate results

A simplified approach:

- For our purposes, always assume a 5% level of significance.
o We will not reject the null hypothesis unless there is less than a 5% probability
that it is true. Remember, the null hypothesis gets the benefit of the doubt. It
takes a lot of evidence to reject the null hypothesis.
- In lieu of evaluating the test statistic, which can be both confusing and tedious, we
should always convert the test statistic to its p-value.
o Fortunately, this is done for us by Excel in all the areas in which we need to
evaluate hypothesis tests.
- Always (in this class) evaluate the results as follows:
o If p-value < 0.05, reject the null hypothesis, accept the alternative as true.
o If p-value > 0.05, do not reject the null hypothesis, there is not enough evidence
to accept the alternative as true.
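The decision rule itself is one line of logic. A trivial sketch of the rule as stated above:

```python
def evaluate(p_value, alpha=0.05):
    """Reject H0 only when the p-value falls below the significance level."""
    if p_value < alpha:
        return "Reject H0: accept the alternative"
    return "Do not reject H0: not enough evidence"

print(evaluate(0.039))  # Reject H0 (e.g., the regression p-value later in these notes)
print(evaluate(0.315))  # Do not reject H0
```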




Chapter 3 Decision Analysis

Decision Analysis

Our objective in studying decision making is to bring a strong element of analysis and objectivity
to the decision making process. With a basic understanding of probability, an organized
approach to sorting through the issues a decision maker faces and a commitment to logical
analysis, we can construct models of decision making alternatives and use these models to
better analyze the potential outcomes and consequences of our decisions.

Components of Decision Making

The two basic elements of decision making are the decision alternatives, the choices we can
make, and the events (states of nature), that may occur in the future. It is important to
distinguish carefully between these two.

We choose what to do when we are faced with a decision.

Probability governs the occurrence of an event.


Six steps of Decision Making:
1. Define the problem Whats going on?
2. List the alternatives What choices (decisions) do I have?
3. Identify possible outcomes or events What might happen?
4. List the payoff for each end point Monetary Outcomes
5. Select an appropriate model The right tool for the job
6. Apply the model and analyze the results

We evaluate our decision alternatives, the monetary outcomes of our decisions and the possible
events by constructing a payoff table. A payoff table helps us organize all the different
outcomes possible and consider their relative desirability and/or probability.

Let's consider the following payoff table for a situation with three decision alternatives and four
possible events:

A manufacturer is trying to decide on the appropriate production level of its product for the
next month. It has three levels of production available and is trying to match production to
unknown market demand for the product. Historically, the company has been able to classify
demand into 4 categories and has forecast the following financial outcomes for each
combination of production level and market demand.





Event:                  Low         Moderate      Good         High
                       Demand        Demand      Demand       Demand
Decisions:
Minimum Production    $  7,000     $  8,000    $ 10,000     $ 10,000
Average Production    $(10,000)    $      0    $ 10,000     $ 30,000
Maximum Production    $(25,000)    $(10,000)   $ 15,000     $ 50,000

(3 decision alternatives and 4 events, or states of nature)

How should we decide what to do?

First, we must understand that our decision is the production level and that the market demand
is an event that we do not control.

Next, we must assess the amount of information available to us about the likelihood of each
event happening. This assessment leads us into one of three decision-making environments:
certainty, uncertainty and risk.

- Certainty - the decision maker knows which event will occur.

- Uncertainty - the decision maker does not know what will happen and does not know
enough to reliably assess the probability of occurrence of each event.

- Risk - the decision maker does not know what will happen, but does know enough to
reliably assess the probability of occurrence of each event.
Decision Making under Certainty

When the decision maker in this example knows which event will occur, it is easy to choose the
alternative that produces the best possible outcome.

- If Low Demand is known, then choose Minimum Production, resulting in $7,000 profit.

- If Moderate Demand is known, then choose Minimum Production, resulting in $8,000
profit.

- If Good demand is known, then choose Maximum Production, resulting in $15,000
profit.

- If High Demand is known, then choose Maximum Production, resulting in $50,000 profit.


Decision Making Under Uncertainty

The text presents 5 techniques for decision making without probabilities. Decision makers are
often faced with situations for which the probabilities of events occurring cannot be reasonably
estimated. In these situations, more subjective decision analysis should be applied to the payoff
table. Although the following techniques are often used by decision makers, they are rarely
presented as formally as we have here.

The Maximax Criterion

The optimist uses this criterion. The maximax decision maker searches for the single best
outcome and makes the decision that could achieve that outcome. In this case it would be
Maximum Production.

Note that this criterion ignores all lesser outcomes from that decision, even though they still
could occur.

The Maximin Criterion

Here's the pessimist's choice. Believing that the worst will happen, the maximin decision maker
locates the worst possible outcome for each decision:

Minimum Production $7,000
Average Production $(10,000)
Maximum Production $(25,000)

Then, the decision alternative with the best worst outcome is selected. Here, the Maximin
decision maker would select Minimum Production.

Note that this criterion ignores all possible better outcomes, even though they could occur.

The Criterion of Realism (Hurwicz)

This method introduces weighted average calculations. The decision maker using this criterion
must assign a value to their level of optimism. This value, called the coefficient of realism (α),
can range from 0 to 1, representing 0% to 100% optimism.

In this example, we will assume the decision maker is 65% optimistic.

The decision maker constructs a table showing only the best and worst outcomes for each
decision. All other potential outcomes other than the best and worst are ignored in this
method.

Event:                      Best         Worst       Weighted
                          Outcome       Outcome       Average
Coefficient of Optimism     0.65          0.35
Decisions:
Minimum Production        $ 10,000     $  7,000      $  8,950
Average Production        $ 30,000     $(10,000)     $ 16,000
Maximum Production        $ 50,000     $(25,000)     $ 23,750

The coefficient of optimism is applied to each best outcome and its complement (1 - .65) is
applied to each worst outcome, creating a weighted average outcome. The decision maker
chooses the decision alternative with the best weighted average outcome, in this case
Maximum Production.

The Equal Likelihood Criterion (LaPlace)

This method treats all of the events as if they have an equal chance of occurring, creating the
following payoff table:

Event:                  Low         Moderate      Good         High       Weighted
                       Demand        Demand      Demand       Demand      Average
Likelihood               0.25          0.25        0.25         0.25
Decisions:
Minimum Production    $  7,000     $  8,000    $ 10,000     $ 10,000     $  8,750
Average Production    $(10,000)    $      0    $ 10,000     $ 30,000     $  7,500
Maximum Production    $(25,000)    $(10,000)   $ 15,000     $ 50,000     $  7,500

Once again, the decision maker would find the highest weighted average outcome, selecting
Minimum Production based on this method.
The Minimax Regret Criterion

This method requires the construction of an opportunity loss or regret table. A regret table
begins with the best decision for each event state and measures the amount of monetary regret
from the best outcome to each of the other outcomes for that event state. The regret table for
this example is as follows:



Event:                  Low         Moderate      Good         High
                       Demand        Demand      Demand       Demand
Decisions:
Minimum Production    $      0     $      0    $  5,000     $ 40,000
Average Production    $ 17,000     $  8,000    $  5,000     $ 20,000
Maximum Production    $ 32,000     $ 18,000    $      0     $      0

This criterion next looks at the maximum regret for each decision:

Minimum production: $40,000
Average production: $20,000
Maximum production: $32,000

The decision maker using this method then selects the decision with the minimum of all the
maximum regrets, in this case Average Production.


Summary of preferred decisions using decision making without probability:

Criterion:                       Preferred decision:      Key outcome value:

Maximax criterion                Maximum production           $50,000
Maximin criterion                Minimum production           $ 7,000
Hurwicz criterion (α = .65)      Maximum production           $23,750
Equal likelihood criterion       Minimum production           $ 8,750
Minimax regret criterion         Average production           $20,000


Note that the key outcome values do not represent the expected result of the decision. They
are merely the value used by the decision maker to select the preferred decision using the
respective criterion.

The actual outcome of each decision would still be dependent upon the state of market demand
that occurred after the production decision was made.
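All five criteria are simple reductions over the payoff table. A compact Python sketch (the data structure and names are mine, not from the text):

```python
payoffs = {  # decision -> payoffs for (Low, Moderate, Good, High) demand
    "Minimum": [7_000, 8_000, 10_000, 10_000],
    "Average": [-10_000, 0, 10_000, 30_000],
    "Maximum": [-25_000, -10_000, 15_000, 50_000],
}

maximax = max(payoffs, key=lambda d: max(payoffs[d]))   # Maximum
maximin = max(payoffs, key=lambda d: min(payoffs[d]))   # Minimum

alpha = 0.65  # Hurwicz coefficient of realism
hurwicz = max(payoffs, key=lambda d: alpha * max(payoffs[d])
                                     + (1 - alpha) * min(payoffs[d]))      # Maximum

laplace = max(payoffs, key=lambda d: sum(payoffs[d]) / len(payoffs[d]))    # Minimum

# Regret: best payoff per event minus each payoff; then minimize the maximum regret
best = [max(col) for col in zip(*payoffs.values())]
minimax_regret = min(payoffs,
                     key=lambda d: max(b - p for b, p in zip(best, payoffs[d])))  # Average

print(maximax, maximin, hurwicz, laplace, minimax_regret)
```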
Decision Making Under Risk

Most of the problems we will work with have probabilities assigned for each event. In practice
this is often done to help construct a payoff table and evaluate decision alternatives. It is always
useful to question the probabilities assigned to events when you are outside the classroom. The
decision analysis model is only as good as the accuracy of the event probabilities being used in
the model. The key concept for these problems is the Expected Monetary Value for each
decision alternative. This is calculated exactly the same way we calculated the expected value
of a discrete probability distribution in chapter 2.

In our example problem, suppose we assigned the following event probabilities:
Low demand .15
Moderate demand .25
Good demand .40
High demand .20

The full payoff table would now look like this:

Event:                  Low         Moderate      Good         High       Expected
                       Demand        Demand      Demand       Demand       Value
Probability              0.15          0.25        0.40         0.20     (must sum to 1)
Decisions:
Minimum Production    $  7,000     $  8,000    $ 10,000     $ 10,000     $  9,050
Average Production    $(10,000)    $      0    $ 10,000     $ 30,000     $  8,500
Maximum Production    $(25,000)    $(10,000)   $ 15,000     $ 50,000     $  9,750

The calculation of the expected value for the Minimum Production alternative would be:

(7,000 * .15) + (8,000 * .25) + (10,000 * .40) + (10,000 * .20) = 9,050

Based on these event probabilities, we would prefer the Maximum Production alternative, as it
has the highest expected value ($9,750).

When working with probabilities, we also can consider the concept of regret, or opportunity
loss, and construct an opportunity loss table as follows:

Event:                  Low         Moderate      Good         High       Expected
                       Demand        Demand      Demand       Demand     Opp. Loss
Probability              0.15          0.25        0.40         0.20
Decisions:
Minimum Production    $      0     $      0    $  5,000     $ 40,000     $ 10,000
Average Production    $ 17,000     $  8,000    $  5,000     $ 20,000     $ 10,550
Maximum Production    $ 32,000     $ 18,000    $      0     $      0     $  9,300

Again, the preferred decision would be Maximum Production, as it has the lowest expected
opportunity loss ($9,300).
Expected Value of Perfect Information

Decision makers are often presented with an opportunity to acquire information to help them
make better decisions. In order to understand the value of the information, there must be a
cost/benefit analysis performed on the information. One technique for doing this is the
calculation of the Expected Value of Perfect Information (EVPI).

What decision would be chosen if the decision maker knew which event would occur with
absolute certainty?

Clearly, the best decision for each event could be chosen. In this case:

If the event ________ was known:    Choose:    Outcome:

Low demand Minimum production $ 7,000
Moderate demand Minimum production $ 8,000
Good demand Maximum production $15,000
High demand Maximum production $50,000

The Expected Value WITH Perfect information is the sum of the products of each of these
outcomes and their respective event probabilities:

Event Outcome Probability Product
Low demand $ 7,000 .15 $ 1,050
Moderate demand $ 8,000 .25 $ 2,000
Good demand $15,000 .40 $ 6,000
High demand $50,000 .20 $10,000

Expected Value WITH Perfect Information $19,050

If a decision maker could achieve an outcome of $19,050 WITH perfect knowledge of the future,
then the value of knowing the future would be the difference between the expected value WITH
knowledge of the future and the expected value WITHOUT knowledge of the future:

Expected Value WITH Perfect Information: $19,050
Expected Value WITHOUT Perfect information: $ 9,750

Expected Value OF Perfect Information (EVPI): $ 9,300
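EVPI in code is just one more weighted sum. A minimal self-contained sketch (names are mine):

```python
probs = [0.15, 0.25, 0.40, 0.20]
payoffs = {
    "Minimum": [7_000, 8_000, 10_000, 10_000],
    "Average": [-10_000, 0, 10_000, 30_000],
    "Maximum": [-25_000, -10_000, 15_000, 50_000],
}

# Best payoff for each event, as if that event were known in advance
best_per_event = [max(col) for col in zip(*payoffs.values())]   # [7000, 8000, 15000, 50000]
ev_with_pi = sum(p * b for p, b in zip(probs, best_per_event))  # 19,050

# Best EMV without perfect information (Maximum Production, 9,750)
best_emv = max(sum(p * x for p, x in zip(probs, row)) for row in payoffs.values())

print(ev_with_pi - best_emv)  # EVPI = 9,300
```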




Decision Trees

Decision trees display the time sequence of decisions and events and are important when the
decision making situation involves more than simply one decision followed by one event.

Decision trees are all constructed from a series of two symbols:

Decisions are represented by square decision nodes


States of Nature (Events) are represented by circular event nodes


Consider the example beginning on page 83 of the text. The decision facing Mr. Thompson is to
choose among the construction of a large plant, construction of a small plant, or to do nothing.

This is represented as:

[Decision tree: a single square decision node with three branches - Large, Small, Nothing]

After he makes a decision, an event will happen. In this case the event, which is beyond his
control, is either a favorable or unfavorable market. The tree is expanded to reflect this as
follows:

[Decision tree: the Large and Small branches each end in a circular event node with Favorable
and Unfavorable branches; the Nothing branch has no event node]
The next step is to assign probabilities to each event node. The probabilities must sum to 1 at
each node.

Next, all the payoffs must be recorded in the tree and the terminal outcome calculated for each
branch of the tree.

At this point, the tree is fully constructed, but it is not solved.


[Unsolved tree with payoffs at each endpoint, in $thousands: Large - 200 (favorable) /
-180 (unfavorable); Small - 100 (favorable) / -20 (unfavorable); Nothing - 0.
All event probabilities are .5]

Solving a decision tree requires two processes:

At each event node, calculate the EMV of the event. (SUMPRODUCT)
At each decision node, choose the best financial result.



[Solved tree, in $thousands: the Large event node has EMV 10 (from payoffs 200 / -180 at
probability .5 each); the Small event node has EMV 40 (from 100 / -20); Nothing is 0.
The decision node selects the best of the three: 40]

The EMV of constructing a large plant is $10,000 = (200,000 * .5) + (-180,000 * .5).
The EMV of constructing a small plant is $40,000 = (100,000 * .5) + (-20,000 * .5).

The highest value is associated with the decision to construct a small plant. This is the best
decision.
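Folding back a tree of this shape can be expressed recursively: event nodes take probability-weighted averages, decision nodes take the maximum. A minimal sketch in Python (the node representation is mine, not from the text):

```python
def fold_back(node):
    """EMV of a tree node: ('decision', {...}) picks the best branch,
    ('event', [(prob, child), ...]) takes the weighted average; numbers are payoffs."""
    if isinstance(node, (int, float)):
        return node
    kind, branches = node
    if kind == "decision":
        return max(fold_back(child) for child in branches.values())
    return sum(p * fold_back(child) for p, child in branches)

tree = ("decision", {
    "Large":   ("event", [(0.5, 200_000), (0.5, -180_000)]),
    "Small":   ("event", [(0.5, 100_000), (0.5, -20_000)]),
    "Nothing": 0,
})
print(fold_back(tree))  # 40000.0 -> build the small plant
```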
Note that, at this point, a payoff table could be used as easily as a decision tree for this problem.
Decision trees become mandatory when a sequence of decisions and events develops as
illustrated in the second part of the problem.

The opportunity to conduct a market survey creates a new branch off the initial decision node
and expands the tree significantly.


[Expanded decision tree: the initial decision node gains a fourth branch, Conduct Survey,
with event branches Survey Favorable (.45) and Survey Unfavorable (.55). Each survey outcome
leads to the same Large / Small / Nothing decision, with the $10,000 survey cost deducted
from every endpoint payoff (190, -190, 90, -30 and -10, in $thousands). After a favorable
survey the market probabilities are Favorable .78 / Unfavorable .22, giving EMVs of 106.4
(Large) and 63.6 (Small); after an unfavorable survey they are Favorable .27 / Unfavorable
.73, giving EMVs of -87.4 (Large) and 2.4 (Small). The survey branch therefore has an EMV
of 49.2, in $thousands]
Reviewing this expanded tree, there are several important points to consider.

The outcome of the survey is an event. Mr. Thompson does not control it.
After receiving the report, Mr. Thompson must make a decision. The alternatives available
to Mr. Thompson are the same (Large, Small, Nothing) but the probabilities are different
depending on a favorable or unfavorable report.
The cost of the survey is subtracted from each branch and is reflected in the final payoff at
each endpoint.
The tree now has three separate decision points. The answer to the problem must address
Mr. Thompson's best alternative at each point.

The complete answer to this problem is,

Mr. Thompson should decide to conduct the survey. If it is favorable, he should build a large
plant. If it is unfavorable, he should build a small plant. The expected value of this decision is
$49,200.

Additionally, the maximum amount Mr. Thompson should be willing to pay for the survey is
$19,200 ($10,000 + ($49,200 - $40,000)).











Chapter 4 Linear Regression

Linear Regression

Objective: Linear regression is a tool used to analyze the relationship between multiple
variables when we suspect that one of the variables is dependent upon a single or
multiple independent variables.

The Simple Linear Regression Model

Predicting one thing (Y) from another (X).

The linear relationship is defined by the population intercept (b0) and the population
slope (b1).

The form of the equation is: Y = intercept + (slope)(X), or Y = b0 + b1X.

Y is the dependent variable, the variable we wish to explain or predict.

X is the independent variable or the predictor variable.

The model has the following assumptions: The relationship between X and Y is a
straight-line relationship. The values of X are fixed, not random.

What we want to know

1. What is the best possible line (model) for the data?
2. Is there a statistically significant relationship between the dependent variable and any
of the independent variables in the model?
3. How much of the variability in the dependent variable is explained by the model?
4. Which independent variable (in a multiple regression) has a stronger or weaker
relationship with the dependent variable?
5. How accurate will the model be as an estimator or predictor of the dependent
variable?

What we want to do

1. Calculate point estimates for the predicted value of the dependent variable (Y) given
value(s) for the independent variable(s) (X).
2. Construct a rough 95% interval for the predicted value of the dependent variable (Y)
given value(s) for the independent variable(s) (X).
3. Understand the predicted change in the dependent variable that would result from a
change to any of the independent variable values.




What is the best possible line (model) for the data?

The example beginning on page 118 of your text provides the following data regarding
company sales for Triple A Construction and total local area payroll (a general indicator
of the region's economy).
Local Payroll            Sales
($100,000,000's)      ($100,000's)
       3                   6
       4                   8
       6                   9
       4                   5
       2                   4.5
       5                   9.5

Because we believe that the local payroll affects sales, we will use payroll as the
independent variable (X) and sales as the dependent variable (Y).

Approach #1 An Excel scatter plot with a trend line, equation and coefficient of
determination.

Excel 2007 Instructions:
Simple Linear Regression Charts
1. Highlight the data, making sure X is in the first column and Y is in the second.

2. Select Insert and Scatter.


3. Select Scatter with Only Markers.


4. Right click on any data point and select Add Trendline. Left click a check mark in Display
Equation on chart and Display R-squared value on chart.


5. The following is produced.

6. Clean up and format to improve presentation.

[Scatter plot titled "Triple A Construction": Sales on the y-axis vs. Local Payroll on the
x-axis, with a fitted trend line displaying its equation, y = 1.25x + 2, and R² = 0.6944]
Approach #2 - Algebraic Solutions

The least squares method is used to calculate the estimated regression line. By
minimizing the sum of the squared differences between each data point and the
predicted Y for that point, we are applying the same concepts used in computing
variance to create a line that best fits the data.


Least squares = Best fit


We will now solve Triple A using the algebraic method using the formulas in the text.




The correct regression equation is Y = 2 + 1.25X and the coefficient of determination is
.6944.

This approach can be used for both simple linear regression and multiple linear
regression. However, the calculations for a multiple linear regression are beyond the
scope of this course and have been made less important because of the ease with which
Excel and other tools can conduct this analysis.


Triple A Construction
X = Local Payroll ($100,000,000's) (Independent Variable)
Y = Triple A's Sales ($100,000's) (Dependent Variable)

 X    Y    (X-X̄)  (X-X̄)²  (Y-Ȳ)  (X-X̄)(Y-Ȳ)  (Y-Ȳ)²    Ŷ      (Y-Ŷ)²   (Ŷ-Ȳ)²
 3    6     -1       1      -1        1         1      5.75    0.0625   1.5625
 4    8      0       0       1        0         1      7.00    1        0
 6    9      2       4       2        4         4      9.50    0.25     6.25
 4    5      0       0      -2        0         4      7.00    4        0
 2    4.5   -2       4      -2.5      5         6.25   4.50    0        6.25
 5    9.5    1       1       2.5      2.5       6.25   8.25    1.5625   1.5625
X̄=4  Ȳ=7            10               12.5      22.5           6.875   15.625
                  (slope           (slope      (SST)          (SSE)    (SSR)
               denominator)      numerator)

Slope (b1) = 12.5 / 10 = 1.25
Intercept (b0) = Ȳ - b1(X̄) = 7 - 1.25(4) = 2.00
Determination (r²) = SSR / SST = 15.625 / 22.5 = 0.6944
Approach #3 Regression using Excel

Linear Regression on MS Excel 2007
1. Select Data

2. Select Data Analysis. If this does not appear, see the instructions for installing the data
analysis add-in.


3. Select Regression from alphabetical list and click on OK.


4. Complete the dialog box as shown.

5. Select OK and the following output will be produced. Format the column widths,
number of decimal places and other aspects to make the presentation clear and
professional.

SUMMARY OUTPUT

Regression Statistics
Multiple R          0.833333
R Square            0.694444
Adjusted R Square   0.618056
Standard Error      1.311011
Observations        6

ANOVA
             df        SS        MS          F       Significance F
Regression    1     15.625    15.625    9.090909       0.039352
Residual      4      6.875    1.71875
Total         5     22.5

            Coefficients  Standard Error   t Stat     P-value    Lower 95%   Upper 95%
Intercept        2           1.742544     1.147747    0.31505    -2.83808    6.838077
X                1.25        0.414578     3.015113    0.039352    0.098947   2.401053

(The Output Range can be either the first cell that should contain the output in the
current worksheet or a new worksheet.)
If you review the 3 approaches carefully, you will find that they all produce the same
results, including the slope, intercept and coefficient of determination values for the
model. However, only the Excel output above produces the additional statistical
information that will permit us to fully understand the regression.

Interpreting the Regression Output
Is there a statistically significant relationship between the dependent variable and
any of the independent variables in the model?

This is analyzed by looking at the Significance F value above. It is the p-value of the
hypothesis test conducted on the model to determine whether X and Y are related. The
null hypothesis in a regression is that there is NO statistical relationship between Y and
any X variable. The p-value represents the probability that this statement may be true.
In this case there is a .0394, or about 4%, probability that there is NO relationship
between Local Payroll and Triple A's Sales.

How much of the variability in the dependent variable is explained by the model?

The coefficient of determination represents the percentage of total variability in Y that is
explained by the regression equation. It is our best measure of the fit of the regression
model. Determination is written as r² and its value is between 0 and 1. A value of 0
indicates that there is no relationship between X and Y, and 1 indicates that the regression
equation PERFECTLY explains Y.

Which independent variable (in a multiple regression) has a stronger or weaker
relationship with the dependent variable?

This question is only important in a multiple linear regression containing more than 1
independent variable, but the principles can be applied here. The p-value associated
with the individual X variable, .0394 is once again the result of a hypothesis test. But
this test compares this single X to Y and again carries the null hypothesis that there is no
relationship between X and Y. Since this is a simple (1X) linear regression, this value is
the same as the Significance F. In a multiple regression, each X will have a different p-
value. P-values closer to 0 indicate a stronger relationship with Y and values closer to 1
indicate a weaker relationship with Y.

How accurate will the model be as an estimator or predictor of the dependent
variable?

The Standard Error of the estimate, 1.31, is a statistical measure of the deviation in
predicted values of Y. Using the principles learned in statistics, a 95% interval may be
developed for the predicted value of Y by calculating the value of Y, given X(s). This is
the point estimate and is the starting point for the interval. Since approximately 95% of
normally distributed population values should fall within 2 standard deviations of the
mean, we construct this rough interval as follows:

95% interval for predicted Y = Ŷ +/- (2)(standard error)
Calculate point estimates for the predicted value of the dependent variable (Y)
given value(s) for the independent variable(s) (X).


Point Estimate: A single value estimate of Y for a given X.

The equation for this example is Ŷ = 2 + 1.25X. Therefore, if we believe that Local
Payroll (X) next year will be $550,000,000, we would predict that Triple A's Sales for next
year would be:

Ŷ = 2 + 1.25(5.5) = 8.875, or $887,500
Construct a rough 95% interval for the predicted value of the dependent variable
(Y) given value(s) for the independent variable(s) (X).

Using the same value for X, the 95% interval for the predicted value of Y would be:

8.875 +/- (2)(1.31) = {6.255, 11.495}

If Local Payroll next year is $550,000,000, then Triple A's Sales should be between
$625,500 and $1,149,500.


Understand the predicted change in the dependent variable that would result from
a change to any of the independent variable values.

The slope, 1.25, indicates that the sales for Triple A will change by 1.25 for each 1 unit
change in local payroll. Each $100,000,000 increase in local payroll should result in a
$125,000 increase in Triple A's sales. However, this is once again only an estimate
derived from sample data. Accordingly, we would prefer to express this potential change
as an interval. The Lower 95% and Upper 95% values for X help us with this.

The lower 95% value is 0.10 and the upper 95% value is 2.40. The 95% interval for the
predicted CHANGE in Y that would result from a 1 unit change in X would be {0.10,
2.40}, or between $10,000 and $240,000.
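Both the point estimate and the rough interval follow directly from the fitted coefficients and the standard error. A small sketch (reusing the b0, b1 values computed earlier; names are mine):

```python
b0, b1 = 2.0, 1.25
std_error = 1.31  # standard error of the estimate from the regression output

def predict_with_interval(x):
    """Point estimate plus the rough 95% interval (+/- 2 standard errors)."""
    y_hat = b0 + b1 * x
    return y_hat, (y_hat - 2 * std_error, y_hat + 2 * std_error)

point, interval = predict_with_interval(5.5)   # payroll of $550,000,000
print(point, interval)  # 8.875 (6.255, 11.495) -> $625,500 to $1,149,500
```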


Chapter 5 Forecasting

Forecasting Part 1 A Basic Introduction

Why Forecast?

The future is uncertain and yet we must make decisions today. The results of our decisions are
affected by that uncertainty. We worked with these concepts in decision analysis and tried to
understand how to make better decisions by applying a logical evaluation of the possible
outcomes and their respective probabilities.

Forecasting is a technique that helps us develop possible future outcomes. Accurate and well-
designed forecasts can significantly improve the quality of our decision-making and lead to
improved results in all areas of business.

In essence, forecasting uses data from the past to help see potential future results in an
organized and logical framework.

If you are interested in seeing the role of forecasting outside the classroom, take a look at this
website http://www.forecastingprinciples.com.


Forecasting - Steps for Success

The emphasis on taking a step-by-step, logical approach continues to be a major aspect of our
course. Once again, the general guidelines of the quantitative analysis approach presented in
Chapter 1 have been slightly re-worked to fit the specific needs of forecasting while still
maintaining and emphasizing the benefits of a structured approach to building analytical
models.

1. How will the forecast be used?

2. What are we trying to forecast?

3. What is the time horizon for our forecast?

4. What forecasting model is most appropriate?

5. Gather data.

6. Test the validity of the model.

7. Create the forecast.

8. Implement the results.

9. Analyze and refine the forecast over time.

Basic Types of Forecasts

There are three basic families of forecasting models. Each group contains a set of modeling
methods that tend to be suitable for specific types of forecast and specific types of data. The
text has a useful chart describing the three basic types on page 151.

Time-Series Models are one of the most frequently used tools in business forecasting because
we are always interested in predicting the future based on historical data. Time-Series models
use time as the most critical factor in preparing forecasts by relying on the assumption that the
future will be at least partially a function of what happened in the past. The specific tools used in
time-series models range from very simple moving averages to weighted moving averages and
exponential smoothing. These models then advance to include the use of regression principles
for trend projection and very detailed decomposition techniques for complex data.

Causal Models rely upon the assumption that we can identify factors that cause or influence the
behavior of the data we wish to forecast. This is basically the same as the relationship between
the independent variables and the dependent variable that we recently worked with in
regression analysis. The causal model approaches we use in forecasting will be identical to the
work we did in regression.

Qualitative Models recognize that there are often forecasting situations that do not lend
themselves to a purely quantitative forecasting tool. These models often combine judgment
with some data that is not quite as quantitative as historical information. The specific
approaches that fall into this category often include the use of outside experts and survey data
from internal or external sources. These approaches are usually iterative, requiring several
rounds of discussion and refinement to produce a forecast that is useful.

Forecasting Techniques (no single method is superior):

- Time-Series Methods (include historical data over a time interval): Moving Average,
Exponential Smoothing, Trend Projections, Decomposition
- Causal Methods (include a variety of factors): Regression Analysis, Multiple Regression
- Qualitative Models (attempt to include subjective factors): Delphi Methods, Jury of
Executive Opinion, Sales Force Composite, Consumer Market Survey

Using Scatter Diagrams and Time Series
The most basic tool for examining time series data is a scatter diagram. This diagram is
constructed and used exactly as it was done for simple linear regression analysis. When
constructing a time series scatter diagram, the data we wish to forecast is identified as the
dependent variable and plotted on the y-axis and the time series is the independent variable
plotted along the x-axis.

To keep this simple, we will ignore the regression possibilities at this point and simply use our
eyes or the chart wizard to help fit a line to the data and then forecast that line into the future.

The data and scatter diagrams provided on pages 152 and 153 of the text demonstrate the
application of simple plots to time series data.



[Scatter diagrams of annual sales over time (years) for radios, televisions and compact
discs. Scatter diagrams are helpful when forecasting time-series data because they depict
the relationship between variables.]
Which forecast is likely to be more accurate?

With all of the different forecasting tools available to us, how can we begin to evaluate the
potential future accuracy of any method compared to another? Once again, we will use
historical data to consider the potential future reliability of the forecast. The basic technique is
called mean absolute deviation and requires that we compare each actual value to the value
that would have been forecast for that period using the chosen forecasting model. By averaging
(mean) the absolute values of the deviations between the forecast and actual amounts, we can
compare the relative accuracy of different forecasting models.

Let's look at this calculation and consider the forecast for sales of CD players using the data on
page 152. A scatter diagram of the data, including a forecasting line, looks like this:

[Line chart: CD player sales by year (years 1-10, sales roughly 100-200) with the fitted
forecasting line]

From this forecasting line we can estimate the forecast sales levels for the 10 years and compare
them to the actual sales levels. We can also compare the relative accuracy of this forecasting
method to the accuracy of the naïve forecast described on page 154 that uses the prior year's
sales level for the next year's forecast. The resulting calculations are as follows:



Year   Actual Sales   Naïve Forecast   Absolute     Scatter Diagram   Absolute
                                       Deviation       Forecast       Deviation
  1        110               -             -             104              6
  2        100              110           10             115             15
  3        120              100           20             126              6
  4        140              120           20             137              3
  5        170              140           30             148             22
  6        150              170           20             158              8
  7        160              150           10             169              9
  8        190              160           30             180             10
  9        200              190           10             191              9
 10        190              200           10             202             12

Total                                    160                            100
Count                                      9                             10
M.A.D.                                 17.78                          10.00

The lower M.A.D. value for the scatter diagram forecast indicates that this forecasting method is
more accurate than the naïve forecast for this data.
42
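If you want to double-check the table, the arithmetic is easy to replicate outside of Excel. Here
is a short Python sketch (my own illustration, not part of the course's Excel toolkit) that
recomputes both M.A.D. values from the data above:

# Actual CD player sales for years 1-10 and the scatter-diagram (trend line) forecasts
actual = [110, 100, 120, 140, 170, 150, 160, 190, 200, 190]
trend = [104, 115, 126, 137, 148, 158, 169, 180, 191, 202]

# Naive forecast: each year's forecast is the prior year's actual (starts in year 2)
naive_dev = [abs(actual[i] - actual[i - 1]) for i in range(1, len(actual))]
trend_dev = [abs(a - f) for a, f in zip(actual, trend)]

print(sum(naive_dev) / len(naive_dev))   # 17.78 after rounding (9 deviations)
print(sum(trend_dev) / len(trend_dev))   # 10.00 (10 deviations)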
Forecasting Part 2 - The Real Deal

Decomposition

Decomposition refers to the practice of analyzing historical data in a way that separates out the
four possible components of any data stream.

Data = Trend * Seasonality * Cyclicality * Random Events

We will focus on separating trend from seasonality, the two most frequent and important
aspects of decomposition. The specific forecasting tools we will use to accomplish this are the
deseasonalization of data using centered moving averages and seasonal indexes and the
application of multiple linear regression techniques to data that contains both seasonality and
trend.


Moving Averages / Weighted Moving Averages

Some of the more basic forecasting tools available are the moving average and weighted
moving average methods. These methods forecast from a selected number of prior periods and
may also assign a different weight to each period. The selection of both the number of periods
and the weighting of each period is judgmental and should be made with the goal of minimizing
the M.A.D. of the forecast. The example in the text calculates a three-month moving average
for Wallace Garden Supply as well as a three-month weighted moving average. The math
behind these calculations is illustrated on pages 158 and 159 and below:


Problem 5-12 - Wallace Garden Supply (weights 1.00, 2.00, 3.00; total = 6.00)

Month       Actual Shed   3-Month Moving   Absolute    3-Month Weighted   Absolute
            Sales         Average          Deviation   Moving Average     Deviation
January     10.00
February    12.00
March       13.00
April       16.00         11.67            4.33        12.17              3.83
May         19.00         13.67            5.33        14.33              4.67
June        23.00         16.00            7.00        17.00              6.00
July        26.00         19.33            6.67        20.50              5.50
August      30.00         22.67            7.33        23.83              6.17
September   28.00         26.33            1.67        27.50              0.50
October     18.00         28.00            10.00       28.33              10.33
November    16.00         25.33            9.33        23.33              7.33
December    14.00         20.67            6.67        18.67              4.67
M.A.D.                                     6.48                           5.44

Worked examples for April:
Moving average: 10 + 12 + 13 = 35; 35 / 3 = 11.67; absolute deviation = | 16 - 11.67 | = 4.33
Weighted moving average: [(10*1) + (12*2) + (13*3)] / 6 = 12.17
The M.A.D. for each method is the average of its Absolute Deviation column.
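The same calculations are easy to script. This Python sketch (illustrative only; the course
solutions are built in Excel) reproduces both M.A.D. values for the shed sales data:

# Monthly shed sales, January through December
sales = [10, 12, 13, 16, 19, 23, 26, 30, 28, 18, 16, 14]
weights = [1, 2, 3]          # oldest month weighted 1, most recent weighted 3

ma_dev, wma_dev = [], []
for i in range(3, len(sales)):
    window = sales[i - 3:i]                       # the three prior months
    ma = sum(window) / 3
    wma = sum(w * s for w, s in zip(weights, window)) / sum(weights)
    ma_dev.append(abs(sales[i] - ma))
    wma_dev.append(abs(sales[i] - wma))

print(sum(ma_dev) / len(ma_dev))     # 6.48
print(sum(wma_dev) / len(wma_dev))   # 5.44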
Exponential Smoothing

This forecasting method uses a smoothing constant to adjust each period's forecast for some
portion of the prior period's forecasting error. The choice of the appropriate smoothing
constant is judgmental and should be designed to minimize the M.A.D. of the forecast. The
basic formula, with C as the smoothing constant and A1 and F1 as the actual and forecast
values for period 1, is:

Forecast for period 2 = F2 = F1 + ((A1 - F1) * C)

The basic spreadsheet format for an exponential smoothing forecast will look like this:



Previous forecast error = Previous actual - Previous forecast
Current rounded forecast = Previous forecast + (Previous forecast error * smoothing constant)

Exponential Smoothing - Step by Step
Smoothing constant: 0.3
Year 1 forecast: 5,000

Year   Actual    Previous         Smoothing   Forecast     Current Forecast   Current Forecast   Absolute
       Demand    Forecast Error   Constant    Adjustment   (Rounded)          (Unrounded)        Deviation
1      4,000                                               5,000              5,000.00           1,000.00
2      6,000     -1,000           0.30        -300         4,700              4,700.00           1,300.00
3      4,000     1,300            0.30        390          5,090              5,090.00           1,090.00
4      5,000     -1,090           0.30        -327         4,763              4,763.00           237.00
5      10,000    237              0.30        71           4,834              4,834.10           5,165.90
6      8,000     5,166            0.30        1,550        6,384              6,383.87           1,616.13
7      7,000     1,616            0.30        485          6,869              6,868.71           131.29
8      9,000     131              0.30        39           6,908              6,908.10           2,091.90
9      12,000    2,092            0.30        628          7,536              7,535.67           4,464.33
10     14,000    4,464            0.30        1,339        8,875              8,874.97           5,125.03
11     15,000    5,125            0.30        1,538        10,412             10,412.48          4,587.52
                                                                              M.A.D.             2,437.19
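The recursive logic of the spreadsheet reduces to a very small loop. Here is a Python sketch
(again, just an illustration of the arithmetic, not the course's Excel build):

# Exponential smoothing with smoothing constant 0.3 and an initial forecast of 5,000
demand = [4000, 6000, 4000, 5000, 10000, 8000, 7000, 9000, 12000, 14000, 15000]
constant = 0.3
forecast = 5000.0
deviations = []
for actual in demand:
    deviations.append(abs(actual - forecast))
    # next period's forecast = this forecast plus a portion of this period's error
    forecast = forecast + constant * (actual - forecast)

print(sum(deviations) / len(deviations))   # 2,437.19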
Linear Trend Forecasting

This method brings us back to simple linear regression. The dependent variable, y, is the data
we wish to forecast and the independent variable, x, is time.

In general, it makes sense to label the time series starting with "1" even if the series is
presented in real years (2002, 2003, and so on).

Using the data from Wallace Garden Supply (5-14), a linear trend forecast would look like this:

Year   Actual Demand
1      4,000
2      6,000
3      4,000
4      5,000
5      10,000
6      8,000
7      7,000
8      9,000
9      12,000
10     14,000
11     15,000

[Figure: Wallace Garden Supply demand plotted by year with a fitted trend line:
y = 1054.5x + 2218.2, R-squared = 0.8225.]

The accuracy of this forecast can be measured by using the regression equation (slope 1,054.50;
intercept 2,218.20) to compute predicted demand for each year and computing M.A.D. as in
previous examples.

Year   Actual Demand   Forecast Demand   Absolute Deviation
1      4,000           3,272.70          727.30
2      6,000           4,327.20          1,672.80
3      4,000           5,381.70          1,381.70
4      5,000           6,436.20          1,436.20
5      10,000          7,490.70          2,509.30
6      8,000           8,545.20          545.20
7      7,000           9,599.70          2,599.70
8      9,000           10,654.20         1,654.20
9      12,000          11,708.70         291.30
10     14,000          12,763.20         1,236.80
11     15,000          13,817.70         1,182.30
                       M.A.D.            1,385.16
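The slope and intercept that Excel's chart produces come from ordinary least squares, which you
can compute directly. A minimal Python sketch (illustrative only) using the textbook formulas:

# Least-squares slope and intercept for the Wallace Garden Supply trend line
years = list(range(1, 12))
demand = [4000, 6000, 4000, 5000, 10000, 8000, 7000, 9000, 12000, 14000, 15000]

n = len(years)
mean_x = sum(years) / n
mean_y = sum(demand) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, demand)) \
        / sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x
print(slope, intercept)    # about 1,054.55 and 2,218.18

mad = sum(abs(y - (slope * x + intercept)) for x, y in zip(years, demand)) / n
print(mad)                 # about 1,385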
Forecasting Basics Wrap-up:

Let's examine the relative accuracy of the three main methods we have used thus far. We will
continue to use the Wallace Garden Supply data from problem 5-14, and we will compare the
accuracy of a three-year weighted moving average forecast, an exponential smoothing forecast
and a linear trend forecast. For simplicity, we will use weights of 1, 2, and 3 and a smoothing
constant of 0.5, as in the examples on pages 159-162.



Wallace Garden Supply (weights 1, 2, 3; smoothing constant 0.5; slope 1,054.55; intercept 2,218.18)

Year    Actual    3-Year Weighted   Absolute    Exponential   Absolute    Linear    Absolute
        Demand    Moving Average    Deviation   Smoothing     Deviation   Trend     Deviation
1       4,000                                   5,000         1,000       3,273     727
2       6,000                                   4,500         1,500       4,327     1,673
3       4,000                                   5,250         1,250       5,382     1,382
4       5,000     4,667             333         4,625         375         6,436     1,436
5       10,000    4,833             5,167       4,813         5,188       7,491     2,509
6       8,000     7,333             667         7,406         594         8,545     545
7       7,000     8,167             1,167       7,703         703         9,600     2,600
8       9,000     7,833             1,167       7,352         1,648       10,655    1,655
9       12,000    8,167             3,833       8,176         3,824       11,709    291
10      14,000    10,167            3,833       10,088        3,912       12,764    1,236
11      15,000    12,500            2,500       12,044        2,956       13,818    1,182
M.A.D.                              2,333                     2,086                 1,385

We can improve upon this by adjusting the weighting and the smoothing constant to optimize
the accuracy of those forecasts. Doing so produces the following:

Wallace Garden Supply (weights 1, 1, 4; smoothing constant 0.7; slope 1,054.55; intercept 2,218.18)

Year    Actual    3-Year Weighted   Absolute    Exponential   Absolute    Linear    Absolute
        Demand    Moving Average    Deviation   Smoothing     Deviation   Trend     Deviation
1       4,000                                   5,000         1,000       3,273     727
2       6,000                                   4,300         1,700       4,327     1,673
3       4,000                                   5,490         1,490       5,382     1,382
4       5,000     4,333             667         4,447         553         6,436     1,436
5       10,000    5,000             5,000       4,834         5,166       7,491     2,509
6       8,000     8,167             167         8,450         450         8,545     545
7       7,000     7,833             833         8,135         1,135       9,600     2,600
8       9,000     7,667             1,333       7,341         1,659       10,655    1,655
9       12,000    8,500             3,500       8,502         3,498       11,709    291
10      14,000    10,667            3,333       10,951        3,049       12,764    1,236
11      15,000    12,833            2,167       13,085        1,915       13,818    1,182
M.A.D.                              2,125                     1,965                 1,385
Finally, we can graphically represent the actual demand and each of the 3 forecasts and look at
how each forecast fits the actual historical data.



The conclusion that should be reached by our review of M.A.D. and from our review of this chart
is that, for this data, a linear trend forecast would be the best choice.


[Figure: Wallace Garden Supply - actual demand plotted against the 3-year weighted moving
average, exponential smoothing, and linear trend forecasts over years 1-11.]
Forecasting Techniques
A Simplified Approach to Seasonality

The general objective when trying to create a forecast of seasonal data is to separate the
seasonality from the underlying trend and develop an accurate trend forecast which can then be
adjusted for seasonality.

Both decomposition and multiple regression can be effective tools for this, but neither provides
a simple or intuitive approach that clearly explains the process of separating seasonality from
trend.

This example is my attempt to more clearly explain these concepts. I hope it helps.

We are presented with data for the last 4 years of quarterly sales tax revenues collected by the
State of Texas.



Quarter   Year 1   Year 2   Year 3   Year 4
1         218      225      234      250
2         247      254      265      283
3         243      255      264      289
4         292      299      327      356
Total     1000     1033     1090     1178

The very first thing we should do is put together a quick graph of the data to see what story it
tells. In order to do that, we will need to set up the data as a time series of 16 data points. The
raw data and graph would be as follows:

Quarter Number   Tax Revenue
1                218
2                247
3                243
4                292
5                225
6                254
7                255
8                299
9                234
10               265
11               264
12               327
13               250
14               283
15               289
16               356

[Figure: Texas sales tax revenues plotted by quarter number (1-16), revenue scale 200-400.]
Our first observation from the chart should be that the data appears to have consistent,
seasonal patterns in which the first quarter of the year is low and the fourth quarter is high. The
pronounced seasonality in this data will make the basic forecasting tools ineffective as shown in
the following linear regression of the raw data.



A linear regression of this data only explains 41% of the variability in revenues and would clearly
do a poor job of forecasting future quarterly revenue because it ignores the seasonality within
the data. In order to do a better job, we must deal with seasonality and trend separately.

[Figure: Linear regression of quarterly Texas sales tax revenues: y = 5.01x + 226.23,
R-squared = 0.41. The fitted line cuts straight through the obvious quarterly pattern.]
The easiest way to eliminate seasonality from data is to eliminate the seasons! Let's look at the
annual totals and a linear regression of the annual data.

Year Number   Tax Revenue
1             1000
2             1033
3             1090
4             1178

[Figure: Linear regression of annual Texas sales tax revenues: y = 59.10x + 927.50,
R-squared = 0.96.]

It should be evident that the annual data has a linear trend and that simple linear regression
does an excellent job of describing this trend. We should feel comfortable that we can
accurately predict annual sales tax revenues using this model. Accordingly, we can predict that
annual revenues for year 5 will be:

59.10(5) + 927.50 = 295.50 + 927.50 = 1,223

Now that we have a good projection for next year's revenues, we must develop a good way of
allocating it among the 4 quarters of the next year.
An intuitive approach to this revolves around a simple calculation of the average revenues
collected in each quarter over the 4 years in the data set.

Texas Sales Tax Revenues ($millions)
Quarter   Year 1   Year 2   Year 3   Year 4   Quarterly Average   Average Percent
1         218      225      234      250      231.75              21.55%
2         247      254      265      283      262.25              24.39%
3         243      255      264      289      262.75              24.44%
4         292      299      327      356      318.50              29.62%
Total                                         1075.25             100.00%

Based on simple averages, we can allocate our annual projection of $1,223 to each quarter as
follows:

Texas Sales Tax Revenues ($millions)
Quarter   Average Percent   Annual Projection   Quarterly Projection
1         21.55%            1223.00             263.59
2         24.39%            1223.00             298.29
3         24.44%            1223.00             298.85
4         29.62%            1223.00             362.27
Total     100.00%           1223.00

We now have a simplified projection of quarterly sales tax revenues for next year.

But how might it compare to a more sophisticated multiple regression model?

Let's look.
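The average-percent allocation takes only a few lines of code. This Python sketch (my own
illustration; the course does this with simple Excel formulas) reproduces both tables above:

# Average-percent seasonal allocation of the year 5 projection
quarters = {
    1: [218, 225, 234, 250],
    2: [247, 254, 265, 283],
    3: [243, 255, 264, 289],
    4: [292, 299, 327, 356],
}
q_avg = {q: sum(v) / len(v) for q, v in quarters.items()}
grand_avg = sum(q_avg.values())              # 1075.25
annual = 59.10 * 5 + 927.50                  # 1,223 from the annual trend line

for q in sorted(quarters):
    pct = q_avg[q] / grand_avg
    print(q, round(pct * 100, 2), round(pct * annual, 2))
# 1 21.55 263.59 / 2 24.39 298.29 / 3 24.44 298.85 / 4 29.62 362.27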
Multiple Regression for Forecasting Seasonal Data

Here's a full picture of the data setup and results of a multiple regression analysis that employs
categorical (dummy) variables for each season. Note that only three dummy variables are
needed for the four quarters; quarter 4 serves as the baseline and is captured by the intercept.

Year   Quarter   Number   Q1   Q2   Q3   Amount
1      1         1        1    0    0    218
1      2         2        0    1    0    247
1      3         3        0    0    1    243
1      4         4        0    0    0    292
2      1         5        1    0    0    225
2      2         6        0    1    0    254
2      3         7        0    0    1    255
2      4         8        0    0    0    299
3      1         9        1    0    0    234
3      2         10       0    1    0    265
3      3         11       0    0    1    264
3      4         12       0    0    0    327
4      1         13       1    0    0    250
4      2         14       0    1    0    283
4      3         15       0    0    1    289
4      4         16       0    0    0    356

SUMMARY OUTPUT

Regression Statistics
Multiple R          0.9842
R Square            0.9687
Adjusted R Square   0.9574
Standard Error      7.67
Observations        16

ANOVA
             df    SS         MS        F       Significance F
Regression   4     20055.20   5013.80   85.21   0.00
Residual     11    647.24     58.84
Total        15    20702.44

             Coefficients   Standard Error   t Stat   P-value   Lower 95%   Upper 95%
Intercept    281.56         5.75             48.94    0.0000    268.90      294.22
Number       3.69           0.43             8.61     0.0000    2.75        4.64
Q1           -75.67         5.57             -13.57   0.0000    -87.94      -63.40
Q2           -48.86         5.49             -8.90    0.0000    -60.95      -36.78
Q3           -52.06         5.44             -9.57    0.0000    -64.03      -40.08

Quarterly Projection for Year 5
(Coefficients: Intercept 281.56; Number 3.69; Q1 -75.67; Q2 -48.86; Q3 -52.06)

Quarter   Quarter #   Q1   Q2   Q3   Projected Revenue
1         17          1    0    0    268.69
2         18          0    1    0    299.19
3         19          0    0    1    299.69
4         20          0    0    0    355.44
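If you would like to verify the Excel regression output with code, here is a Python sketch using
numpy's least-squares solver (illustrative only; the course uses the Excel regression add-in):

import numpy as np

amount = np.array([218, 247, 243, 292, 225, 254, 255, 299,
                   234, 265, 264, 327, 250, 283, 289, 356], dtype=float)
number = np.arange(1, 17)                  # quarter number, 1-16
quarter = np.tile([1, 2, 3, 4], 4)         # quarter of the year

# Design matrix: intercept, trend, and dummy variables for quarters 1-3
X = np.column_stack([np.ones(16), number,
                     (quarter == 1), (quarter == 2), (quarter == 3)]).astype(float)
coef, *_ = np.linalg.lstsq(X, amount, rcond=None)
print(coef)    # approx. [281.56, 3.69, -75.67, -48.86, -52.06]

# Project year 5: quarter numbers 17-20
for n, q in zip(range(17, 21), [1, 2, 3, 4]):
    row = np.array([1, n, q == 1, q == 2, q == 3], dtype=float)
    print(q, round(float(row @ coef), 2))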
Let's compare.

Texas Sales Tax Revenues ($millions)
Quarter   Average   Annual       Quarterly    Multiple Regression   Percentage
          Percent   Projection   Projection   Forecast              Difference
1         21.55%    1223.00      263.59       268.69                -1.90%
2         24.39%    1223.00      298.29       299.19                -0.30%
3         24.44%    1223.00      298.85       299.69                -0.28%
4         29.62%    1223.00      362.27       355.44                1.92%
Total     100.00%   1223.00                   1223.00

While these projections match up very well and I am reasonably confident that the differences
between them would not be significant in most situations, this is not a repudiation of multiple
regression or other sophisticated forecasting techniques. Rather, it is a reminder that we should
seek to keep our work simple and clear when the data allows us to do so.

There are many situations requiring more powerful forecasting tools and more complex models,
but we can often gain a better understanding of data with simple tools and then proceed to
tackle the problem with more sophistication because we understand what needs to be done and
not simply because we memorized a specific tool.
Chapter 7 Linear Programming

Linear Programming is a problem-solving approach that has been developed to help managers
make better decisions.

All linear programming models we will study are constrained optimization models. In each
problem, the firm will seek to optimize the value of some desired financial outcome (profit,
revenue or cost). However, the firm's ability to optimize is constrained by limitations on the
inputs needed to produce the desired outcome. Examples of common constraints include:
Limited production resources (materials, labor, capital)
Limited customer demand
Minimum production requirements (Orders already accepted)
Required product mix levels

The successful application of linear programming to solve business problems requires an ability
to construct a mental model of each business and then translate that mental model into a set of
linear equations that describe all the factors that will influence the firm's decision.

Our study of linear programming will concentrate on the development of five critical skills:

1. Understanding the business problem.

2. Formulating the business problem as a set of equations.

3. Graphical solutions to two-variable problems.

4. MS Excel solutions to multi-variable problems.

5. Management interpretation of results.

Using the Problem Solving Methodology on the next page will help organize your work and
guide you through these problems. As always, there is simply no substitute for hard work and
repetition in order to develop these skills.


Linear Programming
Problem Solving Methodology

1) Read and understand the basic business problem.
What does the firm do?
What financial objective does it want to achieve?
What decisions must it make?

2) Describe the business OBJECTIVE desired by the firm.
Optimization is always the goal.
Maximize profit
Maximize revenue
Minimize cost

3) Identify and name the DECISION VARIABLES.
What decision does the firm control?
Label each decision variable.

4) Identify and describe each CONSTRAINT.
What are the limits, or requirements that the firm must adhere to?
Dissect every sentence in the problem.
Describe the impact of every constraint on the decision variables.

5) FORMULATE the OBJECTIVE FUNCTION as a linear equation using the DECISION VARIABLES.
Find the coefficient for each decision variable.
Convert the objective into an equation.
(From the Flair Furniture example in section 7.3.)
MAX Profit: Z=70T + 50C

6) FORMULATE each CONSTRAINT as a linear inequality using the DECISION VARIABLES.
Find the coefficients for each decision variable in each constraint.
Convert each constraint into an inequality.
(From the Flair Furniture example in section 7.3.)
Carpentry: 4T + 3C ≤ 240
Painting and varnishing: 2T + 1C ≤ 100

7) Choose an appropriate solution method.
Two variable problems can be solved graphically.
All problems can be solved using MS Excel Solver.

8) Solve the problem for the optimal quantities of the DECISION VARIABLES.
Identify the quantity of each decision variable that provides the optimal solution to the
objective function.

9) Calculate the value of the OBJECTIVE FUNCTION at the optimal solution point.
Identify the optimal profit, revenue or cost that is produced by the optimal
solution.

Linear Programming
Solution Approaches to 2-variable problems

All problems containing two decision variables can be solved graphically. The graphical solution
approach follows the following steps:

1.) Graph EACH constraint by solving for its intercepts.

For the constraint 7X + 3Y ≤ 210 we would,

Substitute 0 for X, producing
7(0) + 3Y ≤ 210
3Y ≤ 210
Y ≤ 70

Substitute 0 for Y, producing
7X + 3(0) ≤ 210
7X ≤ 210
X ≤ 30

The constraint boundary is the line extending from (0, 70) to (30, 0).


2.) Locate the feasible region defined by the constraint. The feasible region is that portion
of the graph that contains solutions which do not violate the constraint.

Because this is a ≤ constraint, the feasible region is below the line, heading toward (0, 0).

3.) Repeat the above steps for each constraint equation.

4.) Locate the feasible region for the problem by eliminating all areas of the graph that
violate one or more constraints. The feasible region will contain the set of all possible
solutions that satisfy all of the constraints.

5.) The optimal solution will always lie at one of the extreme points along the frontier, or
border formed by the intersections of the constraints and/or the X and Y axes. Find the
(X, Y) coordinates for each of the intersecting points along the frontier.

6.) Solve the objective function equation for each of the coordinate sets. The coordinate
set with the optimal result will be the optimal solution to the problem.
Linear Programming
ISO-profit line solutions.

The ISO-profit line method for solving 2 variable problems is helpful when there are multiple
possible solutions. It also provides an excellent visual reference for understanding the problem.
However, this solution can prove difficult for some students because it requires visualization
skills and facility with graphing.

Start with a graph that has all constraints and the feasible region identified. This is where you
are at the end of step 4 above.

Choose a small value for the solution to the objective function that lies somewhere in the
feasible region.

If the objective function is 10X + 8Y = Z (max profit) and the coordinates (4, 5) lie in the feasible
region, it would make sense to select 80 [10(4) + 8(5)] as the starting point for the ISO-profit
line.

Using 10X + 8Y = 80, we then graph this line the same way we graphed the constraint lines:

Substitute 0 for X, producing
10(0) + 8Y = 80
8Y = 80
Y = 10

Substitute 0 for Y, producing
10X + 8(0) = 80
10X = 80
X = 8

The starting ISO-profit line is defined by the points (0, 10) and (8, 0).

This line can then be moved toward the frontier of the feasible region by maintaining a constant
slope for the line. Continue moving the ISO-profit line until it is touching the very last point in
the feasible region. That point is the optimal solution to the problem.
Linear Programming
Sample Problem Graphical Solution

The Pinewood Furniture Company produces chairs and tables from two resources: labor and
wood. The company has 80 hours of labor and 36 pounds of wood available each day. Demand
for the chairs is limited to 6 per day. Each chair requires 8 hours of labor and 2 pounds of wood
to produce, whereas a table requires 10 hours of labor and 6 pounds of wood. The profit
derived from each chair is $400 and from each table $100. The company wants to determine
the number of chairs and tables to produce each day in order to maximize profit.


1.) The business objective is to maximize overall profit by finding the optimal production
quantity for each product, without violating any of the constraints.

2.) The decision variables are the two products, Chairs C and Tables T.

3.) The firm's ability to maximize profit is constrained by the limited availability of labor and
wood as well as the limited demand for chairs. There are only 80 labor hours available
per day. There are only 36 pounds of wood available per day. The demand for chairs is
limited to a maximum of 6 per day.

4.) The objective function for this problem is:
Maximize Profit: Z = $400(C) + $100(T)

5.) The linear inequalities that represent each of the constraints are:
CONSTRAINTS
Labor: 8(C) + 10(T) ≤ 80 hours
Wood: 2(C) + 6(T) ≤ 36 pounds
Demand: 1(C) ≤ 6 units

The complete formulation for this problem is:

Maximize: Z = $400(C) + $100(T)
Subject to: 8(C) + 10(T) ≤ 80 hours
2(C) + 6(T) ≤ 36 pounds
1(C) ≤ 6 units
6.) Graphical Solution
a. Plot the line for the labor constraint.
b. Plot the line for the wood constraint.
c. Plot the line for the demand constraint.
d. Identify the feasible region.
e. List the coordinates of the possible solution points.
f. Solve the objective function for the possible solution points.
g. The highest profit achieved is the optimal solution.



The optimal profit would be achieved by producing 6 Chairs and 3.2 Tables.

Profit: Z = $400(6) + $100(3.2) = 2,400 + 320 = $2,720

Labor: 8(6) + 10(3.2) = 48 + 32 = 80; 80 ≤ 80, so there is no slack.

Wood: 2(6) + 6(3.2) = 12 + 19.2 = 31.2; 31.2 ≤ 36, so there are 4.8 pounds remaining.

Demand: 1(6) = 6; 6 ≤ 6, so there is no slack.

The optimal solution fully utilizes both labor and demand; wood is the only constraint with slack.
[Figure: Graphical solution for Pinewood - Tables (y-axis, 0-10) plotted against Chairs (x-axis,
0-19) with the labor, wood, and demand constraint lines and the feasible region shaded. The
optimal corner point is at 6 chairs and 3.2 tables.]
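A quick way to double-check a graphical solution is to evaluate the objective function at every
candidate corner point. This Python sketch (an illustration, not a required technique) does
exactly that for Pinewood; the corner points are read off the graph:

# Pinewood: maximize 400C + 100T subject to labor, wood, and demand constraints
def profit(c, t):
    return 400 * c + 100 * t

def feasible(c, t):
    return (8 * c + 10 * t <= 80 + 1e-9 and    # labor hours
            2 * c + 6 * t <= 36 + 1e-9 and     # pounds of wood
            c <= 6 and c >= 0 and t >= 0)      # chair demand, non-negativity

# Corner points of the feasible region (intersections of constraint lines and axes)
corners = [(0, 0), (6, 0), (6, 3.2), (30 / 7, 32 / 7), (0, 6)]
best = max((p for p in corners if feasible(*p)), key=lambda p: profit(*p))
print(best, profit(*best))    # (6, 3.2) with a profit of 2720.0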
Linear Programming
MS Excel Solutions

We will be generally following the method shown in section 7.5 for the development of Excel
solutions to linear programming problems. I recommend that all students use the following
approach:

The development of linear programming models and solutions using MS Excel begins when the
algebraic formulation of the problem is complete.

As an example, let's consider problem 7-40 in our text. The complete formulation of this
problem is as follows:




Objective Function:
Max Profit: 9A + 12B + 15C + 11D = Z

Constraints:
Wiring: .5A + 1.5B + 1.5C + 1D ≤ 15,000
Drilling: .3A + 1B + 2C + 3D ≤ 17,000
Assembly: .2A + 4B + 1C + 2D ≤ 26,000
Inspection: .5A + 1B + .5C + .5D ≤ 12,000
Min Dem A: 1A + 0B + 0C + 0D ≥ 150
Min Dem B: 0A + 1B + 0C + 0D ≥ 100
Min Dem C: 0A + 0B + 1C + 0D ≥ 300
Min Dem D: 0A + 0B + 0C + 1D ≥ 400

Using Excel Solver for Linear Programming problems:

Section 7.5 in our text does a very good job of explaining the use of this add-in on pages
271-275. Please understand that there are an infinite number of ways to set up a linear
programming problem to properly work with the Solver. The approach I am providing below will
be used in class. Although slightly different from the text, it addresses all of the same concepts
and issues and also allows us to work directly from the formulation of any problem.

This may require a lot of practice, so please make good decisions about the number of problems
you need to solve in order to build up your strength, speed and accuracy with this technique.
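As a cross-check on whatever Solver produces, the same formulation can be handed to an
off-the-shelf LP solver. This Python sketch (assuming the scipy package is available; it is not
part of the course toolkit) mirrors the formulation above:

from scipy.optimize import linprog

# linprog minimizes, so negate the profit coefficients to maximize 9A + 12B + 15C + 11D
c = [-9, -12, -15, -11]
A_ub = [[0.5, 1.5, 1.5, 1.0],    # wiring hours
        [0.3, 1.0, 2.0, 3.0],    # drilling hours
        [0.2, 4.0, 1.0, 2.0],    # assembly hours
        [0.5, 1.0, 0.5, 0.5]]    # inspection hours
b_ub = [15000, 17000, 26000, 12000]
bounds = [(150, None), (100, None), (300, None), (400, None)]   # minimum demands

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(result.x, -result.fun)     # optimal quantities of A-D and the maximum profit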
Our spreadsheet is divided into 3 sections. At the top is a description of each decision variable
and its objective function coefficient. Remember that the objective function coefficient may
relate to profit, revenue or cost, depending on the appropriate business objective for the
problem. In this case, the spreadsheet is formatted as follows:

The top section of the spreadsheet matches the objective function. The second section of the
spreadsheet deals with all of the constraints in the problem. The third section of the
spreadsheet leaves space for the solution that Excel will develop. The optimal production
quantities of each decision variable, as well as the value of the objective function achieved at
the optimal solution are recorded here. The format is as follows:

Note that the LHS and SLACK cells are blank. These cells will require formulas that we will
create later.

Note that the Optimal Quantity and Profit cells are also left empty. The Profit cell will require a
formula, and the Optimal Quantity cells will be filled in by the Excel Solver to display the optimal
production quantity of each product.

At this point, the spreadsheet for 7-40 should look as follows:

     A                 B       C        D        E        F     G     H       I
1    OBJECTIVE         A       B        C        D
2    Profit per unit   $9.00   $12.00   $15.00   $11.00
3
4    CONSTRAINTS       A       B        C        D        LHS   SIGN  RHS     SLACK
5    Wiring            0.5     1.5      1.5      1.0            <=    15000
6    Drilling          0.3     1.0      2.0      3.0            <=    17000
7    Assembly          0.2     4.0      1.0      2.0            <=    26000
8    Inspection        0.5     1.0      0.5      0.5            <=    12000
9
10   MIN XJ210         1.0                                      >=    150
11   MIN XM897                 1.0                              >=    100
12   MIN TR29                           1.0                     >=    300
13   MIN BR788                                   1.0            >=    400
14
15   Optimal Quantity
16
17   Profit

As an overall guide, the LHS and Profit cells, highlighted in yellow, will require SUMPRODUCT
formulas referencing the optimal solution quantities in all cases and the relevant coefficient
ranges for each formula.

The SLACK cells, highlighted in green, will require simple (=RHS - LHS) formulas.

The Optimal Quantity cells, highlighted in blue, are left blank. The Solver will fill these cells with
the optimal solution quantities and the other formulas will react automatically to these
quantities.


Writing the Formulas: There are 2 basic formulas that we need to create in this model. First,
we need a formula that will calculate the result of the objective function. This formula will be
located in the Objective Function Result cell in the third section of the spreadsheet. This
formula will always calculate the result of the objective function, even as the production
quantities or profit/revenue/cost per unit may change. It is critical to use cell references and
not values in this formula.

Cell B17 = (b2 * b15) + (c2 * c15) + (d2 * d15) + (e2 * e15) = Objective Function Result

This formula can be simplified by using the SUMPRODUCT function in Excel. Cell B17 should
contain the formula =SUMPRODUCT(B15:E15, B2:E2)

The next formulas calculate the actual usage (LHS) for each of the constraints in the problem.
Once again, we can develop the Excel formulas from the formulation of the problem. For
example,

Cell F5 = (b5 * b15) + (c5 * c15) + (d5 * d15) + (e5 * e15) = Actual Wiring Hours Used

This is an example of just one of the constraints in the problem. The SUMPRODUCT function can
again be used to greatly simplify the formula and also to permit us to write it one time and copy
it properly to all of the other LHS cells.

Cell F5 should contain the formula =SUMPRODUCT($B$15:$E$15, B5:E5)

Note that the first array, corresponding to the optimal quantity cells, has been set as an
absolute cell reference (the F4 key). This permits us to copy this formula to all the other LHS
cells and retain all the correct cell references. After this formula is properly constructed for the
first constraint, it should be copied to all of the remaining LHS cells.

Finally, we need a simple formula to calculate the Slack for each constraint.

Cell I5 should contain the formula =H5-F5.
When all the formulas are written, the worksheet will look like this:

     A                 B       C        D        E        F     G     H       I
1    OBJECTIVE         A       B        C        D
2    Profit per unit   $9.00   $12.00   $15.00   $11.00
3
4    CONSTRAINTS       A       B        C        D        LHS   SIGN  RHS     SLACK
5    Wiring            0.5     1.5      1.5      1.0      0     <=    15000   15,000
6    Drilling          0.3     1.0      2.0      3.0      0     <=    17000   17,000
7    Assembly          0.2     4.0      1.0      2.0      0     <=    26000   26,000
8    Inspection        0.5     1.0      0.5      0.5      0     <=    12000   12,000
9
10   MIN XJ210         1.0                                0     >=    150     150
11   MIN XM897                 1.0                        0     >=    100     100
12   MIN TR29                           1.0               0     >=    300     300
13   MIN BR788                                   1.0      0     >=    400     400
14
15   Optimal Quantity
16
17   Profit            0


Applying MS Excel Solver:

To open the Solver dialog box, click the "Data" tab and look for "Solver" at the far right of the
ribbon. Click on it and the dialog box below will open.

[Screenshot: the Solver Parameters dialog box.]

The dialog box begins with "Set Target Cell:". This is the Profit cell in the spreadsheet, cell B17
in our example.

Next, select the appropriate button (Max, Min or Value of); for 7-40, select Max. Solver has now
been instructed to make sure the value of the objective function result cell for 7-40 is
maximized. It now understands our measurable business objective!

Click in the "By Changing Cells:" box. In this box, we will specify the cell range that represents
the Optimal Production Quantities of our decision variables. Solver will change the value of these
cells automatically when we run the Solver until the value of the objective function result is
optimized. This box should contain cells $B$15:$E$15.
Finally, we must tell Solver about our problem constraints. To do so, click on the Add button.
A smaller dialog box will open and we will put each constraint into the model individually. The
new dialog box looks like this:



The Cell Reference box will refer to the LHS cell for each constraint.

The inequality box must show the correct inequality. Use the drop-down button to select the
appropriate inequality.

The Constraint: box will be the RHS cell for each constraint:

Each constraint can be entered individually, or they can be entered as groups as long as all
constraints in the group have the same inequality.

The Wiring constraint in our example would be entered like this:

[Screenshot: the Add Constraint dialog with the Cell Reference set to the LHS cell ($F$5), the
inequality set to <=, and the Constraint set to the RHS cell ($H$5).]

The completed dialog box for our example looks like this:

[Screenshot: the Solver Parameters dialog with all of the constraints entered.]

After all the constraints are entered and have been CAREFULLY checked for accuracy, click on
"Options" and place a check mark in the boxes labeled "Assume Linear Model" and "Assume
Non-Negative".




Click on OK and you will return to the initial dialog box. You are now ready to click on Solve
and see what the Solver produces.

Running the Solver successfully causes a new dialog box to pop up. If everything is perfect, you
will get a message telling you that solver has found a solution.



It is possible to get this message and not have the correct answer because Solver only does what
you tell it to do. You can enter faulty instructions or bad data and Solver will be able to solve
the problem, but it will not be solving it correctly.

Remember, Solver brings speed and accuracy to the process. You must bring judgment,
accuracy, logic and common sense.

If there are critical problems in the worksheet that prevent Solver from running correctly, you
will get a variety of messages telling you that Solver could not work.

I assure you that your particular version of Solver is not faulty. These messages occur because
the worksheet is incorrect or because the instructions that you gave to the Solver are incorrect.

Do not get frustrated at the tool. It is doing EXACTLY what you told it to do!

Once the Solver has run properly, you will see a dialog box that asks you to specify the
supplementary reports that you would like Solver to produce. Always select the Answer and
Sensitivity reports from this list. They will automatically appear on separate spreadsheet tabs.
Make sure you always print out these reports with your answers, as these reports are very
important to our study of sensitivity analysis in linear programming.

Clicking on "OK" at this point will cause the Answer and Sensitivity reports to appear as
additional pages in your workbook.

[Screenshot: the workbook with Answer Report and Sensitivity Report tabs added.]

Please look carefully at the bottom of the worksheet to see the tabs for the Answer and
Sensitivity reports. Your final printed output for the problem will consist of the basic worksheet,
the Answer report and the Sensitivity report.

[Screenshots: the Answer Report and Sensitivity Report pages.]
Chapter 8 Linear Programming Applications

Chapter 8 presents specific applications of the linear programming techniques from chapter 7.
It does not present any new material in terms of the solution process or the interpretation of
results, but it does present a more challenging series of problems. These notes will briefly
review each type of problem we will encounter in this chapter.

8.2 Media Selection Problems

These are exceptions to the "Max Profit, Max Revenue, Min Cost" rule, as the objective is
generally to reach as large an audience as possible. In this case, money is a budgetary
constraint. This problem can also be set up to minimize media costs while reaching an
acceptable minimum audience size.

Preview problem 8-6 fits this type, and homework problem 8-11 is a combined pricing strategy
and media problem. Problem 8-11 is a truly critical problem to test your real grasp of the
business issues explored in this course. It is not difficult to set up, but you must think clearly
when you review the solution.

8.3 Production Mix Problems

These are the most like the problems in chapter 7, but the complexity is greater. The typical
problem seeks maximization of profit by finding the optimal production quantities of a set of
products. Homework problem 8-13 is our example. It will require you to calculate the profit per
unit of each product and you should be both careful and thorough in your calculation.

8.4 Labor Planning Problems

These problems minimize the staffing levels (number of employees) or staffing cost while
meeting all the work requirements given in the problem. Preview problem 8-3 and homework
problem 8-14 fit this type. 8-3 is quite straightforward, while 8-14 is a good test of your ability
to put the pieces together and formulate a realistic hiring strategy over a 5-month period.

8.5 - Portfolio Selection Problems

These problems always generated the most student interest, at least until the stock market
collapsed, because they involve maximizing return on investment and use exactly the same
techniques that are the foundation of most investment management decisions. In these
problems, we must find the optimal allocation of money across a series of potential
investments. Preview problem 8-2 and homework problem 8-24 are examples of this type.
These problems are usually easy to set up, although care should be taken in the formulating and
understanding of the constraints. These problems also work very well for learning to interpret
the sensitivity report.

8.6 - Shipping Problems

These problems represent the original application of linear programming: the logistics of getting
material from one place to another efficiently. The objective is generally to minimize cost (or
distance as a proxy for cost), and the nature of these problems leads to a large number of
decision variables. This can make them tedious to solve in exactly the same Excel template
used for all other problems, and an alternative setup is provided for preview problems 8-10 and
8-23. These problems also illustrate that there are an infinite number of ways to set up a linear
programming problem and that our "universal" template is really only for ease of instruction
and not necessarily the best format for each problem.

8.8 Diet Problems

How do we feed people, or animals, and make sure they receive proper nutrition, while
minimizing cost? This is the point of the diet problem and it is a model in wide use in many
institutions. The setup of these problems is very simple, although a little time consuming.
Homework problem 8-12 is a classic diet problem and the solutions to these problems also work
well for studying the interpretation of the sensitivity report.


Summary

Linear programming is the basis for many business decision models. Additionally, the
logic and discipline these problems require is widely applicable to the analysis of
business situations. The purpose of these two chapters is to expose you to the process,
build some solid Excel solution development skills and demonstrate just a few of the
specific applications of this technique.

This is the one topic we cover, above all others, where students should be able to see
relevance and applicability in both their future careers and in their current daily lives. At
its core, almost every decision is a constrained optimization decision in which we try to
do the best we can while satisfying a series of internal (self-imposed) or external
constraints.


Chapter 15 Simulation Modeling

Simulation models are used in many business situations to study the impact of various
decisions on company performance. Simulation models can be as simple as the
example we will cover in this guide, or as complex and detailed as any real-world
situation.

Whenever we are interested in comparing the effect of decision alternatives, asking
what-if questions or examining the relationships between individual variables or factors
in a business situation or process, simulation can be a useful approach.

Simulation also permits us to model many periods of time instantly and allows us to
consider the impact of our decisions over time without disrupting the actual business
process currently in place.

The goal of simulation modeling is to help managers make better decisions by providing
them with practical information about the likely impact of, and results obtained from, their
decisions BEFORE the decision is actually implemented in the organization.

Our study of simulation will focus first on the basic concepts of building a simulation
model manually and in Excel. We will then examine the application of simulation models
in three business processes:
- Inventory management, stocking levels and reordering policy
- Queuing, or waiting line processes
- Repair and maintenance planning

In addition, we will consider the important differences between two general types of
simulation models:

Fixed time increment models use data that reflects the total demand on the system and
the number of services performed within a fixed period of time such as an hour, day,
week or month. The example that follows, the Simkins inventory problem and the
queuing problem for the Port of New Orleans all represent fixed time increment models.

Next event increment models require that we update the model each time demand
occurs. In these models, demand happens at variable intervals of time and we measure
the time between each occurrence instead of the total number of occurrences within a
fixed period of time. The Three Hills Power Company example is a next event increment
model.

While each type of model does use a similar framework for the basic construction of the
model, the logic required within the model is quite different and it is important to identify
which type of model would work best for each problem or business situation you face.
Typical Simulation Problems

Basic
- one element of a business process is subject to variability
- a probability distribution of the variable is developed from observation
- a Monte Carlo simulation is constructed to model the problem
- decision alternatives are evaluated within the simulation

Inventory Analysis
- Variables to be simulated
o Customer demand
o Delivery lead time
- Decisions
o re-order point - when to reorder
o re-order quantity - how many to reorder
- Monetary implications
o cost of storing inventory
o cost of placing orders
o opportunity cost of a stock out

Queuing (Waiting Lines)
- Variables to be simulated
o Arrivals into the system (demand)
o Service time
- Decisions
o Service / staffing level
o Acceptable customer waiting time
- Monetary implications
o Cost of additional staffing
o Cost of customer waiting time (lost goodwill, etc.)

Repair and Maintenance Policy
- Variable to be simulated
o Breakdown frequency
o Repair time
- Decisions
o Staffing level of repair crew
o Acceptable downtime (machine out of service)
- Monetary implications
o Cost of additional repair staffing
o Cost of downtime (lost production, profits, etc.)
A quick guide to building a basic Monte Carlo Simulation

Billy Matthews has a part time job selling copies of the Sunday newspaper out in front of
the local Dunkin Donuts. Once a quarter, he must place an order for the number of
newspapers he wants to buy from the publisher for the next 13 weeks. After he places
the order, the quantity cannot change. Billy is locked into that quantity for all 13 weeks.
Billy pays $1.00 for each newspaper and sells them for $2.50. Any newspapers that are
unsold are waste, but there is no penalty for having a shortage. He also pays the owner
of the Dunkin Donuts a flat fee of $50 per week in order to use the space in front of the
store.

The challenge Billy faces is that demand for newspapers can vary greatly from week to
week depending on weather and other variables that are beyond his control. He has
been studying data analysis and decision modeling in his math class (Billy is in the 10th
grade) and he knows that he must develop a model that permits him to set his order at
the right level to maximize profit. He has decided to construct a Monte Carlo simulation
for the next 13 weeks and then place his order according to the results of that simulation.

Step 1: Establishing a Probability Distribution

Billy needs some information about the demand for newspapers. Because he is
meticulous about his business, Billy has kept accurate records of the demand for
newspapers for the last 50 weeks and developed the following frequency distribution.

Demand    Frequency
50        2
75        6
100       20
125       15
150       7
Weeks =   50

Because Billy is comfortable with probability and knows that a probability distribution is
the key to a Monte Carlo simulation, he converts the frequency distribution into a
probability distribution and calculates the mean, or expected value, of the distribution.

Demand    Frequency   Probability
50        2           0.04
75        6           0.12
100       20          0.40
125       15          0.30
150       7           0.14
Weeks =   50          1.00

E(x) = Sum of [ x * P(x) ] = 109.5
Step 2: Building a Cumulative Probability Distribution for Each Variable

Understanding how MS Excel prefers to handle probability distributions, Billy extends
his work to create a cumulative probability distribution for the data.

Demand    Frequency   Probability   Cumulative Probability
50        2           0.04          0.04
75        6           0.12          0.16
100       20          0.40          0.56
125       15          0.30          0.86
150       7           0.14          1.00
Weeks =   50          1.00

Step 3: Setting Random Number Intervals

Billy's last step in preparing his probability distribution for use in the simulation is to
set up the interval ranges for the random numbers and organize them in a format that he
knows MS Excel will like.

Demand    Frequency   Probability   Cumulative    Lower    Upper    Demand
                                    Probability   Random   Random
50        2           0.04          0.04          1        4        50
75        6           0.12          0.16          5        16       75
100       20          0.40          0.56          17       56       100
125       15          0.30          0.86          57       86       125
150       7           0.14          1.00          87       100      150
Weeks =   50          1.00
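The logic of the interval table is simple to express in code. Here is a Python sketch (my own
illustration; Billy's version uses Excel's RANDBETWEEN and VLOOKUP, and the function name
below is just for the example) that draws a random integer from 1 to 100 and maps it to a
demand level:

import random

# (upper end of the random number interval, demand level)
intervals = [(4, 50), (16, 75), (56, 100), (86, 125), (100, 150)]

def simulate_demand():
    rn = random.randint(1, 100)          # like =RANDBETWEEN(1,100)
    return next(d for upper, d in intervals if rn <= upper)

print([simulate_demand() for _ in range(13)])   # one simulated quarter of demand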
Step 4: Generating Random Numbers

Setting aside the probability interval table above, Billy must now generate a random
number for each of the 13 weeks. He remembers the first time he discussed this
solution with his dad, who excitedly ran up to the attic and came back down flushed with
pride and holding a dusty copy of a mysterious book that seemed to consist entirely of
pages filled with numbers. Although Billy loves his dad, he couldn't help but laugh when
dad said, "This is so much cooler than watching the Red Sox; we can spend the
afternoon with this book of random number tables and we can enjoy a few hours of good
father-son bonding over the wonders of these tables."

Billy managed to stop laughing only long enough to reply, "Random number tables from
a book? What is this, the '70s? Dad, I appreciate the offer of help, but I'll let MS Excel
generate my random numbers for me. My simulation will be finished before the first
pitch and we can still watch the game together as long as you promise not to try to
calculate OPS in your head after every at bat."

Billy turns to Excel and sets up the following basic template for the simulation.



Week   Random Number   Demand
1
2
3
4
5
6
7
8
9
10
11
12
13
In the Random Number column, Billy wants Excel to generate random numbers.
Because he has set up his interval table using integers from 1 to 100, he wants Excel to
generate random numbers that match his intervals. Therefore the random numbers
must be between 1 and 100. Accordingly, Billy chooses the RANDBETWEEN function.




Note: If your version of Excel does not contain the RANDBETWEEN function, the
following formula can be used in its place and will produce an equivalent random integer
from 1 to 100. Please remember that getting this formula into Excel EXACTLY as it is
shown is critical to it working correctly.
=INT(RAND()*100)+1

Finally, because he doesn't want the random numbers to change every time the model is
changed, he converts the group of cells in the Random Number column to fixed values
by using the following commands:

Copy (select the entire column)
Edit > Paste Special (select "Values" from the dialog box)

When all this is done Billy has the following random numbers in the simulation template.

Week   Random Number   Demand
1      74
2      46
3      90
4      15
5      35
6      54
7      86
8      35
9      89
10     33
11     29
12     3
13     53
Step 5: Simulating the Demand

Billy now needs to find the demand level associated with each of the random numbers
generated for each week. He could do this by hand. For example, Billy could look at the
first random number, 74, and compare it to the interval table. He would notice that 74
falls within the range defined by 57 to 86. Since that interval is associated with a
demand level of 125 newspapers, Billy would use that demand for the first week.

But Billy knows that this is tedious and that Excel can do it automatically by using
the VLOOKUP function. Billy selects the first cell of the Demand column and clicks on fx
and then on VLOOKUP. The dialog box that appears requires 4 pieces of information.

1. Lookup_value = the cell with the Random Number for this week

2. Table_array = the interval table (press F4 to freeze this reference for all subsequent cells)

3. Col_index_num = 3 (the column number in the table_array containing the demand; here,
the 3rd column)

4. Range_lookup = TRUE







Once Billy gets this to work in the first Demand cell, he then copies it to all 13 weeks and
his template now looks as follows.

Week   Random Number   Demand
1      74              125
2      46              100
3      90              150
4      15              75
5      35              100
6      54              100
7      86              125
8      35              100
9      89              150
10     33              100
11     29              100
12     3               50
13     53              100

Now Billy knows that all the hard work is done. He has successfully simulated demand
for the next 13 weeks and he quickly expands his model to project his weekly and overall
profit based on a fixed order quantity.

Billy produces the following model. He initially chooses an order quantity close to the
expected value of the probability distribution (109.5). He realizes that the Order Quantity
cell can be changed and starts to look for a better solution through trial and error; the
version shown below uses an order quantity of 100.

His dad wanders in after having unsuccessfully tried to teach the family dog, Euclid, how
to randomize the number of times she needs to be let out every night by using the
random number table book. Euclid, while seemingly uninterested in the detail of random
number tables, did seem quite interested in biting the book and shaking it until all the
pages flew out. Dad looks at the screen and remarks, "I'm impressed, son. But why are
you fumbling around with trial and error to optimize your purchase quantity? Why don't
you just use the Solver and maximize the value of the total profit cell by changing the
order quantity cell?"

Billy looks up at his dad and says, "Solver? Have you been snooping on my work when
I'm not looking? You can barely use the mouse and you know about the Solver?"

Dad kindly replies, "I guess your old man still has some mad skills left in him."

Smiling to himself, Dad chose not to tell Billy that his "old school" skills had created a
world in which old Dad earned more money from compound interest in the 20 minutes it
would take Billy to solve the model than Billy will make all quarter from sitting out in the
cold selling newspapers.

But he also knew that letting Billy develop these skills on his own is the best way to put
Billy in exactly the same position when he becomes the old man.
Week   Random   Demand   Units   Gross     Total     Net Profit
       Number            Sold    Revenue   Cost      (Loss)
1      74       125      100     $250.00   $150.00   $100.00
2      46       100      100     $250.00   $150.00   $100.00
3      90       150      100     $250.00   $150.00   $100.00
4      15       75       75      $187.50   $150.00   $37.50
5      35       100      100     $250.00   $150.00   $100.00
6      54       100      100     $250.00   $150.00   $100.00
7      86       125      100     $250.00   $150.00   $100.00
8      35       100      100     $250.00   $150.00   $100.00
9      89       150      100     $250.00   $150.00   $100.00
10     33       100      100     $250.00   $150.00   $100.00
11     29       100      100     $250.00   $150.00   $100.00
12     3        50       50      $125.00   $150.00   $(25.00)
13     53       100      100     $250.00   $150.00   $100.00

Average Demand = 105.77
Order Quantity: 100                Total Profit: $1,112.50
Sales Price per unit: $2.50
Purchase cost per unit: $1.00
Rent cost per week: $50.00
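For readers who want to see the whole simulation end to end, here is a Python sketch of Billy's
model (my own illustration; the course builds it in Excel, and the function names are just for
the example). Because the demand draws are random, the profits will vary from run to run,
just as they do when Excel regenerates its random numbers:

import random

PRICE, COST, RENT = 2.50, 1.00, 50.00
intervals = [(4, 50), (16, 75), (56, 100), (86, 125), (100, 150)]

def weekly_demand():
    rn = random.randint(1, 100)
    return next(d for upper, d in intervals if rn <= upper)

def quarter_profit(order_qty, weeks=13):
    total = 0.0
    for _ in range(weeks):
        sold = min(weekly_demand(), order_qty)   # unsold papers are waste
        total += PRICE * sold - COST * order_qty - RENT
    return total

# Trial and error over candidate order quantities, averaged over many simulated quarters
for qty in (75, 100, 110, 125, 150):
    avg = sum(quarter_profit(qty) for _ in range(1000)) / 1000
    print(qty, round(avg, 2))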
Problem 15-27 (Dr. Greenberg) Annotated Solution

This document provides a step-by-step walk through of the problem. First, we will
develop a static solution using the random number tables provided in the text. Then, we
will examine the logic and formulas necessary to build a live simulation for the problem.

Step 1 - Analyzing the problem.
Dr. Greenberg needs to leave by 12:15 and would like to use simulation to help him
understand whether this is likely to happen. At first glance, it appears he will be fine.
His last patient, Hinkel, has an 11:45 appointment and the procedure Hinkel requires
should take 15 minutes. In theory, Hinkel should be done at 12:00, leaving Dr.
Greenberg 15 minutes before he must leave.

However, we are told that both the arrival times and service times can vary. Patients are
sometimes early or late for their appointments and the procedures can sometimes take
more or less time than planned. This variability creates a classic queuing problem in
which we must simulate both arrivals and service times in order to understand what may
happen during the morning.

In addition, we are told that Dr. Greenberg will be ready for work at 9:30, so an early
arrival by the first patient will not result in an earlier start time. Finally, Dr. Greenberg
takes patients according to the order of their scheduled exam. If patient 3 arrives before
patient 2, patient 2 is still seen first.

The development of the simulation begins with the development of probability interval
tables for arrival and service times. Based on the probabilities provided, we develop the
following tables:





Arrival
Factor    P(x)   C P(x)   Lower   Upper   Arrival Factor
-20       0.20   0.20     1       20      -20
-10       0.10   0.30     21      30      -10
0         0.40   0.70     31      70      0
10        0.25   0.95     71      95      10
20        0.05   1.00     96      100     20

Service
Factor    P(x)   C P(x)   Lower   Upper   Service Factor
-20%      0.15   0.15     1       15      -20%
0%        0.50   0.65     16      65      0%
20%       0.25   0.90     66      90      20%
40%       0.10   1.00     91      100     40%

The tables distribute the random numbers in accordance with the relative probability of
each arrival or service event. Note that we are including a second column for arrival
factor and service factor, respectively, in each table to accommodate the requirements
of the =VLOOKUP function.
The next step is to simulate the arrivals. This requires choosing a set of random
numbers, relating each random number back to an arrival event and then calculating the
simulated arrival time for each patient. For simplicity, we will use the first 8 random
numbers in the first column of random number table 15.5 in your text.

The following table is produced:

Patient    Random   Arrival   Scheduled      Simulated
           Number   Factor    Arrival Time   Arrival Time
Adams      52       0         9:30           9:30
Brown      37       0         9:45           9:45
Crawford   82       10        10:15          10:25
Dannon     69       0         10:30          10:30
Erving     98       20        10:45          11:05
Fink       96       20        11:15          11:35
Graham     33       0         11:30          11:30
Hinkel     50       0         11:45          11:45

Random Number: Chosen from table 15.5

Arrival Factor: Selected from the interval table above based on the random number.

Ex 1: random number 98 fits in the range 96 to 100, identified with an arrival
factor of +20 minutes. This patient is simulated to arrive 20 minutes late.

Ex 2: random number 52 fits in the range 31 to 70, identified with an arrival factor
of 0. This patient is simulated to arrive on time.

Scheduled Arrival Time: Given in the problem

Simulated Arrival Time: Scheduled arrival time plus or minus the simulated arrival
factor.
Ex: Erving is scheduled to arrive at 10:45, but based on the random number of
98, is simulated to be 20 minutes late. Erving's simulated arrival time is 11:05.

When this table is completed, we have a full set of simulated arrival times based on the
random numbers chosen.
The next step is to simulate the service times. This requires choosing a second set of
random numbers, relating each random number back to a service event and then
calculating the simulated service time for each patient. For simplicity, we will use the first
8 random numbers in the second column of random number table 15.5 in your text.

The following table is produced:

Patient    Random   Service   Scheduled      Simulated
           Number   Factor    Service Time   Service Time
Adams      6        -20%      15             12
Brown      63       0%        20             20
Crawford   57       0%        15             15
Dannon     2        -20%      10             8
Erving     94       40%       30             42
Fink       52       0%        15             15
Graham     69       20%       20             24
Hinkel     33       0%        15             15

Random Number: Chosen from table 15.5

Service Factor: Selected from the interval table above based on the random number.

Ex 1: random number 6 fits in the range 1 to 15, identified with a service factor
of -20%. This service is simulated to require 20% less time than planned.

Ex 2: random number 94 fits in the range 91 to 100, identified with a service
factor of +40%. This service is simulated to require 40% more time than
planned.

Scheduled Service Time: Given in the problem

Simulated Service Time: Scheduled service time adjusted by the simulated service
factor.
Ex: Erving's procedure is scheduled to require 30 minutes, but based on the
random number of 94, is simulated to require 40% more time than planned. 40%
of 30 minutes is an additional 12 minutes, so Erving's simulated service time is
42 minutes.

When this table is completed, we have a full set of simulated service times based on the
random numbers chosen.
The last step in the static simulation is to integrate the simulated arrivals and services
into a simple business model that tells Dr. Greenberg what will happen to his day based
on this SINGLE SET OF RANDOM NUMBERS.

The following table is produced:

Patient    Simulated      Simulated    Simulated      Simulated
           Arrival Time   Start Time   Service Time   Completion Time
Adams      9:30           9:30         12             9:42
Brown      9:45           9:45         20             10:05
Crawford   10:25          10:25        15             10:40
Dannon     10:30          10:40        8              10:48
Erving     11:05          11:05        42             11:47
Fink       11:35          11:47        15             12:02
Graham     11:30          12:02        24             12:26
Hinkel     11:45          12:26        15             12:41

Simulated Arrival Time: From the arrival simulation table above.

Simulated Start Time: Developed from the following logic:

The first patient starts at the later of 9:30 or his arrival time. In this case,
Adams is simulated to be on time, so the service begins at 9:30.
All other patients begin at the later of when they arrive or when Dr. Greenberg
completes the prior patient's service.

Ex 1: Brown arrives at 9:45. Dr. Greenberg finishes Adams at 9:42. Dr.
Greenberg cannot begin working on Brown until Brown arrives.
Accordingly, Dr. Greenberg is idle for 3 minutes before beginning to work
on Brown.

Ex 2: Fink arrives at 11:35, but Dr. Greenberg is unable to complete
Erving until 11:47. Fink must wait for 12 minutes before being served.

Simulated Service Time: From the service simulation table above.

Simulated Completion Time: Calculated by adding Simulated Service Time to Simulated
Start Time.

Ex: Dannon's service is simulated to begin at 10:40 and is simulated to require
8 minutes. Dannon is done, and Dr. Greenberg is ready for his next patient, at
10:48.
Interpretation: Based on this simulation, Dr. Greenberg will not be done until 12:41 and
will miss his flight unless he cancels his last two patients.

Strengths: A simulation is a more realistic version of what happens every day. Patients
don't always arrive on time and procedures don't always take exactly the planned
amount of time. Also, Dr. Greenberg can simulate his day before it happens and in
compressed time.

Weaknesses: This is only one version of what might happen, based on a single
experiment with a sample size of 8 patients. While it is very unlikely that Dr.
Greenberg's day will unfold exactly as planned, it is also unlikely that his day will match
this simulation.

Improvement: Since we cannot increase the sample size beyond the scheduled 8
patients, we can improve the usefulness of the model by building it so that it can be
easily repeated with different random numbers, allowing patterns to emerge and letting
Dr. Greenberg see the likelihood that he will be able to leave on time. This leads us to
the development of a "live" simulation in which everything changes as we change
random numbers.



Development of a live simulation

NOTE: Excel does not handle "time" very well. When we use Excel for this
purpose, it is easier to use "fractional hours" to express time. You'll see this
throughout the live simulation. Here's a quick table explaining the conversion
from minutes to fractional hours:

Minutes   Fractional Hours        Time    Fractional Hours
  50          0.833               9:00        9.000
  45          0.750               9:05        9.083
  30          0.500               9:10        9.167
  25          0.417               9:15        9.250
  20          0.333               9:20        9.333
  15          0.250               9:25        9.417
  10          0.167               9:30        9.500
   5          0.083               9:45        9.750

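The conversion is simply hours plus minutes divided by 60. A one-line Python
sketch of it (ours, not part of the course files):

    def to_fractional_hours(hours, minutes):
        # 9:45 -> 9 + 45/60 = 9.75
        return hours + minutes / 60

    print(to_fractional_hours(9, 45))  # 9.75
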
Part 1 - Probability Interval tables

We'll cover the Excel detail of the arrival table. The service table works exactly the same
way. As seen before, the table looks as follows:



Arrival Factor: Given in the problem

P(x): The probability of each arrival event. This is given in the problem.

C P(x): The cumulative probability that X <= x.
Cell C5 = B5
Cell C6 = C5 + B6, C7-C9 follow this format

Lower: The lower bound of the random number interval for that arrival event.
Cell D5 = 1
Cell D6 = 1 + E5, D7 - D9 follow this format

Upper: The upper bound of the random number interval for that arrival event.

Cell E5 = C5 * 100 (all cells in column E follow this formula; E6 = C6 * 100, etc.)

Arrival Factor (for =VLOOKUP): F5 = A5, F6 = A6, etc.
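
The same cumulative logic can be sketched in Python. The factor values and
probabilities below are assumed for illustration (the actual arrival
distribution is given earlier in the problem):

    # Arrival factors (fractional hours) and their probabilities - assumed
    factors = [-0.25, -0.083, 0.0, 0.083, 0.25]
    probs   = [0.10,  0.15,  0.50, 0.15,  0.10]

    lower, upper, cum = [], [], 0.0
    for p in probs:
        lower.append(int(round(cum * 100)) + 1)  # 1 + prior upper bound
        cum += p
        upper.append(int(round(cum * 100)))      # cumulative probability * 100

    for f, lo, hi in zip(factors, lower, upper):
        print(f"factor {f:+.3f}: random numbers {lo}-{hi}")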

Part 2 - Simulation table for arrivals

After setting up the random number interval table, we can simulate arrivals and produce
the following:


Random Number: Cells B12 through B19 all use =RANDBETWEEN(1,100). When you
use this function, the random numbers will change every time anything is done to the
spreadsheet. While confusing at first, this is exactly what we want: each recalculation
creates a new simulation.

Arrival Factor: The arrival factors change automatically with the changes in random
numbers when the =VLOOKUP function is used to "look up" the random number in the
random number interval table (see prior page) and then "deliver" the correct arrival
factor back to this cell.

Example: Cell C15 contains =VLOOKUP(B15,$D$5:$F$9,3)



Scheduled Arrival Time: Given in the problem, entered as fractional hours.

Simulated Arrival Time: = Scheduled Arrival Time + Arrival Factor. Cell E17 = C17 +
D17

Watch this table as you hit "F9" and you can see that the random numbers, arrival
factor and simulated arrival time all change.
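
Here is what =RANDBETWEEN plus =VLOOKUP accomplish together, sketched in
Python with assumed interval boundaries (the real ones come from the
probability interval table above):

    import random

    # (lower bound, arrival factor in fractional hours) - assumed values
    interval_table = [(1, -0.25), (26, 0.0), (76, 0.25)]

    def lookup_factor(rand_num, table):
        # Like VLOOKUP's approximate match: return the factor for the
        # last row whose lower bound is <= the random number.
        factor = table[0][1]
        for lower_bound, f in table:
            if rand_num >= lower_bound:
                factor = f
        return factor

    rn = random.randint(1, 100)   # like =RANDBETWEEN(1,100)
    scheduled = 9.75              # 9:45 in fractional hours
    print(rn, scheduled + lookup_factor(rn, interval_table))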
Part 3 - Simulation table for service

The construction of the service simulation table is almost identical to the arrival
simulation table. Again, it relies upon the random number interval table for service.

Here are the tables:


Random Number: Cells I12 through I19 all use =RANDBETWEEN(1,100). As with the
arrival table, the random numbers will change every time anything is done to the
spreadsheet, and each recalculation creates a new simulation.

Service Factor: The service factors change automatically with the changes in random
numbers when the =VLOOKUP function is used to "look up" the random number in the
random number interval table (see prior page) and then "deliver" the correct service
factor back to this cell.

Example: Cell J15 contains =VLOOKUP(I15,$K$5:$M$8,3). The easiest way to
do this is to use the dialog box for this function as shown on the prior page.

Scheduled Service Time: Given in the problem, entered as fractional hours.

Simulated Service Time: = Scheduled Service Time + (Service Factor * Scheduled
Service Time). Cell L13 =K13+(J13*K13)

Watch this table as you hit "F9" and you can see that the random numbers, service
factor and simulated service time all change.

Part 4 - The business model


Simulated Arrival Time: From the arrival simulation table Cell B22 = E12

Simulated Start Time: The logic is exactly the same as in the static model.
The first patient starts at the later of 9:30 or when they arrive. In this case,
Adams is simulated to be on time, so their service begins at 9:30. Cell C22
=MAX(B22, 9.5)
All other patients begin at the later of when they arrive or when Dr. Greenberg
completes the prior patient's service. Cell C23 = MAX(B23, E22)

Simulated Service Time: From the service simulation table Cell D22 = L12

Simulated Completion Time: = Simulated Start Time + Simulated Service Time. Cell
E22 = C22 + D22

Will he be late? Cell E31 uses an IF function to indicate whether Dr. Greenberg will
finish on time or be late. The formula in cell E31 is =IF(E29>12.25, "late", "on time"),
but it is easier to build it using the dialog box for the function.




Conclusion

We now have a fully live simulation of Dr. Greenberg's day.

Hit "F9" 20 times and keep track of the number of time Dr. Greenberg will be on time
versus late. It will vary a bit for each experiment, but my results indicate that the model
predicts he will be late about 70% to 80% of the time.

If Dr. Greenberg now understands that there is a 70% to 80% chance that he will be late,
how could he use this information to make sure he is on time for his flight AND respectful
of his patients' time?
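
To see where a figure like 70% to 80% comes from, here is a minimal Python
sketch that replays the entire simulation many times, the way repeated F9
presses do in Excel. Every distribution and schedule value below is assumed
for illustration; the real ones come from the problem's interval tables:

    import random

    # (factor, probability) pairs - assumed, not the problem's actual values
    ARRIVAL_FACTORS = [(-0.25, 0.2), (0.0, 0.5), (0.25, 0.3)]    # hours
    SERVICE_FACTORS = [(-0.20, 0.2), (0.0, 0.5), (0.20, 0.2), (0.40, 0.1)]

    SCHEDULED_ARRIVALS = [9.5, 9.75, 10.25, 10.5, 11.0, 11.25, 11.5, 11.75]
    SCHEDULED_SERVICE = [0.25, 1/3, 0.25, 1/6, 0.5, 0.25, 1/3, 0.25]  # hours

    def draw(table):
        # Pick a factor with the given probabilities (RANDBETWEEN + VLOOKUP).
        r, cum = random.random(), 0.0
        for factor, p in table:
            cum += p
            if r <= cum:
                return factor
        return table[-1][0]

    def one_day():
        prev_done = 9.5  # office opens at 9:30
        for sched_arr, sched_svc in zip(SCHEDULED_ARRIVALS, SCHEDULED_SERVICE):
            arrive = sched_arr + draw(ARRIVAL_FACTORS)
            service = sched_svc * (1 + draw(SERVICE_FACTORS))
            prev_done = max(arrive, prev_done) + service
        return prev_done

    trials = 10_000
    late = sum(one_day() > 12.25 for _ in range(trials))  # 12.25 = 12:15
    print(f"Estimated P(late) = {late / trials:.0%}")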

This document walks through a single queuing problem, but most of the work is
applicable to all of the problems we have studied.
There will always be at least one, and usually two, random number interval
tables needed.
Anything with a random number interval table will need to be simulated via a
Monte Carlo / random number process.
The business model will vary for each type of problem and may even vary for different
problems of the same type (two different queuing problems), but the development of the
business model is the part that needs YOUR sharp thinking and clear understanding of
the business being studied.


Appendix A - Activating the Excel add-ins (Excel 2007)


The course requires the use of two Excel add-ins. Both are provided as part of every copy
of Excel. To activate them, please do the following.
1. Click the Office Button (the large round button in the upper-left corner) and then
click on EXCEL OPTIONS.

2. Select Add-Ins.


3. Verify that Excel Add-ins appears in the Manage box and select Go.

4. Check the boxes for Analysis ToolPak and Solver Add-in, then click OK.


5. Verify that the add-ins are installed by clicking on the Data tab; Data Analysis and
Solver should now appear at the right end of the ribbon.
