
1. Inventory Model

Inventory models come in several types, all designed to help the engineer manager
make decisions regarding inventory. They are as follows:

a. Economic order quantity model – this is used to calculate the number of
items that should be ordered at one time to minimize the total yearly cost of
placing orders and carrying the items in inventory.
b. Production order quantity model – this is an economic order quantity technique
applied to production orders.
c. Back order inventory model – this is an inventory model used for planned
shortages.
d. Quantity discount model – an inventory model used to minimize the total cost
when quantity discounts are offered by suppliers.

Example: An auto parts supplier sells Hardy-brand batteries to car dealers and auto
mechanics. The annual demand is approximately 1,200 batteries. The supplier pays $28
for each battery and estimates that the annual holding cost is 30 percent of the battery’s
value. It costs approximately $20 to place an order (managerial and clerical costs). The
supplier currently orders 100 batteries per month.

Solution: We are given the following information:

annual demand: D = 1,200 batteries per year
item cost: c = $28 per battery
holding cost: H = ic = 0.30(28) = $8.40 per battery per year
order cost: S = $20 per order
current order quantity: Q = 100 batteries
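The source stops after listing the givens. The sketch below (not part of the original solution) completes the calculation using the standard EOQ formula Q* = √(2DS/H) and compares the annual cost of the optimal quantity with the supplier's current policy of 100 batteries per order.

```python
import math

D = 1200         # annual demand (batteries per year)
S = 20.0         # cost to place one order ($)
H = 8.40         # holding cost per battery per year ($)
Q_current = 100  # current order quantity

# Economic order quantity: Q* = sqrt(2DS/H)
Q_star = math.sqrt(2 * D * S / H)

def annual_cost(Q):
    """Total annual ordering cost (D/Q)S plus holding cost (Q/2)H."""
    return (D / Q) * S + (Q / 2) * H

print(round(Q_star))                        # about 76 batteries per order
print(round(annual_cost(Q_current), 2))     # 660.0 at Q = 100
print(round(annual_cost(Q_star), 2))        # 634.98 at the optimum
```

Ordering roughly 76 batteries at a time instead of 100 saves about $25 per year in combined ordering and holding costs.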

2. Queuing Theory

Queuing theory describes how to determine the number of service units
that will minimize both customer waiting time and the cost of service.

Queuing theory is applicable to companies where waiting lines are a common
situation. Examples are cars waiting for service at a car service center, ships and barges
waiting at the harbor to be loaded and unloaded by dockworkers, programs waiting to be
run in a computer system that processes jobs, and so on.
Example:
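The original leaves the worked example blank. As an illustrative sketch (not from the source), the standard single-server M/M/1 queue formulas can be computed directly; the arrival rate and service rate below are assumptions chosen for illustration.

```python
# Single-server (M/M/1) queue with assumed rates.
lam = 2.0  # average arrivals per hour (assumption for illustration)
mu = 3.0   # average customers served per hour (assumption for illustration)

rho = lam / mu                    # utilization of the server
L = lam / (mu - lam)              # average number of customers in the system
W = 1 / (mu - lam)                # average time a customer spends in the system
Lq = lam ** 2 / (mu * (mu - lam))  # average number waiting in line
Wq = lam / (mu * (mu - lam))       # average time spent waiting in line

print(L, W)  # 2.0 customers in the system, 1.0 hour in the system
```

With these rates the server is busy two-thirds of the time and an arriving customer spends, on average, a full hour in the system, forty minutes of it waiting in line.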

3. Network Models

These are models where large complex tasks are broken into smaller segments that can
be managed independently.

The two most prominent network models are:

a. The Program Evaluation and Review Technique (PERT) – a technique which enables
engineer managers to schedule, monitor, and control large and complex projects by
employing three time estimates for each activity.

b. The Critical Path Method (CPM) – a network technique using only one time
factor per activity that enables engineer managers to schedule, monitor, and control large
and complex projects.
Example: The objective in this method is to find the shortest path between two nodes
with minimal cost.

Answer: The shortest path from a to i is a → c → e → g → i. Distance = 7 + 6 + 10 + 6 = 29
units.
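The network diagram for this example is not reproduced in the text. The sketch below assumes a set of edge weights consistent with the stated answer: the weights 7, 6, 10, 6 along a–c–e–g–i come from the worked answer, while every other edge is hypothetical, added only so the graph has an alternative route. Dijkstra's algorithm then recovers the shortest path.

```python
import heapq

# Assumed graph: a-c, c-e, e-g, g-i match the worked answer (7, 6, 10, 6);
# the remaining edges are hypothetical alternatives for illustration.
graph = {
    'a': {'b': 9, 'c': 7},
    'b': {'d': 8},
    'c': {'e': 6},
    'd': {'f': 9},
    'e': {'g': 10},
    'f': {'h': 8},
    'g': {'i': 6},
    'h': {'i': 7},
    'i': {},
}

def dijkstra(graph, start, goal):
    """Return (distance, path) of the shortest path from start to goal."""
    pq = [(0, start, [start])]  # priority queue ordered by distance so far
    seen = set()
    while pq:
        dist, node, path = heapq.heappop(pq)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph[node].items():
            if nbr not in seen:
                heapq.heappush(pq, (dist + w, nbr, path + [nbr]))
    return float('inf'), []

dist, path = dijkstra(graph, 'a', 'i')
print(dist, path)  # 29 ['a', 'c', 'e', 'g', 'i']
```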
4. Forecasting

Forecasting involves the generation of a number, set of numbers, or scenario that
corresponds to a future occurrence. It is absolutely essential to short-range and long-range
planning. By definition, a forecast is based on past data, as opposed to a prediction, which is
more subjective and based on instinct, gut feel, or guess. For example, the evening news
gives the weather "forecast," not the weather "prediction." Regardless, the terms forecast and
prediction are often used interchangeably. For example, definitions of regression—a
technique sometimes used in forecasting—generally state that its purpose is to explain or
"predict."

Forecasting is based on a number of assumptions:

1. The past will repeat itself. In other words, what has happened in the past will happen
again in the future.

2. As the forecast horizon shortens, forecast accuracy increases. For instance, a
forecast for tomorrow will be more accurate than a forecast for next month; a forecast
for next month will be more accurate than a forecast for next year; and a forecast for
next year will be more accurate than a forecast for ten years in the future.

3. Forecasting in the aggregate is more accurate than forecasting individual items. This
means that a company will be able to forecast total demand over its entire spectrum
of products more accurately than it will be able to forecast individual stock-keeping
units (SKUs). For example, General Motors can more accurately forecast the total
number of cars needed for next year than the total number of white Chevrolet
Impalas with a certain option package.

4. Forecasts are seldom accurate. Furthermore, forecasts are almost never totally
accurate. While some are very close, few are "right on the money." Therefore, it is
wise to offer a forecast "range." If one were to forecast a demand of 100,000 units for
the next month, it is extremely unlikely that demand would equal 100,000 exactly.
However, a forecast of 90,000 to 110,000 would provide a much larger target for
planning.

William J. Stevenson lists a number of characteristics that are common to a good forecast:

 Accurate — some degree of accuracy should be determined and stated so that
comparison can be made to alternative forecasts.

 Reliable — the forecast method should consistently provide a good forecast if the
user is to establish some degree of confidence.

 Timely — a certain amount of time is needed to respond to the forecast so the
forecasting horizon must allow for the time necessary to make changes.

 Easy to use and understand — users of the forecast must be confident and
comfortable working with it.
 Cost-effective — the cost of making the forecast should not outweigh the benefits
obtained from the forecast.

Forecasting techniques range from the simple to the extremely complex. These techniques
are usually classified as being qualitative or quantitative.

QUALITATIVE TECHNIQUES

Qualitative forecasting techniques are generally more subjective than their quantitative
counterparts. Qualitative techniques are more useful in the earlier stages of the product life
cycle, when less past data exists for use in quantitative methods. Qualitative methods
include the Delphi technique, Nominal Group Technique (NGT), sales force opinions,
executive opinions, and market research.

THE DELPHI TECHNIQUE

The Delphi technique uses a panel of experts to produce a forecast. Each expert is
asked to provide a forecast specific to the need at hand. After the initial forecasts are made,
each expert reads what every other expert wrote and is, of course, influenced by their views.
A subsequent forecast is then made by each expert. Each expert then reads again what
every other expert wrote and is again influenced by the perceptions of the others. This
process repeats itself until each expert nears agreement on the needed scenario or
numbers.

NOMINAL GROUP TECHNIQUE

Nominal Group Technique is similar to the Delphi technique in that it utilizes a group
of participants, usually experts. After the participants respond to forecast-related questions,
they rank their responses in order of perceived relative importance. Then the rankings are
collected and aggregated. Eventually, the group should reach a consensus regarding the
priorities of the ranked issues.

SALES FORCE OPINIONS

The sales staff is often a good source of information regarding future demand. The
sales manager may ask for input from each sales-person and aggregate their responses into
a sales force composite forecast. Caution should be exercised when using this technique as
the members of the sales force may not be able to distinguish between what customers say
and what they actually do. Also, if the forecasts will be used to establish sales quotas, the
sales force may be tempted to provide lower estimates.

EXECUTIVE OPINIONS

Sometimes upper-level managers meet and develop forecasts based on their
knowledge of their areas of responsibility. This is sometimes referred to as a jury of
executive opinion.

MARKET RESEARCH

In market research, consumer surveys are used to establish potential demand. Such
marketing research usually involves constructing a questionnaire that solicits personal,
demographic, economic, and marketing information. On occasion, market researchers
collect such information in person at retail outlets and malls, where the consumer can
experience—taste, feel, smell, and see a particular product. The researcher must be careful
that the sample of people surveyed is representative of the desired consumer target.

QUANTITATIVE TECHNIQUES

Quantitative forecasting techniques are generally more objective than their qualitative
counterparts. Quantitative forecasts can be time-series forecasts (i.e., a projection of the
past into the future) or forecasts based on associative models (i.e., based on one or more
explanatory variables). Time-series data may have underlying behaviors that need to be
identified by the forecaster. In addition, the forecast may need to identify the causes of the
behavior. Some of these behaviors may be patterns or simply random variations. Among the
patterns are:

 Trends, which are long-term movements (up or down) in the data.

 Seasonality, which produces short-term variations that are usually related to the time
of year, month, or even a particular day, as witnessed by retail sales at Christmas or
the spikes in banking activity on the first of the month and on Fridays.

 Cycles, which are wavelike variations lasting more than a year that are usually tied to
economic or political conditions.

 Irregular variations that do not reflect typical behavior, such as a period of extreme
weather or a union strike.

 Random variations, which encompass all non-typical behaviors not accounted for by
the other classifications.

Example: In naïve forecasting, the forecast for the next period is set equal to the actual
demand of the most recent period. An example of naïve forecasting is presented in Table 1.


Another simple technique is the use of averaging. To make a forecast using
averaging, one simply takes the average of some number of periods of past data by
summing each period and dividing the result by the number of periods. This technique has
been found to be very effective for short-range forecasting.

Variations of averaging include the moving average, the weighted average, and the
weighted moving average. A moving average takes a predetermined number of periods,
sums their actual demand, and divides by the number of periods to reach a forecast. For
each subsequent period, the oldest period of data drops off and the latest period is added.
Assuming a three-month moving average and using the data from Table 1, one would simply
add 45 (January), 60 (February), and 72 (March) and divide by three to arrive at a forecast
for April:
(45 + 60 + 72) ÷ 3 = 177 ÷ 3 = 59
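The moving-average calculation above, and the weighted variant the text mentions, can be sketched as follows. The January–March figures come from the example; the weights in the weighted version are assumptions chosen for illustration.

```python
def moving_average(demand, window=3):
    """Forecast the next period as the mean of the last `window` actuals."""
    recent = demand[-window:]
    return sum(recent) / len(recent)

def weighted_moving_average(demand, weights):
    """Weight recent periods more heavily; weights should sum to 1."""
    recent = demand[-len(weights):]
    return sum(w * d for w, d in zip(weights, recent))

actuals = [45, 60, 72]  # January, February, March

print(moving_average(actuals))                          # 59.0, the April forecast
print(weighted_moving_average(actuals, [0.2, 0.3, 0.5]))  # assumed weights
```

The weighted version responds faster to the recent upward trend, which is why it forecasts a higher April figure than the plain three-month average.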

5. Regression Analysis

Regression analysis is a quantitative research method which is used when the study
involves modelling and analysing several variables, where the relationship includes a
dependent variable and one or more independent variables. In simple terms, regression
analysis is a quantitative method used to test the nature of relationships between a
dependent variable and one or more independent variables.

The basic form of regression models includes unknown parameters (β), independent
variables (X), and the dependent variable (Y).

A regression model, basically, specifies the relation of the dependent variable (Y) to a
function of the independent variables (X) and unknown parameters (β):

Y ≈ f (X, β)

The regression equation can be used to predict the values of 'y' if the value of 'x' is given,
where 'y' and 'x' are two sets of measures for a sample of size 'n'. The formula for the
regression equation is

y* = a + bx

where a is the intercept and b is the slope of the regression line.
Do not be intimidated by the visual complexity of correlation and regression formulae.
You don't have to apply the formulas manually, and correlation and regression analyses can
be run with popular analytical software such as Microsoft Excel, Microsoft Access, SPSS
and others.

Linear regression analysis is based on the following set of assumptions:

a. Assumption of linearity. There is a linear relationship between the dependent and
independent variables.

b. Assumption of homoscedasticity. Data values for the dependent and independent
variables have equal variances.

c. Assumption of absence of collinearity or multicollinearity. There is no
correlation between two or more independent variables.

d. Assumption of normal distribution. The data for the independent variables and
dependent variable are normally distributed.

Example: Determine the regression equation by using the regression slope coefficient and
intercept value as shown in the regression table given below.

X Values    Y Values
55          52
60          54
65          56
70          58
80          62

For the given set of data, solve for the regression slope and intercept values.

Solution:

Let us count the number of values.

N=5

Determine the values of XY and X².

X Value    Y Value    X*Y     X*X
55         52         2860    3025
60         54         3240    3600
65         56         3640    4225
70         58         4060    4900
80         62         4960    6400

Determine the following values: ∑X, ∑Y, ∑XY, ∑X².

∑X = 330
∑Y = 282
∑XY = 18760
∑X² = 22150

Substitute the values in the slope formula:

Slope (b) = [N∑XY − (∑X)(∑Y)] / [N∑X² − (∑X)²]

= [(5)(18760) − (330)(282)] / [(5)(22150) − (330)²]

b = 0.4

Substitute the values in the intercept formula:

Intercept (a) = [∑Y − b(∑X)] / N

= [282 − 0.4(330)] / 5

a = 30

Substitute the regression coefficient value and intercept value in the regression equation:

Regression Equation (y) = a + bx = 30 + 0.4x
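The hand computation above can be sketched directly from the least-squares formulas, using the X and Y values from the table:

```python
xs = [55, 60, 65, 70, 80]
ys = [52, 54, 56, 58, 62]

n = len(xs)
sum_x = sum(xs)
sum_y = sum(ys)
sum_xy = sum(x * y for x, y in zip(xs, ys))   # ∑XY
sum_x2 = sum(x * x for x in xs)                # ∑X²

# Least-squares slope and intercept
b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
a = (sum_y - b * sum_x) / n

print(b, a)  # 0.4 30.0
```

Any statistical package will report the same coefficients; the point of the sketch is only that the tabulated sums plug straight into the two formulas.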

6. Simulation

Simulation is a flexible methodology we can use to analyze the behavior of a present or
proposed business activity, new product, manufacturing line or plant expansion, and so on
(analysts call this the 'system' under study). By performing simulations and analyzing the
results, we can gain an understanding of how a present system operates, and what would
happen if we changed it - or we can estimate how a proposed new system would
behave. Often - but not always - a simulation deals with uncertainty, in the system itself, or
in the world around it.

Simulation Applications

Simulation is one of the most widely used quantitative methods -- because it is so flexible
and can yield so many useful results. Here's just a sample of the applications where
simulation is used:

• Choosing drilling projects for oil and natural gas

• Evaluating environmental impacts of a new highway or industrial plant

• Setting stock levels to meet fluctuating demand at retail stores

• Forecasting sales and production requirements for a new drug

• Planning aircraft sorties and ship movements in the military

• Planning for retirement, given expenses and investment performance

• Deciding on reservations and overbooking policies for an airline

• Selecting projects with uncertain payoffs in capital budgeting

Simulation Models

In a simulation, we perform experiments on a model of the real system, rather than the
real system itself. We do this because it is faster, cheaper, or safer to perform experiments
on the model. While simulations can be performed using physical models -- such as a scale
model of an airplane -- our focus here is on simulations carried out on a computer.

Computer simulations use a mathematical model of the real system. In such a model we
use variables to represent key numerical measures of the inputs and outputs of the system,
and we use formulas, programming statements, or other means to express mathematical
relationships between the inputs and outputs. When the simulation deals with uncertainty,
the model will include uncertain variables -- whose values are not under our control -- as well
as decision variables or parameters that we can control. The uncertain variables are
represented by random number generators that return sample values from a representative
distribution of possible values for each uncertain element in each experimental trial or
replication of the model. A simulation run includes many hundreds or thousands of trials.

Our simulation model -- often called a risk model -- will calculate the impact of the
uncertain variables and the decisions we make on outcomes that we care about, such as
profit and loss, investment returns, environmental consequences, and the like. As part of
our model design, we must choose how numerical values for the uncertain variables will be
sampled on each trial.
Simulation Methods

Complex manufacturing and logistics systems often call for discrete event simulation,
where there are "flows" of materials or parts, people, etc. through the system, and many
steps or stages with complex interrelationships. Special simulation modeling languages are
often used for these applications.

But a great many situations -- including almost all of the examples above -- have been
successfully handled with simulation models created in a spreadsheet using Microsoft Excel.
This minimizes the learning curve, since you can apply your spreadsheet skills to create the
model. Simple steps or stages, such as inventory levels in different periods, are easy to
represent in columns of a spreadsheet model. You can solve a wide range of problems with
Monte Carlo simulation of models created in Excel, or in a programming language such as
Visual Basic, C++ or C#.

Running a simulation generates a great deal of statistical data that must be analyzed
with appropriate tools. Professional simulation software, such as Frontline Systems' Risk
Solver, allows you to easily create charts and graphs, a wide range of statistics and risk
measures, perform sensitivity analysis and parameterized simulations, and use advanced
methods for simulation optimization.

Monte Carlo Simulation

Monte Carlo simulation -- named after the city in Monaco famed for its casinos and
games of chance -- is a powerful method for studying the behavior of a system, as
expressed in a mathematical model on a computer. As the name implies, Monte Carlo
methods rely on random sampling of values for uncertain variables that are "plugged into"
the simulation model and used to calculate outcomes of interest. With the aid of software,
we can obtain statistics and view charts and graphs of the results.

Monte Carlo simulation is especially helpful when there are several different sources of
uncertainty that interact to produce an outcome. For example, if we're dealing with uncertain
market demand, competitors' pricing, and variable production and raw materials costs at the
same time, it can be very difficult to estimate the impacts of these factors -- in combination --
on Net Profit. Monte Carlo simulation can quickly analyze thousands of 'what-if' scenarios,
often yielding surprising insights into what can go right, what can go wrong, and what we can
do about it.
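The net-profit scenario just described can be sketched as a small Monte Carlo run. Every distribution and cost figure below is an assumption invented for illustration, not data from the source; the point is the mechanics of sampling uncertain inputs and summarizing the resulting outcomes.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def one_trial():
    """One 'what-if' scenario: sample each uncertain input, return net profit."""
    demand = random.gauss(10000, 1500)      # assumed uncertain market demand (units)
    price = random.uniform(22.0, 28.0)      # assumed competitive selling price ($)
    unit_cost = random.uniform(14.0, 18.0)  # assumed production + materials cost ($)
    fixed_cost = 60000.0                    # assumed fixed cost per period ($)
    return demand * (price - unit_cost) - fixed_cost

trials = [one_trial() for _ in range(10000)]
mean_profit = sum(trials) / len(trials)
prob_loss = sum(t < 0 for t in trials) / len(trials)

print(round(mean_profit), round(prob_loss, 3))
```

Even though each input looks benign on its own, the run reveals a meaningful chance of a net loss, exactly the kind of insight into "what can go wrong" that the text describes.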

Example: The Lajwaab Bakery Shop keeps stock of a popular brand of cake. Previous
experience indicates the daily demand as given below:
Consider the following sequence of random numbers:

21, 27, 47, 54, 60, 39, 43, 91, 25, 20

Using this sequence, simulate the demand for the next 10 days. Find out the stock situation,
if the owner of the bakery shop decides to make 30 cakes every day. Also estimate the daily
average demand for the cakes on the basis of simulated data.

Solution.

Using the daily demand distribution, we obtain a probability distribution as shown in the
following table.

Table 1

At the start of the simulation, the first random number 21 generates a demand of 25
cakes as shown in Table 2. The demand is determined from the cumulative
probability values in Table 1. At the end of the first day, the closing quantity is 5 (30 − 25)
cakes.

Similarly, we can calculate the demand for the remaining days.


Table 2

Total demand = 320

Average demand = Total demand/no. of days

The daily average demand for the cakes = 320/10 = 32 cakes.

7. Linear Programming

Linear programming is a mathematical technique used in computer modeling (simulation) to
find the best possible solution in allocating limited resources (energy, machines, materials,
money, personnel, space, time, etc.) to achieve maximum profit or minimum cost. However,
it is applicable only where all relationships are linear (see linear relationship), and can
accommodate only a limited class of cost functions. For problems involving more complex
cost functions, another technique called 'mixed integer modeling' is employed. Linear
programming was developed by the Russian economist Leonid Kantorovich (1912-86) and
the Dutch-American economist Tjalling C. Koopmans (1910-85), building on the work of the
Russian mathematician Andrei Nikolaevich Kolmogorov (1903-87).

Example: A calculator company produces a scientific calculator and a graphing calculator.


Long-term projections indicate an expected demand of at least 100 scientific
and 80 graphing calculators each day. Because of limitations on production capacity, no
more than 200 scientific and 170 graphing calculators can be made daily. To satisfy a
shipping contract, a total of at least 200 calculators must be shipped each day.

If each scientific calculator sold results in a $2 loss, but each graphing calculator
produces a $5 profit, how many of each type should be made daily to maximize net profit?

The question asks for the optimal number of calculators, so my variables will stand
for that:

x: number of scientific calculators produced
y: number of graphing calculators produced

Since they can't produce negative numbers of calculators, I have the two
constraints, x ≥ 0 and y ≥ 0. But in this case, I can ignore these constraints, because I
already have that x ≥ 100 and y ≥ 80. The exercise also gives
maximums: x ≤ 200 and y ≤ 170. The minimum shipping requirement gives me x + y ≥ 200;
in other words, y ≥ –x + 200. The profit relation will be my optimization equation: P = –2x +
5y. So the entire system is:

P = –2x + 5y, subject to:

100 ≤ x ≤ 200

80 ≤ y ≤ 170
y ≥ –x + 200

The feasibility region graphs as a polygon with corner points at (100, 170), (200, 170),
(200, 80), (120, 80), and (100, 100). When you test these corner points, you should obtain
the maximum value of P = 650 at (x, y) = (100, 170). That is, the solution is "100 scientific
calculators and 170 graphing calculators".
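Because the objective is linear, the optimum must occur at a corner of the feasible region, so the corner test above can be sketched as a simple evaluation loop:

```python
# Corner points of the feasible region (from the constraints above)
corners = [(100, 170), (200, 170), (200, 80), (120, 80), (100, 100)]

def profit(x, y):
    """Net profit: $2 loss per scientific, $5 profit per graphing calculator."""
    return -2 * x + 5 * y

# Evaluate P at every corner and keep the best one
best = max(corners, key=lambda pt: profit(*pt))
print(best, profit(*best))  # (100, 170) 650
```

For larger problems one would use a solver rather than enumerate corners by hand, but the principle is the same: a linear objective over a convex polygon is maximized at a vertex.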

8. Sampling Theory

Sampling theory is a study of the relationships existing between a population and samples
drawn from the population. Sampling theory is applicable only to random samples. For this
purpose the population or universe may be defined as an aggregate of items possessing a
common trait or traits. In other words, a universe is the complete group of items about
which knowledge is sought. The universe may be finite or infinite. A finite universe is one
which has a definite and certain number of items; when the number of items is uncertain
and unlimited, the universe is said to be infinite. Similarly, the universe may be hypothetical
or existent. In the former case the universe in fact does not exist and we can only imagine
the items constituting it. Tossing a coin or throwing a die are examples of a hypothetical
universe. An existent universe is a universe of concrete objects, i.e., the universe where the
items constituting it really exist. On the other hand, the term sample refers to that part of
the universe which is selected for the purpose of investigation. The theory of sampling
studies the relationships that exist between the universe and the sample or samples drawn
from it.

The main problem of sampling theory is the problem of the relationship between a
parameter and a statistic. The theory of sampling is concerned with estimating the
properties of the population from those of the sample and also with gauging the precision
of the estimate. This sort of movement from the particular (sample) towards the general
(universe) is what is known as statistical induction or statistical inference. In clearer terms,
"from the sample we attempt to draw inferences concerning the universe. In order to be
able to follow this inductive method, we first follow a deductive argument: we imagine a
population or universe (finite or infinite) and investigate the behaviour of the samples
drawn from this universe applying the laws of probability." The methodology dealing with
all this is known as sampling theory.
Example: An auto analyst is conducting a satisfaction survey, sampling from a list of 10,000
new car buyers. The list includes 2,500 Ford buyers, 2,500 GM buyers, 2,500 Honda buyers,
and 2,500 Toyota buyers. The analyst selects a sample of 400 car buyers, by randomly
sampling 100 buyers of each brand.

Is this an example of a simple random sample?

(A) Yes, because each buyer in the sample was randomly sampled.
(B) Yes, because each buyer in the sample had an equal chance of being sampled.
(C) Yes, because car buyers of every brand were equally represented in the sample.
(D) No, because every possible 400-buyer sample did not have an equal chance of being
chosen.
(E) No, because the population consisted of purchasers of four different brands of car.

Solution

The correct answer is (D). A simple random sample requires that
every sample of size n (in this problem, n is equal to 400) has an equal chance of
being selected. In this problem, there was a 100% chance that the sample would
include 100 purchasers of each brand of car. There was 0% chance that the sample
would include, for example, 99 Ford buyers, 101 Honda buyers, 100 Toyota buyers,
and 100 GM buyers. Thus, all possible samples of size 400 did not have an equal
chance of being selected; so this cannot be a simple random sample.

The fact that each buyer in the sample was randomly sampled is a necessary
condition for a simple random sample, but it is not sufficient. Similarly, the fact that
each buyer in the sample had an equal chance of being selected is characteristic of a
simple random sample, but it is not sufficient. The sampling method in this problem
used random sampling and gave each buyer an equal chance of being selected; but
the sampling method was actually stratified random sampling.
The fact that car buyers of every brand were equally represented in the sample is
irrelevant to whether the sampling method was simple random sampling. Similarly,
the fact that the population consisted of buyers of different car brands is irrelevant.
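The stratified design in this problem can be sketched in code. The buyer identifiers below are hypothetical placeholders; the point is that drawing exactly 100 buyers at random within each brand gives every buyer an equal individual selection chance while fixing the per-brand counts, which is what rules out a simple random sample.

```python
import random

random.seed(1)  # reproducible illustration

# Hypothetical population: 2,500 buyers per brand (IDs invented for illustration)
population = {brand: [f"{brand}-{i}" for i in range(2500)]
              for brand in ("Ford", "GM", "Honda", "Toyota")}

# Stratified random sample: exactly 100 buyers drawn from each brand stratum
sample = []
for brand, buyers in population.items():
    sample.extend(random.sample(buyers, 100))

counts = {brand: sum(s.startswith(brand) for s in sample) for brand in population}
print(len(sample), counts)  # 400 buyers, exactly 100 per brand every time
```

Under true simple random sampling one would instead call `random.sample` once on the pooled 10,000 buyers, and the per-brand counts would then vary from sample to sample.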

9. Statistical Decision Theory

Decision theory is an interdisciplinary approach to determine how decisions
are made given unknown variables and an uncertain decision environment
framework. Decision theory brings together psychology, statistics, philosophy and
mathematics to analyze the decision-making process. Decision theory is closely
related to game theory and is studied within the context of understanding the
activities and decisions underpinning activities such as auctions, evolution and
marketing.

THE THREE AREAS OF DECISION THEORY

There are three main areas of decision theory. Each studies a different type of
decision making. Descriptive decision theory examines how irrational beings make
decisions. Prescriptive decision theory tries to provide guidelines for agents to make
the best possible decisions given an uncertain decision-making framework.
Normative decision theory provides guidance for making decisions given a set of
values.

Example: A common example of decision theory stems from the prisoner's
dilemma, in which two individuals are faced with an uncertain decision whose
outcome depends not only on their own choice but also on that of the other
individual. Since both parties do not know what action the other person will take,
this results in an uncertain decision framework. While mathematics and statistical
models determine what the optimal decision should be, psychology and philosophy
introduce factors of human behavior to suggest the most likely outcome.
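The prisoner's dilemma can be sketched with the textbook payoff matrix (the specific numbers below are the standard convention, not taken from this source) by checking each player's best response to the other's action:

```python
# Standard prisoner's dilemma payoffs: years in prison, negated as utilities.
# (player 1 action, player 2 action) -> (player 1 payoff, player 2 payoff)
payoffs = {
    ("silent", "silent"): (-1, -1),
    ("silent", "confess"): (-10, 0),
    ("confess", "silent"): (0, -10),
    ("confess", "confess"): (-5, -5),
}
actions = ("silent", "confess")

def best_response(opponent_action, player):
    """Action maximizing this player's payoff, given the opponent's action."""
    if player == 0:
        return max(actions, key=lambda a: payoffs[(a, opponent_action)][0])
    return max(actions, key=lambda a: payoffs[(opponent_action, a)][1])

# A pair is a Nash equilibrium when each action is a best response to the other.
equilibria = [(a, b) for a in actions for b in actions
              if best_response(b, 0) == a and best_response(a, 1) == b]
print(equilibria)  # [('confess', 'confess')]
```

Confessing is each player's best response regardless of what the other does, so the unique equilibrium is mutual confession, even though both would be better off staying silent, which is exactly the tension the example illustrates.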
