1. Inventory Model
Inventory models consist of several types, all designed to help the engineer manager
make decisions regarding inventory. They are as follows:
a. Economic order quantity model – This one is used to calculate the number of
items that should be ordered at one time to minimize the total yearly cost of
placing orders and carrying the items in inventory.
b. Production order quantity model – this is an economic order quantity technique
applied to production orders.
c. Back order inventory model - this is an inventory model used for planned
shortages.
d. Quantity discount model — an inventory model used to minimize the total cost
when quantity discounts are offered by suppliers.
Example: An auto parts supplier sells Hardy-brand batteries to car dealers and auto
mechanics. The annual demand is approximately 1,200 batteries. The supplier pays $28
for each battery and estimates that the annual holding cost is 30 percent of the battery’s
value. It costs approximately $20 to place an order (managerial and clerical costs). The
supplier currently orders 100 batteries per month.
Q = 100 batteries
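The example stops short of computing the economic order quantity. A minimal sketch, using the standard EOQ formula Q* = √(2DS/H) with the figures given above (the formula itself is standard and not stated in the text):

```python
import math

# Figures from the example
D = 1200          # annual demand (batteries)
S = 20.0          # cost to place one order ($)
H = 0.30 * 28.0   # annual holding cost: 30% of the $28 unit value = $8.40

# Economic order quantity: Q* = sqrt(2DS / H)
Q_star = math.sqrt(2 * D * S / H)
print(round(Q_star))  # → 76 batteries per order

# Total annual cost of ordering in lots of Q: ordering cost + holding cost
def total_cost(Q):
    return (D / Q) * S + (Q / 2) * H

print(round(total_cost(100), 2))     # current policy of 100 per order → 660.0
print(round(total_cost(Q_star), 2))  # cost at the EOQ → 634.98
```

Ordering about 76 batteries at a time would therefore cost less per year than the current policy of 100 per order.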
2. Queuing Theory
Queuing theory describes how to determine the number of service units that will
minimize both customer waiting time and the cost of service.
Queuing theory is applicable to companies where waiting lines are a common
situation. Examples are cars waiting for service at a car service center, ships and barges
waiting at the harbor to be loaded and unloaded by dockworkers, and programs waiting to
be run in a computer system that processes jobs.
Example:
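No worked example survives here, so as a sketch the standard single-server (M/M/1) queuing formulas can be applied to hypothetical figures for a car service center (the arrival and service rates below are assumptions):

```python
# Hypothetical figures: cars arrive at a service center at a rate of
# lam = 4 per hour; a single mechanic services mu = 6 per hour (M/M/1).
lam, mu = 4.0, 6.0

rho = lam / mu            # utilization of the server
L   = rho / (1 - rho)     # average number of cars in the system
Lq  = rho**2 / (1 - rho)  # average number waiting in line
W   = 1 / (mu - lam)      # average time in the system (hours)
Wq  = rho / (mu - lam)    # average waiting time in line (hours)

print(f"utilization {rho:.2f}, cars in system {L:.1f}, wait in line {Wq*60:.0f} min")
```

To decide how many service units to provide, one would repeat such calculations for multi-server models with s = 1, 2, 3, … servers and choose the s that minimizes the combined cost of waiting and of service.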
3. Network Models
These are models where large complex tasks are broken into smaller segments that can
be managed independently.
b. The Critical Path Method (CPM) - this is a network technique using only one time
factor per activity that enables engineer managers to schedule, monitor, and control large
and complex projects.
The criterion of this method is to find the critical path – the longest-duration path of
activities through the network – since any delay along that path delays the entire project.
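A minimal critical-path sketch over a hypothetical five-activity network (all activity names, durations, and precedences below are assumptions) shows the single-time-estimate calculation CPM performs:

```python
# Hypothetical project: activity -> (duration in days, predecessors).
activities = {
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
    "E": (1, ["D"]),
}

# Forward pass: earliest finish time of each activity.
earliest_finish = {}
def ef(act):
    if act not in earliest_finish:
        dur, preds = activities[act]
        earliest_finish[act] = dur + max((ef(p) for p in preds), default=0)
    return earliest_finish[act]

# Project length = latest earliest-finish; here the critical path is A-B-D-E.
project_length = max(ef(a) for a in activities)
print(project_length)  # → 13  (3 + 4 + 5 + 1)
```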
1. The past will repeat itself. In other words, what has happened in the past will happen
again in the future.
3. Forecasting in the aggregate is more accurate than forecasting individual items. This
means that a company will be able to forecast total demand over its entire spectrum
of products more accurately than it will be able to forecast individual stock-keeping
units (SKUs). For example, General Motors can more accurately forecast the total
number of cars needed for next year than the total number of white Chevrolet
Impalas with a certain option package.
4. Forecasts are seldom accurate. Forecasts are almost never totally accurate; while
some are very close, few are "right on the money." Therefore, it is
wise to offer a forecast "range." If one were to forecast a demand of 100,000 units for
the next month, it is extremely unlikely that demand would equal 100,000 exactly.
However, a forecast of 90,000 to 110,000 would provide a much larger target for
planning.
William J. Stevenson lists a number of characteristics that are common to a good forecast:
Reliable — the forecast method should consistently provide a good forecast if the
user is to establish some degree of confidence.
Easy to use and understand — users of the forecast must be confident and
comfortable working with it.
Cost-effective — the cost of making the forecast should not outweigh the benefits
obtained from the forecast.
Forecasting techniques range from the simple to the extremely complex. These techniques
are usually classified as being qualitative or quantitative.
QUALITATIVE TECHNIQUES
Qualitative forecasting techniques are generally more subjective than their quantitative
counterparts. Qualitative techniques are more useful in the earlier stages of the product life
cycle, when less past data exists for use in quantitative methods. Qualitative methods
include the Delphi technique, Nominal Group Technique (NGT), sales force opinions,
executive opinions, and market research.
The Delphi technique uses a panel of experts to produce a forecast. Each expert is
asked to provide a forecast specific to the need at hand. After the initial forecasts are made,
each expert reads what every other expert wrote and is, of course, influenced by their views.
A subsequent forecast is then made by each expert. Each expert then reads again what
every other expert wrote and is again influenced by the perceptions of the others. This
process repeats until the experts near agreement on the needed scenario or numbers.
Nominal Group Technique is similar to the Delphi technique in that it utilizes a group
of participants, usually experts. After the participants respond to forecast-related questions,
they rank their responses in order of perceived relative importance. Then the rankings are
collected and aggregated. Eventually, the group should reach a consensus regarding the
priorities of the ranked issues.
The sales staff is often a good source of information regarding future demand. The
sales manager may ask for input from each salesperson and aggregate their responses into
a sales force composite forecast. Caution should be exercised when using this technique as
the members of the sales force may not be able to distinguish between what customers say
and what they actually do. Also, if the forecasts will be used to establish sales quotas, the
sales force may be tempted to provide lower estimates.
EXECUTIVE OPINIONS
A forecast may also be drawn from the pooled judgment of a group of high-level
executives, who meet and jointly arrive at a consensus estimate of demand.
MARKET RESEARCH
In market research, consumer surveys are used to establish potential demand. Such
marketing research usually involves constructing a questionnaire that solicits personal,
demographic, economic, and marketing information. On occasion, market researchers
collect such information in person at retail outlets and malls, where the consumer can
experience—taste, feel, smell, and see a particular product. The researcher must be careful
that the sample of people surveyed is representative of the desired consumer target.
QUANTITATIVE TECHNIQUES
Quantitative forecasting techniques are generally more objective than their qualitative
counterparts. Quantitative forecasts can be time-series forecasts (i.e., a projection of the
past into the future) or forecasts based on associative models (i.e., based on one or more
explanatory variables). Time-series data may have underlying behaviors that need to be
identified by the forecaster. In addition, the forecast may need to identify the causes of the
behavior. Some of these behaviors may be patterns or simply random variations. Among the
patterns are:
Seasonality, which produces short-term variations that are usually related to the time
of year, month, or even a particular day, as witnessed by retail sales at Christmas or
the spikes in banking activity on the first of the month and on Fridays.
Cycles, which are wavelike variations lasting more than a year that are usually tied to
economic or political conditions.
Irregular variations that do not reflect typical behavior, such as a period of extreme
weather or a union strike.
Random variations, which encompass all non-typical behaviors not accounted for by
the other classifications.
A simple time-series technique is averaging. Variations of averaging include the
moving average, the weighted average, and the
weighted moving average. A moving average takes a predetermined number of periods,
sums their actual demand, and divides by the number of periods to reach a forecast. For
each subsequent period, the oldest period of data drops off and the latest period is added.
Assuming a three-month moving average and using the data from Table 1, one would simply
add 45 (January), 60 (February), and 72 (March) and divide by three to arrive at a forecast
for April:
(45 + 60 + 72) ÷ 3 = 177 ÷ 3 = 59
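The moving-average calculation above, along with the weighted variant mentioned, can be sketched as follows (the weights in the weighted version are hypothetical):

```python
def moving_average(demand, n=3):
    """Forecast for the next period: mean of the last n actual demands."""
    return sum(demand[-n:]) / n

def weighted_moving_average(demand, weights):
    """Most recent period gets the last weight; weights should sum to 1."""
    recent = demand[-len(weights):]
    return sum(w * d for w, d in zip(weights, recent))

demand = [45, 60, 72]          # Jan, Feb, Mar actuals from the text
print(moving_average(demand))  # → 59.0, the April forecast computed above

# Hypothetical weights favoring the most recent month:
print(weighted_moving_average(demand, [0.2, 0.3, 0.5]))  # → 63.0
```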
5. Regression Analysis
Regression analysis is a quantitative research method used when the study involves
modelling and analysing several variables, where the relationship includes a dependent
variable and one or more independent variables. In simple terms, regression analysis
tests the nature of the relationship between a dependent variable and one or more
independent variables.
The basic form of regression models includes unknown parameters (β), independent
variables (X), and the dependent variable (Y).
A regression model specifies the relation of the dependent variable (Y) to a function of
the independent variables (X) and the unknown parameters (β):
Y ≈ f (X, β)
The regression equation can be used to predict the value of y for a given value of x,
where y and x are two sets of measures from a sample of size n. The formula for the
regression equation is
ŷ = a + bx
where ŷ is the predicted value of the dependent variable, a is the intercept, and b is the
slope (regression coefficient).
d. Assumption of normal distribution – the data for the independent variables and the
dependent variable are normally distributed.
Example: Determine the regression equation by using the regression slope coefficient and intercept
value as shown in the regression table given below.
X Values Y Values
55 52
60 54
65 56
70 58
80 62
For the given set of data, solve for the regression slope and intercept values.
Solution:
N = 5

X      Y      XY      X²
55     52     2860    3025
60     54     3240    3600
65     56     3640    4225
70     58     4060    4900
80     62     4960    6400

∑X = 330, ∑Y = 282, ∑XY = 18760, ∑X² = 22150

b = (N∑XY − ∑X∑Y) / (N∑X² − (∑X)²)
  = ((5)(18760) − (330)(282)) / ((5)(22150) − (330)²)
  = 740 / 1850
b = 0.4

a = (∑Y − b∑X) / N = (282 − 0.4(330)) / 5 = 150 / 5
a = 30
Substituting the regression coefficient and intercept values into the regression equation:
Regression Equation: y = a + bx = 30 + 0.4x
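The slope and intercept computation can be verified with a short script using the same sum formulas (a sketch for checking the arithmetic):

```python
X = [55, 60, 65, 70, 80]
Y = [52, 54, 56, 58, 62]
N = len(X)

sum_x  = sum(X)                          # 330
sum_y  = sum(Y)                          # 282
sum_xy = sum(x * y for x, y in zip(X, Y))  # 18760
sum_x2 = sum(x * x for x in X)           # 22150

# Least-squares slope and intercept
b = (N * sum_xy - sum_x * sum_y) / (N * sum_x2 - sum_x ** 2)
a = (sum_y - b * sum_x) / N

print(round(b, 4), round(a, 4))   # → 0.4 30.0
print(round(a + b * 75, 2))       # predicted y at x = 75 → 60.0
```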
6. Simulation
Simulation Applications
Simulation is one of the most widely used quantitative methods because it is so flexible
and can yield so many useful results.
Simulation Models
In a simulation, we perform experiments on a model of the real system, rather than the
real system itself. We do this because it is faster, cheaper, or safer to perform experiments
on the model. While simulations can be performed using physical models -- such as a scale
model of an airplane -- our focus here is on simulations carried out on a computer.
Computer simulations use a mathematical model of the real system. In such a model we
use variables to represent key numerical measures of the inputs and outputs of the system,
and we use formulas, programming statements, or other means to express mathematical
relationships between the inputs and outputs. When the simulation deals with uncertainty,
the model will include uncertain variables -- whose values are not under our control -- as well
as decision variables or parameters that we can control. The uncertain variables are
represented by random number generators that return sample values from a representative
distribution of possible values for each uncertain element in each experimental trial or
replication of the model. A simulation run includes many hundreds or thousands of trials.
Our simulation model -- often called a risk model -- will calculate the impact of the
uncertain variables and the decisions we make on outcomes that we care about, such as
profit and loss, investment returns, environmental consequences, and the like. As part of
our model design, we must choose how numerical values for the uncertain variables will be
sampled on each trial.
Simulation Methods
Complex manufacturing and logistics systems often call for discrete event simulation,
where there are "flows" of materials or parts, people, etc. through the system, and many
steps or stages with complex interrelationships. Special simulation modeling languages are
often used for these applications.
But a great many situations have been successfully handled with simulation models
created in a spreadsheet using Microsoft Excel.
This minimizes the learning curve, since you can apply your spreadsheet skills to create the
model. Simple steps or stages, such as inventory levels in different periods, are easy to
represent in columns of a spreadsheet model. You can solve a wide range of problems with
Monte Carlo simulation of models created in Excel, or in a programming language such as
Visual Basic, C++ or C#.
Running a simulation generates a great deal of statistical data that must be analyzed
with appropriate tools. Professional simulation software, such as Frontline Systems' Risk
Solver, allows you to create charts and graphs, compute a wide range of statistics and risk
measures, perform sensitivity analysis and parameterized simulations, and use advanced
methods for simulation optimization.
Monte Carlo simulation -- named after the city in Monaco famed for its casinos and
games of chance -- is a powerful method for studying the behavior of a system, as
expressed in a mathematical model on a computer. As the name implies, Monte Carlo
methods rely on random sampling of values for uncertain variables that are "plugged into"
the simulation model and used to calculate outcomes of interest. With the aid of software,
we can obtain statistics and view charts and graphs of the results. To learn more, consult
our Monte Carlo simulation tutorial.
Monte Carlo simulation is especially helpful when there are several different sources of
uncertainty that interact to produce an outcome. For example, if we're dealing with uncertain
market demand, competitors' pricing, and variable production and raw materials costs at the
same time, it can be very difficult to estimate the impacts of these factors -- in combination --
on Net Profit. Monte Carlo simulation can quickly analyze thousands of 'what-if' scenarios,
often yielding surprising insights into what can go right, what can go wrong, and what we can
do about it.
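A minimal Monte Carlo sketch of the Net Profit scenario just described; every distribution and figure below is an assumption chosen purely for illustration:

```python
import random

random.seed(42)  # fix the seed so the trials are reproducible

def one_trial():
    # Hypothetical uncertain inputs (all distributions are assumptions):
    demand = random.gauss(10_000, 1_500)           # market demand, units
    price = random.uniform(8.0, 12.0)              # price under competition, $
    unit_cost = random.triangular(5.0, 7.5, 6.0)   # production + materials, $
    fixed_cost = 25_000.0
    return demand * (price - unit_cost) - fixed_cost

# A simulation run of many trials ("what-if" scenarios)
trials = [one_trial() for _ in range(10_000)]
mean_profit = sum(trials) / len(trials)
loss_prob = sum(t < 0 for t in trials) / len(trials)

print(f"mean net profit over 10,000 trials: ${mean_profit:,.0f}")
print(f"estimated probability of a loss: {loss_prob:.1%}")
```

The run summarizes thousands of scenarios into risk measures (here, expected profit and probability of loss) that would be hard to estimate when the three uncertainties interact.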
Example: The Lajwaab Bakery Shop keeps stock of a popular brand of cake. Previous
experience indicates the daily demand as given below:
Consider the following sequence of random numbers:
Using this sequence, simulate the demand for the next 10 days. Find out the stock situation,
if the owner of the bakery shop decides to make 30 cakes every day. Also estimate the daily
average demand for the cakes on the basis of simulated data.
Solution.
Using the daily demand distribution, we obtain a probability distribution as shown in the
following table.
Table 1
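The demand table and random-number sequence for this example are not reproduced above, so the following sketch uses a hypothetical demand distribution and generated random numbers purely to illustrate the procedure: map two-digit random numbers to demands via cumulative probability ranges, then track the stock position day by day:

```python
import random

# Hypothetical stand-in for the lost demand table: demand -> probability.
demand_dist = {0: 0.05, 10: 0.10, 20: 0.20, 30: 0.30, 40: 0.20, 50: 0.15}

# Build cumulative ranges of two-digit random numbers (00-99), the
# hand-simulation convention this kind of example follows.
ranges, start = [], 0
for demand, p in demand_dist.items():
    width = round(p * 100)
    ranges.append((start, start + width - 1, demand))
    start += width

def demand_for(rn):
    for lo, hi, d in ranges:
        if lo <= rn <= hi:
            return d

random.seed(1)
rns = [random.randint(0, 99) for _ in range(10)]  # stand-in random numbers
simulated = [demand_for(r) for r in rns]

production, stock = 30, 0  # the owner makes 30 cakes every day
for day, d in enumerate(simulated, 1):
    stock += production - d  # negative stock = backlogged shortage
    print(f"day {day}: demand {d}, closing stock {stock}")
print("average daily demand:", sum(simulated) / len(simulated))
```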
7. Linear Programming
Mathematical technique used in computer modeling (simulation) to find the best possible
solution in allocating limited resources (energy, machines, materials, money, personnel,
space, time, etc.) to achieve maximum profit or minimum cost. However, it is applicable only
where all relationships are linear (see linear relationship), and can accommodate only a
limited class of cost functions. For problems involving more complex cost functions, another
technique called 'mixed integer modeling' is employed. Developed by the Russian economist
Leonid Kantorovich (1912-86) and the US economist C. Koopmans (1910-86), on the basis
of the work of the Russian mathematician Andrei Nikolaevich Kolmogorov (1903-87).
Example: Since the company cannot produce negative numbers of calculators, I have the
two constraints, x ≥ 0 and y ≥ 0. But in this case, I can ignore these constraints, because I
already have that x ≥ 100 and y ≥ 80. The exercise also gives
maximums: x ≤ 200 and y ≤ 170. The minimum shipping requirement gives me x + y ≥ 200;
in other words, y ≥ –x + 200. The profit relation will be my optimization equation: P = –2x +
5y. So the entire system is:
maximize P = –2x + 5y, subject to:
100 ≤ x ≤ 200
80 ≤ y ≤ 170
y ≥ –x + 200
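As a check on the system above, a brute-force search over the integer points of the feasible region (a sketch, not part of the original exercise) confirms where the profit P = –2x + 5y is maximized:

```python
# Brute-force search over integer points of the feasible region described
# above: 100 <= x <= 200, 80 <= y <= 170, x + y >= 200.
best = None
for x in range(100, 201):
    for y in range(80, 171):
        if x + y < 200:
            continue                 # violates the minimum shipping requirement
        profit = -2 * x + 5 * y      # the optimization equation P
        if best is None or profit > best[0]:
            best = (profit, x, y)

print(best)  # → (650, 100, 170): profit 650 at x = 100, y = 170
```

As linear-programming theory predicts, the optimum lies at a corner point of the feasible region, which is why graphical and simplex methods need to examine only the vertices.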
8. Sampling Theory
Sampling theory is a study of the relationships existing between a population (or universe)
and the samples drawn from it. Sampling theory is applicable only to random samples. For
this purpose, the universe may be defined as an aggregate of items possessing a
common trait or traits. In other words, a universe is the complete group of items about
which knowledge is sought. The universe may be finite or infinite. A finite universe is one
which has a definite and certain number of items; when the number of items is uncertain
and unlimited, the universe is said to be an infinite universe. Similarly, the universe may be
hypothetical or existent. In the former case the universe in fact does not exist and we can
only imagine the items constituting it. Tossing a coin or throwing a die are examples of a
hypothetical universe. An existent universe is a universe of concrete objects, i.e., one
where the items constituting it really exist. On the other hand, the term sample refers to
that part of the universe which is selected for the purpose of investigation. The theory of
sampling studies the relationships that exist between the universe and the sample or
samples drawn from it.
The main problem of sampling theory is the relationship between a parameter
and a statistic. The theory of sampling is concerned with estimating the properties of the
population from those of the sample, and also with gauging the precision of the estimate.
This movement from the particular (sample) towards the general (universe) is what is
known as statistical induction or statistical inference. In clearer terms, "from the sample we
attempt to draw inferences concerning the universe. In order to be able to follow this
inductive method, we first consider a universe (finite or infinite) and investigate the
behaviour of the samples drawn from this universe applying the laws of probability." The
methodology dealing with all this is known as sampling theory.
Example: An auto analyst is conducting a satisfaction survey, sampling from a list of 10,000
new car buyers. The list includes 2,500 Ford buyers, 2,500 GM buyers, 2,500 Honda buyers,
and 2,500 Toyota buyers. The analyst selects a sample of 400 car buyers by randomly
sampling 100 buyers of each brand. Is this an example of a simple random sample?
(A) Yes, because each buyer in the sample was randomly sampled.
(B) Yes, because each buyer in the sample had an equal chance of being sampled.
(C) Yes, because car buyers of every brand were equally represented in the sample.
(D) No, because every possible 400-buyer sample did not have an equal chance of being
chosen.
(E) No, because the population consisted of purchasers of four different brands of car.
Solution
The fact that each buyer in the sample was randomly sampled is a necessary
condition for a simple random sample, but it is not sufficient. Similarly, the fact that
each buyer in the sample had an equal chance of being selected is characteristic of a
simple random sample, but it is not sufficient. The sampling method in this problem
used random sampling and gave each buyer an equal chance of being selected; but
the sampling method was actually stratified random sampling.
The fact that car buyers of every brand were equally represented in the sample is
irrelevant to whether the sampling method was simple random sampling. Similarly,
the fact that the population consisted of buyers of four different car brands is
irrelevant. The correct answer is (D).
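A short sketch (hypothetical code, not from the source) makes the distinction concrete: stratified sampling fixes exactly 100 buyers per brand, while a simple random sample of 400 gives every possible 400-buyer subset an equal chance and only approximates equal brand representation:

```python
import random
from collections import Counter

random.seed(0)

# 10,000 buyers: 2,500 per brand, as in the example.
brands = ("Ford", "GM", "Honda", "Toyota")
population = [(brand, i) for brand in brands for i in range(2500)]

# Stratified random sampling (the method actually used in the example):
# exactly 100 buyers drawn at random from each brand stratum.
stratified = []
for brand in brands:
    stratum = [p for p in population if p[0] == brand]
    stratified += random.sample(stratum, 100)

# Simple random sampling: 400 drawn from the whole list at once, so
# every possible 400-buyer sample is equally likely to be chosen.
srs = random.sample(population, 400)

print(Counter(b for b, _ in stratified))  # exactly 100 per brand
print(Counter(b for b, _ in srs))         # roughly, not exactly, 100 each
```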
9. Decision Theory
There are three main areas of decision theory, each studying a different type of
decision making. Descriptive decision theory examines how agents, rational or not,
actually make decisions. Prescriptive decision theory tries to provide guidelines for
agents to make the best possible decisions given an uncertain decision-making
framework. Normative decision theory provides guidance for making decisions given a
set of values.