
GENERALIZATION OF SIMPLEX METHOD WITH ANALYTICAL AND
COMPUTATIONAL TECHNIQUES FOR SOLVING LINEAR
PROGRAMMING PROBLEM

By

MONJUR MORSHED
Student No. 100609003P
Registration No. 100609003P, Session: October-2006

MASTER OF PHILOSOPHY
IN
MATHEMATICS

Department of Mathematics
Bangladesh University of Engineering & Technology
Dhaka-1000, Bangladesh
December, 2010
GENERALIZATION OF SIMPLEX METHOD WITH ANALYTICAL AND
COMPUTATIONAL TECHNIQUES FOR SOLVING LINEAR
PROGRAMMING PROBLEM

A thesis submitted to the
Department of Mathematics, BUET, Dhaka-1000
in partial fulfillment of the requirements for the award of the degree of

MASTER OF PHILOSOPHY
IN
MATHEMATICS

By

MONJUR MORSHED
Student No. 100609003P
Registration No. 100609003P, Session: October-2006

Under the supervision of
Dr. Md. Abdul Alim
Associate Professor
Department of Mathematics

Bangladesh University of Engineering & Technology


Dhaka-1000, Bangladesh
December, 2010

The thesis titled

GENERALIZATION OF SIMPLEX METHOD WITH
ANALYTICAL AND COMPUTATIONAL TECHNIQUES FOR
SOLVING LINEAR PROGRAMMING PROBLEM

Submitted by
MONJUR MORSHED
Student No. 100609003P, Registration No. 100609003P, Session: October-2006
a part-time student of M. Phil. (Mathematics), has been accepted as satisfactory in partial
fulfillment of the requirements for the degree of
Master of Philosophy in Mathematics
on December 11, 2010

BOARD OF EXAMINERS

1. ______________________________________
Dr. Md. Abdul Alim Chairman
Associate Professor (Supervisor)
Department of Mathematics, BUET, Dhaka

2. ______________________________________
Head Member
Department of Mathematics, BUET, Dhaka (Ex-Officio)

3. ______________________________________
Dr. Md. Mustafa Kamal Chowdhury Member
Professor
Department of Mathematics, BUET, Dhaka

4. ______________________________________
Dr. Md. Elias Member
Professor
Department of Mathematics, BUET, Dhaka

5. ______________________________________
Dr. Mohammad Babul Hasan Member
Assistant Professor (External)
Department of Mathematics, Dhaka University, Dhaka.

DEDICATION

Dedicated
To
My Parents

Abstract

In this thesis, we have studied Dantzig's established traditional simplex method for
solving the linear programming problem (LP), in which one basic variable is replaced by
one non-basic variable at each simplex iteration. We propose a generalization of the
traditional simplex method in which more than one (P, where P ≥ 1) basic variables are
replaced by non-basic variables at each iteration, and we compare the resulting methods.
To apply these methods to large-scale real-life linear programming problems, we need
computer-oriented implementations of them. To this end, we developed computer
programs for these methods in the Mathematica language and applied them to sizable
large-scale real-life linear programming problems: a production problem of a garment
industry and a textile mill scheduling problem. In this thesis we also developed a
computational technique, using Mathematica code, that displays the feasible region of
two-dimensional linear programming problems accurately and also gives the optimal
solution. Finally, conclusions are drawn in favour of the developed generalized simplex
method.

Author’s Declaration

This is to certify that the work presented in this thesis is the outcome of the investigation
carried out by the author under the supervision of Dr. Md. Abdul Alim, Associate Professor,
Department of Mathematics, Bangladesh University of Engineering and Technology
(BUET), Dhaka-1000 and that it has not been submitted anywhere for the award of any
degree or diploma.

Monjur Morshed

Date: 11 December, 2010

Acknowledgements

The author would like to mention with gratitude Almighty ALLAH’S continual
kindness without which no work would reach its goal.

The author is highly grateful and obliged to his honorable supervisor Dr. Md. Abdul
Alim, Associate Professor, Department of Mathematics, BUET, Dhaka for his continuous
guidance, constant support, supervision, valuable suggestions, inspiration, infinite patience,
friendship and enthusiastic encouragement throughout this work.

The author expresses his deep regards to his honorable teacher, Dr. Md. Abdul Hakim
Khan, Professor and Head, Department of Mathematics, Bangladesh University of
Engineering and Technology for providing help, advice and necessary research facilities.

The author is also grateful to Prof. Dr. Md. Mustafa Kamal Chowdhury, the former
Head of the Department of Mathematics, and Prof. Dr. Md. Elias, Prof. Dr. Md. Abdul
Maleque, Prof. Dr. Monirul Alam Sarkar, and Prof. Dr. Nilufar Farhat Hossain, Department
of Mathematics, BUET, Dhaka, for their wise and liberal co-operation in providing him all
necessary help from the department during his M. Phil. program. The author would also
like to extend his thanks to all respectable teachers of the Department of Mathematics,
BUET, Dhaka for their constant encouragement.

The author thanks the members of the Board of Examiners, namely Prof. Dr. Md.
Abdul Hakim Khan, Prof. Dr. Md. Mustafa Kamal Chowdhury, Prof. Dr. Md. Elias, and Dr.
Mohammad Babul Hasan, for their contributions and for their flexibility and understanding
in helping him meet such an ambitious schedule.

The foundation for his education and success started at home. The author credits his
parents, Muhammed Sirajul Islam and Maleka Begum for shaping him into the person he is
today. Their unwavering love and support throughout his life have given him the confidence
and ability to pursue his academic and personal interests. The author expresses his heartfelt
gratitude and thanks to his beloved wife, sisters, family members and friends for their
constant encouragement during this work.

Finally, the author acknowledges the help and co-operation of all office staff of this
Department.

Contents
Abstract ...............................................................................................................v
Author’s Declaration ....................................................................................... vi
Acknowledgements ......................................................................................... vii
NOMENCLATURE......................................................................................... xi
CHAPTER 1 .......................................................................................................1
INTRODUCTION ................................................................................................................... 1
1.1 Introduction: ....................................................................................................................... 1
1.2 Mathematical Model: ......................................................................................................... 3
1.3 Mathematical Programming: .............................................................................................. 4
1.4 Mathematical Programming problem or Mathematical Program (MP): ............................ 4
1.5 General Mathematical form of Linear Programming (LP): ............................................... 6
1.6 Formulation of Linear Programming Problem:.................................................................. 8
1.7 Standard Linear Programming: ........................................................................................ 10
1.7.1 Reduction to Standard Form: ................................................................................... 12
1.7.2 Feasible Canonical Form:......................................................................................... 13
1.7.3 Relative Profit Factors:............................................................................................. 14
1.7.4 Some Important Theorems of Standard Linear Program: ........................................ 15
1.8 A Real Life Production Problem of a Garment Industry (Standard Group): ................... 15
1.9 A Real Life Problem of a Textile Mill: ............................................................. 20
1.9.1 Introduction: ............................................................................................... 20
1.9.2 Textile Mill Scheduling problem: ............................................................... 20
1.9.3 Formulation of the Textile Mill Scheduling problem: ................................ 21
Chapter 2 ..........................................................................................................25
Linear Programming Models: Graphical and Computer Methods ......................................... 25
2.1 Steps in Developing a Linear Programming (LP) Model: ............................................... 25
2.1.1 Properties of Linear Programming Models: ............................................................. 25
2.1.2 Mathematical Formulation of Linear Programming problem: ................................. 25
2.2 Graphical Method: ........................................................................................................... 26
2.2.1 Real Life Example of Model Formulation (Otobi Furniture Co.): ........................... 26
2.2.2 Graphical Solution: .................................................................................................. 28
2.3 LP Characteristics: ........................................................................................................... 30
2.3.1 Special Situation in LP: ............................................................................................ 30
2.4 Numerical Example-1: ..................................................................................................... 32

2.5 Mathematica Codes for Graphical Representation of Feasible Region: .......................... 33
2.5.1 Numerical Example- 2: ............................................................................................ 33
2.5.2 Numerical Example- 3: ............................................................................................ 35
2.6 Conclusion: ...................................................................................................................... 37
Chapter 3 ..........................................................................................................38
SIMPLEX METHOD AND COMPUTER ORIENTED ALGORITHM FOR SOLVING
LINEAR PROGRAMMING PROBLEMS............................................................................ 38
3.1 Introduction: ..................................................................................................................... 38
3.2 Simplex Method: .............................................................................................................. 38
3.2.1 Computational steps for solving (LP) in simplex method: ...................................... 40
3.2.2 Properties of the Simplex Method:........................................................................... 41
3.2.3 The standard form of (LP) is in canonical form: ...................................................... 42
3.2.4 The Standard Form of (LP) is Not in a Canonical Form:......................................... 43
3.3 Artificial Variable Technique: ......................................................................................... 43
3.3.1 The Big-M Simplex Method: ................................................................................... 43
3.3.2 The Two-Phase Simplex Method: ............................................................................ 44
Chapter 4 ..........................................................................................................46
MORE THAN ONE BASIC VARIABLES REPLACEMENT IN SIMPLEX METHOD
FOR SOLVING LINEAR PROGRAMMING PROBLEMS ................................................ 46
4.1 Paranjape’s Two-Basic Variables Replacement Method for Solving (LP): .................... 46
4.1.1 Algorithm: ................................................................................................................ 46
4.1.2 New Optimizing Value: ........................................................................................... 48
4.1.3 Optimality Condition: .............................................................................................. 49
4.1.4 Criterion-1: (Choices of the entering variables into the basis):................................ 50
4.1.5 Criterion-2: (Choices of the out going variables form the basis): ............................ 50
4.2 Agrawal and Verma’s Three Basic Variables Replacement Method for Solving (LP): .. 51
4.2.1 Algorithm: ................................................................................................................ 51
4.2.2 New Optimizing Value: ........................................................................................... 54
4.2.3 Optimality Condition: .............................................................................................. 55
4.2.4 Criterion-1: (Choices of the entering variables into the basis):................................ 55
4.2.5 Criterion-2: (Choices of the out going variables form the basis): ............................ 56
4.3 Numerical example: ......................................................................................................... 56
Chapter 5 ..........................................................................................................60
GENERALIZATION OF SIMPLEX METHOD FOR SOLVING LINEAR
PROGRAMMING PROBLEMS ........................................................................................... 60
5.1 P-Basic Variables Replacement Method for Solving (LP): ............................................. 60
5.1.1 Algorithm: ................................................................................................................ 60
5.1.2 New Optimizing Value: ........................................................................................... 65
5.1.3 Optimality Condition: .............................................................................................. 66
5.1.4 Criterion-1: (Choices of the entering variables into the basis):................................ 66
5.1.5 Criterion-2: (Choices of the out going variables form the basis): ............................ 67
5.2 The Combined Algorithm: ............................................................................................... 67

5.3 Mathematica Codes: ......................................................................................................... 68
5.3.1 The combined program in Mathematica (Eugere, Wolfram): .................................. 69
5.3.2 Numerical Examples and Comparison: .................................................................... 74
5.4 Solution of LP on a production problem of a garment industry (Standard Group) using
combined program: ................................................................................................................ 76
5.5 Solution of LP on Textile Mill Scheduling problem using combined program: ......... 78
5.6 Conclusion: ...................................................................................................................... 79
Chapter 6 ..........................................................................................................81
COUNTER EXAMPLES OF MORE THAN ONE BASIC VARIABLES REPLACEMENT
AT EACH ITERATION OF SIMPLEX METHOD .............................................................. 81
6.1 Introduction: ..................................................................................................................... 81
6.1.1 Numerical Example 1: .............................................................................................. 81
6.1.2 Numerical Example 2: .............................................................................................. 86
6.2 Conclusion: ....................................................................................................... 91
Chapter 7 ..........................................................................................................92
CONCLUSION ...................................................................................................................... 92
References .............................................................................................................................. 94

NOMENCLATURE

OR Operations Research

LP Linear Programming

LPP Linear Programming Problem

FPP Fractional Programming Problems

LFP Linear Fractional Program

LFPP Linear Fractional Programming Problem

MP Mathematical Program

NLP Non-Linear Program

NLPP Non-Linear Programming Problem

QPP Quadratic Programming Problem


CHAPTER 1

INTRODUCTION
1.1 Introduction:
Mathematical programming, and linear programming in particular, is one of the most
widely used techniques in operations research. Many practical problems in operations
research can be expressed as linear programming (LP) problems. Certain special cases of
linear programming, such as network flow problems and multicommodity flow problems,
are considered important enough to have generated much research on specialized
algorithms for their solution. In many cases its application has been so successful that its
use has become an accepted routine planning tool. It is therefore rather surprising that
comparatively little attention has been paid to the problems of formulating and building
mathematical programming models, as well as to developing computer techniques for
solving linear programming problems.

The study of operations research is of great importance to researchers because
of its applications in many branches of science and engineering. Some of the earlier
researchers studied problems related to optimization techniques. George Bernard
Dantzig developed the simplex method in 1947. The simplex method is an iterative
procedure for solving a linear program in a finite number of steps and provides all the
information about the program. Dantzig (1962) developed a solution method for solving
the linear programming problem (LP) by replacing one basic variable by one non-basic
variable at each simplex iteration. Assuming the compactness of the constraint set S and
applying the transformation y = tx, t ≥ 0, Charnes and Cooper (1962) transformed the
linear fractional program (LFP) into two linear programs, solved either or both of the
linear programs, and hence solved the LFP. Paranjape (1965) developed a method which
replaces two basic variables by two non-basic variables at each iteration of the simplex
method for solving LP. Agrawal and Verma (1977) generalized the method of Paranjape
for solving LP by replacing 3 basic variables at each iteration. Kanchan (1976) extended
Paranjape's method for solving LFP, and Gupta and Sharma (1983) further extended
Kanchan's method for solving the quadratic programming problem (QP). Forhad (2004)
compared different methods for solving the linear fractional programming problem.


In this research we have generalized the simplex method from one-variable
replacement to P-variable replacement, where P ≥ 1. We have also developed a computer
technique for solving LP problems by replacing more than one basic variable by
non-basic variables at each simplex iteration.

To make the thesis self-contained, we first briefly discuss linear programming
models as well as graphical and computer methods in Chapter 2. In this chapter we have
developed a computational technique, using Mathematica code, that shows the feasible
region of two-dimensional linear programming problems and also gives the optimal
solution.

In Chapter 3, we briefly discuss the usual simplex method and a computer-oriented
algorithm for solving linear programming problems.

In Chapter 4, we present the more-than-one basic variable replacement methods of
Paranjape and of Agrawal & Verma for solving the linear programming problem (LP). We
also give a numerical example to demonstrate both methods.

In Chapter 5, we present the generalization of the simplex method for solving linear
programming problems. We also give a combined program in Mathematica for solving
large-scale real-life problems by the more-than-one basic variable replacement methods.

In Chapter 6, we illustrate with some counter-examples the replacement of more than
one basic variable at each iteration of the simplex method, graphically, numerically, and by
using our combined program in the Mathematica programming language.

Thus the method developed in this thesis is an extension of the traditional
simplex-type method, replacing more than one basic variable by non-basic variables at each
simplex iteration. A large-scale LP problem, which involves a large amount of data,
constraints, and variables, cannot be handled analytically with pencil and paper. To
overcome the complexities of large-scale linear programming (LP) problems, we develop a
combined program in Mathematica for solving LP by replacing more than one basic
variable at each iteration of the simplex method. To illustrate this, we solve a sizable
large-scale LP problem, the textile mill scheduling problem, which is formulated in
Section 1.9.3. To present our study, we require the following prerequisites:


1.2 Mathematical Model:


Many applications of science make use of models. The term ‘model’ is usually
used for a structure that has been built purposely to exhibit features and characteristics of
some other object. Generally only some of these features and characteristics will be retained
in the model, depending upon the use to which it is to be put. More often in Operations
Research we will be concerned with abstract models. These models will usually be
mathematical, in that algebraic symbolism will be used to mirror the internal relationships
in the object (often an organization) being modeled. Our attention will mainly be
confined to such mathematical models, although the term ‘model’ is sometimes used
more widely to include purely descriptive models.

The essential feature of a mathematical model in Operations Research is that it
involves a set of mathematical relationships (such as equations, inequalities, logical
dependencies, etc.) which correspond to some down-to-earth relationships in the real world
(such as technological relationships, physical laws, marketing constraints, etc.).

There are a number of motives for building such models:

• The actual exercise of building a model often reveals relationships which were
not apparent to many people. As a result, a greater understanding is achieved of
the object being modeled.
• Having built a model, it is usually possible to analyse it mathematically to help
suggest courses of action which might not otherwise be apparent.
• Experimentation is possible with a model, whereas it is often not possible or
desirable to experiment with the object being modeled. It would clearly be
politically difficult as well as undesirable to experiment with unconventional
economic measures in a country if there was a high probability of disastrous
failure. The pursuit of such courageous experiments would be more (though not
perhaps totally) acceptable on a mathematical model.

It is important to realize that a model is really defined by the relationships which
it incorporates. These relationships are, to a large extent, independent of the data in the
model. A model may be used on many different occasions with differing data, e.g. costs,
technological coefficients, resource availabilities, etc. We would usually still think of it
as the same model even though some coefficients had changed. This distinction is not, of
course, total: radical changes in the data would usually be thought of as changing the
relationships, and therefore the model.


1.3 Mathematical Programming:


Mathematical programming is one of the most widely used techniques in
Operations Research. In many cases its application has been so successful that its use has
passed out of Operations Research departments to become an accepted routine planning
tool. It is therefore rather surprising that comparatively little attention has been paid in
the literature to the problems of formulating and building mathematical programming
models, or even to deciding when such a model is applicable.

It should be pointed out immediately that mathematical programming is very
different from computer programming. Mathematical programming is ‘programming’ in
the sense of ‘planning’. As such, it need have nothing to do with computers. The
confusion over the use of the word ‘programming’ is widespread and unfortunate.
Inevitably, mathematical programming becomes involved with computing, since practical
problems almost always involve large quantities of data and arithmetic which can only
reasonably be tackled by the calculating power of a computer. The correct relationship
between computers and mathematical programming should, however, be understood.

The common feature which mathematical programming models have is that they all
involve optimization: we want to maximize or minimize something. The quantity which we
want to maximize or minimize is known as the objective function. Unfortunately, the
realization that mathematical programming is concerned with optimizing an objective often
leads people to summarily dismiss mathematical programming as being inapplicable in
practical situations where there is no clear objective or there is a multiplicity of
objectives.

In this thesis we confine our attention to a special sort of mathematical
programming model, called a linear programming model, and its related problems.

1.4 Mathematical Programming Problem or Mathematical Program (MP):

A Mathematical Programming problem or Mathematical Program (MP) deals with
the optimization (maximization or minimization) of a function of several variables
subject to a set of constraints (inequalities or equalities) imposed on the values of the
variables.

A general mathematical programming problem can be stated as follows:

(MP) Maximize f(x)                                                  (1.1)

Subject to x ∈ S = {x : g_i(x) ≤ 0, i = 1, 2, ..., m}               (1.2)

where x = (x_1, x_2, ..., x_n)^T is the vector of unknown decision variables and
f(x), g_i(x) (i = 1, 2, ..., m) are real-valued functions of the n real variables
x_1, x_2, ..., x_n. The function f is called the objective function and (1.2) is referred
to as the constraints.

The model of mathematical programming in which all the functions appearing in
it are linear in the decision variables x is called a linear programming problem (LP). Among
mathematical programs, the linear programming problem (LP) is a well-known
optimization technique. The mathematical model of a linear programming problem (in its
canonical form) is as follows:

(LP) Maximize Z = c^T x                                             (1.3)

Subject to x ∈ S = {x ∈ R^n : Ax ≤ b, x ≥ 0}                        (1.4)

where A is an m×n matrix, x, c ∈ R^n, b ∈ R^m, and c^T denotes the transpose of c.

We have stated the MP as a maximization problem. This has been done without any
loss of generality, since a minimization problem can always be converted into a
maximization problem using the identity

min f(x) = -max(-f(x))

i.e., the minimization of f(x) is equivalent to the maximization of (-f(x)).
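This identity can be verified computationally. The following Python sketch (the sample function and discrete grid are illustrative choices of ours, not part of the development above) compares the two formulations:

```python
# Check min f(x) = -max(-f(x)) on a sample function over a finite grid.
# Both the function and the grid below are illustrative assumptions.

def f(x):
    return (x - 3) ** 2 + 1  # convex sample function; minimum value 1 at x = 3

points = [k * 0.1 for k in range(100)]  # grid 0.0, 0.1, ..., 9.9

min_f = min(f(x) for x in points)
neg_max_neg_f = -max(-f(x) for x in points)

print(min_f)                    # minimum of f over the grid
print(min_f == neg_max_neg_f)   # both formulations give the same value
```

This equivalence is what allows the remaining results to be stated for maximization problems only.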


The set S is normally taken as a connected subset of R^n; here the set S is taken
as the entire space R^n. The set X = {x ∈ S : g_i(x) ≤ 0, i = 1, 2, ..., m} is known as the
feasible region, feasible set, or constraint set of the program MP, and any point x ∈ X is a
feasible solution or feasible point of the program MP, since it satisfies all the constraints of
MP. If the constraint set X is empty (i.e. X = φ), then there is no feasible solution; in
this case the program MP is inconsistent.
A feasible point x^0 ∈ X is known as a global optimal solution to the program MP if

f(x) ≤ f(x^0), ∀x ∈ X                                               (1.5)

A global optimal solution x^0 of the program MP is indeed a global maximum point
of the program MP. A point x^0 is said to be a strict global maximum point of f(x) over X
if the strict inequality (<) in (1.5) holds for all x ∈ X with x ≠ x^0.

A point x* ∈ X is a local or relative maximum point of f(x) over X if there exists
some ε > 0 such that

f(x) ≤ f(x*), ∀x ∈ X ∩ N_ε(x*),

where N_ε(x*) is the neighborhood of x* having radius ε. Similarly, a global
minimum and a local minimum can be defined by changing the sense of the inequality.
The MP can be broadly classified into two categories: unconstrained
optimization problems and constrained optimization problems. If the constraint set X is
the whole space R^n, the program MP is known as an unconstrained optimization
problem; in this case, we are interested in finding a point of R^n at which the objective
function has an optimum value. On the contrary, if X is a proper subset of R^n, MP is a
constrained optimization problem.

If both the objective function and the constraint set are linear, then MP is called a
linear programming problem (LPP) or a linear program (LP).
On the other hand, non-linearity of the objective function or constraints gives rise
to a non-linear programming problem or a non-linear program (NLP). Several
algorithms have been developed to solve certain NLPs.

1.5 General Mathematical form of Linear Programming (LP):

The mathematical expression of a general linear programming problem (LP) is
as follows:

(LP) Maximize (or Minimize)  Z = ∑_{j=1}^{n} c_j x_j

Subject to  ∑_{j=1}^{n} a_ij x_j {≤, =, ≥} b_i ;  i = 1, 2, ..., m          (1.6)

where one and only one of the signs ≤, =, ≥ holds for each constraint in (1.6),
and the sign may vary from one constraint to another.

Here c_j (j = 1, 2, ..., n) are called profit (or cost) coefficients, and x_j (j = 1, 2, ..., n) are
called decision variables. The set of feasible solutions to (LP) is

S = {(x_1, x_2, ..., x_n)^T : (x_1, x_2, ..., x_n)^T ∈ R^n and (1.6) holds at (x_1, x_2, ..., x_n)^T}

The set S is called the constraint set, feasible set, or feasible region of (LP).

In matrix-vector notation the above problem can be expressed as:

Maximize (or Minimize) Z = cx

Subject to Ax {≤, =, ≥} b

where A is an m×n matrix, x is an (n×1) column vector, b is an (m×1) column vector,
and c is a (1×n) row vector.

Convex Set:

A set S ⊆ R^n is called a convex set if x^1, x^2 ∈ S implies λx^1 + (1-λ)x^2 ∈ S for all 0 ≤ λ ≤ 1.

The empty and singleton sets are treated as convex sets. A set S is convex precisely when the
line segment joining any two points of S lies in S. It should be noted that the number of points in
a convex set is zero, one, or infinite.
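The definition can be checked empirically on a concrete set. In the Python sketch below, the half-space S = {(x, y) : x + y ≤ 4} and the two chosen points are illustrative assumptions of ours:

```python
# Empirical check of convexity for the half-space S = {(x, y) : x + y <= 4}:
# every convex combination of two points of S must lie in S again.

def in_S(p):
    x, y = p
    return x + y <= 4  # membership test for S

def convex_combination(p1, p2, lam):
    # componentwise lam*p1 + (1 - lam)*p2
    return tuple(lam * a + (1 - lam) * b for a, b in zip(p1, p2))

p1, p2 = (0.0, 1.0), (3.0, 1.0)  # two points of S
for k in range(11):              # lam = 0.0, 0.1, ..., 1.0
    lam = k / 10
    assert in_S(convex_combination(p1, p2, lam))

print("all convex combinations of p1 and p2 lie in S")
```

A finite sample of λ values cannot prove convexity, of course; it merely illustrates the definition.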

Extreme Point:

Let S ⊆ R^n be a convex set. A point x ∈ S is called an extreme point or vertex of S if there
exist no two distinct points x^1 and x^2 in S such that

x = λx^1 + (1-λ)x^2 for some 0 < λ < 1.


1.6 Formulation of Linear Programming Problem:

The procedure for mathematical formulation of a linear programming problem consists of
the following major steps:

Step 1:
Identify the unknown variables to be determined (decision variables) and represent them
in terms of algebraic symbols.

Step 2:
Formulate the other conditions of the problem, such as resource limitations, market
constraints, and inter-relations between variables, as linear equations or inequations in
terms of the decision variables.

Step 3:
Identify the objective or criterion and represent it as a linear function of the decision
variables, which is to be maximized or minimized.

Step 4:
Add the ‘non-negativity’ constraints from the consideration that negative values of the
decision variables do not have any valid physical interpretation.

The objective function, the set of constraints, and the non-negativity constraints together
form a linear programming problem.
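The four steps above can be sketched in code. In the following Python fragment, the product-mix numbers (c, A, b) are a toy example of ours, not a problem taken from this thesis:

```python
# Steps 1-4 applied to a toy problem:
# maximize Z = 3*x1 + 2*x2
# subject to x1 + x2 <= 4, x1 + 3*x2 <= 6, x1 >= 0, x2 >= 0.

c = [3, 2]            # Step 3: objective coefficients
A = [[1, 1], [1, 3]]  # Step 2: constraint coefficients
b = [4, 6]            # Step 2: right-hand sides

def is_feasible(x):
    # Steps 2 and 4: all constraints and non-negativity must hold.
    if any(xj < 0 for xj in x):
        return False
    return all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
               for row, bi in zip(A, b))

def objective(x):
    # Step 3: value of Z at the point x (Step 1: x = (x1, x2)).
    return sum(cj * xj for cj, xj in zip(c, x))

print(is_feasible([4, 0]), objective([4, 0]))  # True 12
print(is_feasible([3, 2]))                     # False: 3 + 2 > 4
```

Once a problem is encoded as the triple (c, A, b), any of the solution methods discussed later can be applied to it.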

We now recall the following basic results on linear programming problems (LP)
from Kambo [1984] and Gass [1984].

Theorem 1.1
The constraint or feasible set of a linear programming problem is a convex set.

Proof:
Consider the linear programming problem

(LP)   Minimize z = Σ_{j=1}^{n} c_j x_j

Subject to

x ∈ S = {x : Σ_{j=1}^{n} a_ij x_j {≤, =, ≥} b_i ; i = 1, 2, ..., m}
      = {x : a_i^T x (≤, =, ≥) b_i ; i = 1, 2, ..., m}

We have to prove that S is a convex set. By definition, S is an intersection of the sets
H, H+ and H−. By a previous theorem we know that the sets H, H+, H−, H+0 and H−0 are all convex sets,
so H, H+ and H− are convex sets. Also by a previous theorem we know that the intersection of any
collection of convex sets is a convex set. So S is a convex set. Hence the theorem is proved.

Theorem 1.2
The set of optimal solutions to the linear programming(L.P) is convex.

Proof:
Let x0 = (x1^0, x2^0, ..., xn^0)^T and y0 = (y1^0, y2^0, ..., yn^0)^T be two optimal solutions to
program (LP). Then c^T x0 = c^T y0 = min z,

where c = (c1, c2, ..., cn)^T. Since x0 and y0 are feasible for (LP) and the feasible set S is
convex, λx0 + (1 − λ)y0 ∈ S for 0 ≤ λ ≤ 1.

Also, c^T (λx0 + (1 − λ)y0)

= λ c^T x0 + (1 − λ) c^T y0

= λ min z + (1 − λ) min z

= min z

Hence λ x0+(1- λ)y0 is also an optimal solution for all 0≤ λ ≤1. This means that the set of all
optimal solutions to the linear programming Problem is a convex set.
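The linearity argument above can be verified numerically. A small Python sketch with hypothetical data (c, x0 and y0 are assumed values, not taken from the thesis) checks that every convex combination of two optimal points attains the same objective value:

```python
# Numerical check of Theorem 1.2 (hypothetical data): if x0 and y0 both
# attain min z, every convex combination attains it too, since c^T x is linear.
c = (1.0, 2.0)
x0, y0 = (2.0, 0.0), (0.0, 1.0)           # assume both optimal with c^T x = 2
obj = lambda x: sum(ci * xi for ci, xi in zip(c, x))
combos = [tuple(l * a + (1 - l) * b for a, b in zip(x0, y0))
          for l in (0.0, 0.25, 0.5, 1.0)]
print(all(abs(obj(p) - 2.0) < 1e-12 for p in combos))
```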

Theorem 1.3 : (Fundamental Theorem)

Let the constraint set T be non-empty closed and bounded. Then an optimal solution to
the Linear Problem (LP) exists and it is attained at a vertex of the constraint set T.


Proof:
Since T is non-empty and compact and Z = c^T x is continuous, an optimal solution
exists. The number of vertices of the convex polyhedron T is finite. Let the vertices of T be
x1,x2,........,xk (xi ∈Rn for all i). Then the set T is equal to the convex hull of the points x1, x2,
............., xk. Thus any feasible point x ∈ T can be written as

x = Σ_{i=1}^{k} λ_i x_i

where λ_i ≥ 0 (i = 1, ..., k) and Σ_{i=1}^{k} λ_i = 1.

Let Z_0 = min { c^T x_i : i = 1, ..., k }. Then for any x ∈ T we obtain

Z = c^T x


  = c^T Σ_{i=1}^{k} λ_i x_i

  = Σ_{i=1}^{k} λ_i c^T x_i

  = λ_1 c^T x_1 + λ_2 c^T x_2 + ... + λ_k c^T x_k

  ≥ Z_0 (λ_1 + λ_2 + ... + λ_k)

  = Z_0

Hence the minimum value of c^T x over T is Z_0, and it is attained at a vertex of T.
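The argument can be illustrated numerically: sampling random convex combinations of the vertices of a small polyhedron (hypothetical 2-D data, not from the thesis), the objective never falls below its minimum over the vertices.

```python
import random

# Sketch of Theorem 1.3: over the convex hull of vertices x1..xk, the
# minimum of c^T x is never smaller than the minimum over the vertices.
vertices = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]   # assumed vertex set
c = (1.0, -1.0)
obj = lambda x: c[0] * x[0] + c[1] * x[1]
z0 = min(obj(v) for v in vertices)                # min over vertices

random.seed(0)
ok = True
for _ in range(200):
    lams = [random.random() for _ in vertices]
    s = sum(lams)
    lams = [l / s for l in lams]                  # lambda_i >= 0, sum = 1
    x = tuple(sum(l * v[i] for l, v in zip(lams, vertices)) for i in range(2))
    ok = ok and obj(x) >= z0 - 1e-9
print(ok)
```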

1.7 Standard Linear Programming:

A problem of the form

(LP1) Maximize z = cTx

Subject to:

Ax = b 1.7

x ≥ 0 1.8


is known as a linear program in standard form. The characteristics of this form are:

All the constraints are expressed in the form of equations, except for the non-negative
restrictions.

The right hand side of each constraint equation is non-negative.

In (LP1), the m × n matrix A = (a ij ) is the coefficient matrix of the equality


constraints, b = (b1, b2, ..., bm)^T is the vector of right-hand-side constants, the
components of c are the profit factors, x = (x1, x2, ..., xn)^T ∈ Rn is the vector of
variables, called the decision variables, and (1.8) gives the non-negativity constraints. The
column vectors of the matrix A are referred to as activity vectors. We recall the following
definitions for the standard linear program.

Feasible Solution:

A vector x = (x1, x2, ..., xn)^T is a feasible solution of the standard linear program (LP1) if it
satisfies conditions (1.7) and (1.8).

Basic Solution:

A basic solution to (1.7) is a solution obtained by setting (n − m) variables equal to
zero and solving for the remaining m variables, provided the determinant of the
coefficients of these m variables is non-zero. The m variables are called basic variables.

Basic Feasible Solution:

A basic feasible solution is a basic solution which also satisfies (1.8), that is, all
basic variables are non-negative.

Degenerate Solution:

A basic feasible solution to (1.7) is called degenerate if one or more of the basic
variables are zero.

Non- degenerate Basic Feasible Solution:

A non-degenerate basic feasible solution is a basic feasible solution with exactly
m positive xi, that is, all basic variables are positive.


Optimal Solution:

A basic feasible solution is said to be an optimal solution if it maximizes the
objective function while satisfying conditions (1.7) and (1.8), provided the maximum
value exists.

1.7.1 Reduction to Standard Form:

Every general linear program can be reduced to an equivalent standard linear


program as explained below.

( i) Conversion of right hand side constraint to non-negative

If a right-hand-side constant of a constraint is negative, it can be made non-negative by
multiplying both sides of the constraint by −1 (if necessary).

( ii) Conversion of inequality constraint to equality

Slack Variable:

An inequality constraint of the form

Σ_{j=1}^{n} a_ij x_j ≤ b_i   (i = 1, 2, ......, m ; b_i ≥ 0)

can be made an equation by adding a non-negative variable x_{n+i}:

Σ_{j=1}^{n} a_ij x_j + x_{n+i} = b_i   (i = 1, 2, ......., m)

and the non-negative variable x_{n+i} is called the slack variable.

Surplus Variable :

An inequality constraint of the form

Σ_{j=1}^{n} a_ij x_j ≥ b_i   (i = 1, 2, ......, m ; b_i ≥ 0)

can be made an equation by subtracting a non-negative variable x_{n+i}:

Σ_{j=1}^{n} a_ij x_j − x_{n+i} = b_i   (i = 1, 2, ......., m)

and the non-negative variable x_{n+i} is called the surplus variable.

(iii) Making All Variables Non-Negative

All variables in the equivalent linear program can be made non-negative as follows:

i) If xi ≤ 0, then put xi′ = −xi; clearly xi′ ≥ 0.

ii) If xi is unrestricted in sign (i.e. a free variable), then

put xi = xi′ − xi″, where xi′, xi″ ≥ 0.

(iv) Conversion of a Minimization Problem

Since min f(x) = −max{−f(x)},

the minimization of f(x) over F is equivalent to the maximization of −f(x) over F. This
enables us to convert a minimization problem into the equivalent maximization problem
(if necessary).
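Steps (i)–(iv) above can be mechanized. The following Python sketch (the helper name `add_slack` and the sample constraints are illustrative assumptions, not from the thesis) converts inequality rows into equality rows by appending slack or surplus columns:

```python
# Minimal sketch of step (ii): append one slack (+1) or surplus (-1) column
# per inequality constraint, turning each row into an equation.
def add_slack(rows, senses):
    """rows: list of (coefficients, rhs); senses: '<=', '>=' or '='."""
    n_extra = sum(1 for s in senses if s != '=')
    out, k = [], 0
    for (a, b), s in zip(rows, senses):
        extra = [0.0] * n_extra
        if s == '<=':
            extra[k] = 1.0    # slack variable column
            k += 1
        elif s == '>=':
            extra[k] = -1.0   # surplus variable column
            k += 1
        out.append((list(a) + extra, b))
    return out

# Two assumed constraints: 3x1 + 2x2 <= 36 and x1 + 2x2 <= 20
std = add_slack([([3, 2], 36), ([1, 2], 20)], ['<=', '<='])
print(std[0][0])   # [3, 2, 1.0, 0.0]
print(std[1][0])   # [1, 2, 0.0, 1.0]
```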

1.7.2 Feasible Canonical Form:

Consider the constraints (1.7), i.e. Ax = b, and suppose they are consistent and rank(A) = m (< n).
Let B be any non-singular m × m submatrix made up of the columns of A, and let R be the
remaining portion of A. Further, suppose that x_B is the vector of variables associated
with the columns of B and x_NB is the vector of the remaining variables. Then (1.7) can be written as

[B, R] (x_B, x_NB)^T = b

or, B x_B + R x_NB = b.

That is, the solution of (1.7) is given by

x_B = B^{-1}b − B^{-1}R x_NB

or, x_B + B^{-1}R x_NB = B^{-1}b 1.9

where the (n − m) variables x_NB can be assigned arbitrary values. The form (1.9) of the
constraints is called the canonical form in the variables x_B. The particular solution of
(1.7) given by


x_B = B^{-1}b , x_NB = 0 1.10

is called the basic solution to the system Ax = b with respect to the basis matrix B. The
variables x_NB are known as the non-basic variables and the variables x_B are said to be
the basic variables. It should be noted that the columns of A associated with the basis
matrix B are linearly independent and that all non-basic variables are zero in a basic
solution. The basic solution given by (1.10) is feasible if x_B ≥ 0.
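A hypothetical 2 × 2 illustration of computing the basic solution x_B = B^{-1}b with the non-basic variables fixed at zero (the basis and right-hand side are assumed sample data; the explicit 2 × 2 inverse avoids any external library):

```python
# Solve B x_B = b for a 2x2 basis via the explicit inverse formula.
def solve_2x2(B, b):
    (a, c), (d, e) = B
    det = a * e - c * d
    assert det != 0, "basis columns must be linearly independent"
    return ((e * b[0] - c * b[1]) / det, (a * b[1] - d * b[0]) / det)

B = ((3.0, 2.0), (1.0, 2.0))   # assumed basis matrix (columns of A)
b = (36.0, 20.0)
xB = solve_2x2(B, b)
print(xB)   # (8.0, 6.0): basic solution, feasible since both components >= 0
```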

1.7.3 Relative Profit Factors:


Suppose that there exists a feasible solution to the constraint (1.7) and (1.8). The
coefficients of the variables in the objective function z, after the basic variables from it
have been eliminated, are called relative profit factors. In order to find relative profit
factors corresponding to basis matrix B, we partition the profit vector c as
cT = (C B T, C NB T), where c B and c NB are the profit vectors corresponding to the variables
x B and x NB The objective function then is

z = c^T x

  = c_B^T x_B + c_NB^T x_NB 1.11

Substituting in this equation the value of x_B from (1.9), we get

z = c_B^T B^{-1}b − c_B^T B^{-1}R x_NB + c_NB^T x_NB

  = z̄ − (c_B^T B^{-1}R − c_NB^T) x_NB

  = z̄ − c̄_B^T x_B − c̄_NB^T x_NB

  = z̄ − c̄^T x

where
c̄ = (c̄_B, c̄_NB)^T,

c̄_B = 0,

c̄_NB^T = c_B^T B^{-1}R − c_NB^T,

z̄ = c_B^T B^{-1}b.


Here c̄ is the vector of relative profit factors corresponding to the basis matrix B,
and z̄ is the value of the objective function at the basic solution given by (1.10). Observe
that the components of c̄ corresponding to the basic variables are zero, which ought to be
the case, as is evident from the definition of c̄.
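A small numeric sketch of these formulas (the data are a hypothetical two-constraint example with the slack basis B = I, so B^{-1}R = R and B^{-1}b = b; not an example from the thesis):

```python
# Relative profit factors: cbar_NB^T = c_B^T B^{-1} R - c_NB^T,
# and zbar = c_B^T B^{-1} b, here with B = I (slack basis).
cB  = (0.0, 0.0)                  # profits of the basic (slack) variables
cNB = (5.0, 8.0)                  # profits of the non-basic variables
R   = ((3.0, 2.0), (1.0, 2.0))    # non-basic columns of A
b   = (36.0, 20.0)

cbar = tuple(sum(cB[i] * R[i][j] for i in range(2)) - cNB[j] for j in range(2))
zbar = sum(cB[i] * b[i] for i in range(2))
print(cbar, zbar)   # (-5.0, -8.0) 0.0 : negative factors signal improvement
```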

1.7.4 Some Important Theorems of Standard Linear Program:


We now state the following results from Kambo [1984].

Theorem 1.4: If a standard linear program with the constraints Ax = b and x ≥ 0,
where A is an m × n matrix of rank m, has a feasible solution, then it also has a basic
feasible solution.

Theorem 1.5: Let F be a convex polyhedron consisting of all vectors x ∈ Rn satisfying


the system Ax = b, x ≥ 0, where A is an m × n matrix of rank m. Then, x is an extreme
point of F if and only if x is a basic feasible solution to the system.

The above theorem ensures that every basic feasible solution to an (LP) is an extreme
point of the convex set of feasible solutions to the problem, and that every extreme point
of this convex set is a basic feasible solution to the system, and vice versa.

1.8 A Real Life Production Problem of a Garment Industry


(Standard Group):

Standard Group is one of the prominent garment industries in Bangladesh, with a
huge contribution to our national GDP. The company wishes to expand its
business activities across national boundaries. The owner of Standard Group has
$400,000 with which he can produce at most 1500 pieces of garment items per day. The
owner wishes to produce different garment items (Men's long sleeve shirt, Men's short
sleeve shirt, Men's long pant, Men's shorts, Ladies long pant, Ladies shorts, Boys long
pant, Boys shorts, Men's boxer, Men's fleece jacket, Men's jacket, Ladies jacket, Boys
jacket). He has the following data per piece:


S/N  Name of garment item      Fabrics   Accessories  Washing   Packaging  Labor/CM  Mgmt/prod./    Total     Return
                               cost ($)  cost ($)     cost ($)  cost ($)   cost ($)  overhead ($)   cost ($)  ($)
1    Men's long sleeve shirt   2.90      .25          .18       .18        .90       .18            4.59      7.09
2    Men's short sleeve shirt  2.20      .22          .25       .20        1.00      .25            4.12      6.32
3    Men's long pant           3.50      .30          .20       .17        .85       .22            5.24      8.24
4    Men's shorts              3.00      .25          .22       .19        1.00      .23            4.89      7.59
5    Ladies long pant          3.20      .30          .18       .18        .90       .20            4.96      7.76
6    Ladies shorts             2.75      .06          .20       .18        .90       .18            4.27      6.57
7    Boys long pant            2.70      .25          .17       .18        .80       .16            4.26      7.46
8    Boys shorts               2.20      .15          .05       .10        .25       .05            2.80      4.90
9    Men's boxer               1.00      .30          .15       .20        .80       .20            2.65      3.65
10   Men's fleece jacket       3.20      .75          .40       .45        2.00      .50            7.30      10.80
11   Men's jacket              5.20      .60          .35       .40        1.80      .40            8.75      13.75
12   Ladies jacket             4.40      .50          .30       .35        1.50      .30            7.35      12.85
13   Boys jacket               3.70      .20          .20       1.00       .20       .25            5.55      11.55

In addition, the group of industries has the following limitations of expenditures:


Maximum investment for fabrics is $ 4050
Maximum investment for accessories is $ 1200
Maximum investment for washing is $ 800
Maximum investment for packaging is $ 720
Maximum investment for labor/CM is $ 2200
Maximum investment for Management/production/overhead is $ 880

The industry also has a fixed daily expenditure of $4300.


Determine how many pieces of each garment item should be produced for maximum daily
profit.
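A quick consistency spot-check of the cost table can be done in a few lines of Python (a hypothetical verification script, assuming the column order Fabrics, Accessories, Washing, Packaging, Labor/CM, Management, which matches the constraint coefficients used below): the Total cost column is the sum of the six component costs, and Return minus Total gives the per-piece profit.

```python
# Spot-check three rows of the cost table:
# item: (fabrics, accessories, washing, packaging, labor, mgmt, total, return)
rows = {1:  (2.90, .25, .18, .18, .90, .18, 4.59, 7.09),
        3:  (3.50, .30, .20, .17, .85, .22, 5.24, 8.24),
        13: (3.70, .20, .20, 1.0, .20, .25, 5.55, 11.55)}

for item, r in rows.items():
    assert abs(sum(r[:6]) - r[6]) < 1e-9   # components add up to Total cost

profit = {i: round(r[7] - r[6], 2) for i, r in rows.items()}
print(profit)   # {1: 2.5, 3: 3.0, 13: 6.0} -- per-piece profits
```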

The objective is to maximize the profit. This leads to a LP.


Formulation:

The three basic steps in constructing a LP model are as follows:


Step1: Identify the unknown variables to be determined (decision variables) and
represent them in terms of algebraic symbols.
Step 2: Identify all the restrictions or constraints in the problem and express them as
linear equations or inequalities, which are linear functions of the unknown variables.
Step 3: Identify the objective or criterion and represent it as a linear function of the
decision variables, which is to be maximized (or minimized).

Now, we shall formulate the above problem as follows:

Step 1: (Identify the Decision variables)


For this problem the unknown variables are the numbers of RMG items produced for the
different products. So, let
x 1 = The number of RMG items- Men’s long sleeve shirt need to be produced
x 2 = The number of RMG items- Men’s short sleeve shirt need to be
produced
x 3 = The number of RMG items- Men’s long pant need to be produced
x 4 = The number of RMG items- Men’s shorts need to be produced
x 5 = The number of RMG items- Ladies long pant need to be produced
x 6 = The number of RMG items- Ladies shorts need to be produced
x 7 = The number of RMG items- Boys long pant need to be produced
x 8 = The number of RMG items- Boys shorts need to be produced
x 9 = The number of RMG items- Men’s boxer need to be produced
x 10 = The number of RMG items- Men’s fleece jacket need to be produced
x 11 = The number of RMG items- Men’s jacket need to be produced
x 12 = The number of RMG items- Ladies jacket need to be produced
and x 13 = The number of RMG items- Boys jacket need to be produced


Step 2: (Identify the Constraint)


In this problem the constraints are the limited availability of funds for different purposes,
as follows:
1. Since the company wishes to produce at most 1500 pieces of RMG items, we have
x1 + x 2 + x3 + x 4 + x5 + x6 + x7 + x8 + x9 + x10 + x11 + x12 + x13 ≤ 1500
2. Since the company's maximum investment for fabrics is $4050, we have
2.90 x1 + 2.20 x 2 + 3.50 x3 + 3.00 x 4 + 3.20 x5 + 2.75 x6 + 2.70 x7 + 2.20 x8 + x9 + 3.20 x10
+ 5.20 x11 + 4.40 x12 + 3.70 x13 ≤ 4050

3. Since the company's maximum investment for accessories is $1200, we have

.25 x1 + .22 x2 + .30 x3 + .25 x4 + .30 x5 + .06 x6 + .25 x7 + .15 x8 + .30 x9
+ .75 x10 + .60 x11 + .50 x12 + .20 x13 ≤ 1200

4. Since the company's maximum investment for washing is $800, we have

.18 x1 + .25 x2 + .20 x3 + .22 x4 + .18 x5 + .20 x6 + .17 x7 + .05 x8 + .15 x9
+ .40 x10 + .35 x11 + .30 x12 + .20 x13 ≤ 800

5. Since the company's maximum investment for packaging is $720, we have
.18 x1 + .20 x 2 + .17 x3 + .19 x 4 + .18 x5 + .18 x6 + .18 x7 + .10 x8 + .20 x9
+ .45 x10 + .40 x11 + .35 x12 + .10 x13 ≤ 720

6. Since the company's maximum investment for labor/CM is $2200, we have

.90 x1 + x2 + .85 x3 + x4 + .90 x5 + .90 x6 + .80 x7 + .25 x8 + .80 x9
+ 2.0 x10 + 1.8 x11 + 1.5 x12 + .20 x13 ≤ 2200

7. Since the company's maximum investment for management/production/overhead is $880,
we have

.18 x1 + .25 x2 + .22 x3 + .23 x4 + .20 x5 + .18 x6 + .16 x7 + .05 x8 + .20 x9
+ .50 x10 + .40 x11 + .30 x12 + .25 x13 ≤ 880
We must assume that the variables x i , i=1,2, …….,13 are not allowed to be negative.
That is, we do not make negative quantities of any product.
Step 3: (Identify the objective)
In this case, the objective is to maximize the profit by different RMG items. That is,


Maximize F ( x) = 2.5 x1 + 2.2 x 2 + 3 x3 + 2.7 x 4 + 2.8 x5 + 2.3 x 6 + 3.2 x 7


+ 2.1x8 + x9 + 3.5 x10 + 5 x11 + 5.5 x12 + 6 x13

Now we have expressed our problem as a mathematical model. Since the objective
is to maximize the profit from the different RMG items and all of the constraint
functions are linear, the problem can be modeled as the following LP model:

Maximize F ( x ) = 2.5 x1 + 2.2 x 2 + 3x3 + 2.7 x 4 + 2.8 x5 + 2.3x 6 + 3.2 x 7


+ 2.1x8 + x9 + 3.5 x10 + 5 x11 + 5.5 x12 + 6 x13

Subject to
x1 + x 2 + x3 + x 4 + x5 + x 6 + x 7 + x8 + x9 + x10 + x11 + x12 + x13 ≤ 1500

2.90 x1 + 2.20 x 2 + 3.50 x3 + 3.00 x 4 + 3.20 x5 + 2.75 x 6 + 2.70 x 7 + 2.20 x8 + x9 + 3.20 x10
+ 5.20 x11 + 4.40 x12 + 3.70 x13 ≤ 4050

.25 x1 + .22 x2 + .30 x3 + .25 x4 + .30 x5 + .06 x6 + .25 x7 + .15 x8 + .30 x9
+ .75 x10 + .60 x11 + .50 x12 + .20 x13 ≤ 1200

.18 x1 + .25 x2 + .20 x3 + .22 x4 + .18 x5 + .20 x6 + .17 x7 + .05 x8 + .15 x9
+ .40 x10 + .35 x11 + .30 x12 + .20 x13 ≤ 800

.18 x1 + .20 x 2 + .17 x3 + .19 x 4 + .18 x5 + .18 x 6 + .18 x 7 + .10 x8 + .20 x9


+ .45 x10 + .40 x11 + .35 x12 + .10 x13 ≤ 720

.90 x1 + x2 + .85 x3 + x4 + .90 x5 + .90 x6 + .80 x7 + .25 x8 + .80 x9
+ 2.0 x10 + 1.8 x11 + 1.5 x12 + .20 x13 ≤ 2200

.18 x1 + .25 x2 + .22 x3 + .23 x4 + .20 x5 + .18 x6 + .16 x7 + .05 x8 + .20 x9
+ .50 x10 + .40 x11 + .30 x12 + .25 x13 ≤ 880

x1 , x 2 , x3 , x 4 , x5 , x6 , x7 , x8 , x9 , x10 , x11 , x12 , x13 ≥ 0

Thus the given problem has been formulated as a LP. We will solve this formulated
problem by using our developed computer program.


1.9 A Real Life Problem of a Textile Mill:


1.9.1 Introduction:
Linear programming has proven to be one of the most successful quantitative
approaches to decision making. Applications have been reported in almost every
industry. Problems studied include production scheduling, media selection, financial
planning, capital budgeting, product mix, blending and many others. As the variety of
applications suggests, linear programming is a flexible problem-solving tool.

In this section we present a real-life problem called Textile Mill
Scheduling, introduced by Jeffrey D. Camm, P. M. Dearing and Suresh K.
Tadisina, which is given as an exercise in Anderson [2000]. We
formulate this problem and solve it by using our MATHEMATICA
computer program.

1.9.2 Textile Mill Scheduling problem:


The Scottsville Textile Mill produces five different fabrics. Each fabric can be
woven on one or more of the mill's 38 looms. The sales department has forecast demand
for the next month. The demand data are shown in Table 1, along with data on the selling
price per yard, manufacturing cost per yard and purchase price per yard. The mill
operates 24 hours a day and is scheduled for 30 days during the coming month.

Table 1

Fabric   Demand    Selling Price   Manufacturing    Purchase Price
         (yards)   ($/yard)        Cost ($/yard)    ($/yard)
1        16,500    0.99            0.66             0.80
2        22,000    0.86            0.55             0.70
3        62,000    1.10            0.49             0.60
4        7,500     1.24            0.51             0.70
5        62,000    0.70            0.50             0.70


The mill has two types of looms: dobbie and regular. The dobbie looms are more
versatile and can be used for all five fabrics. The regular looms can produce only three of
the fabrics.
The mill has a total of 38 looms: 8 are dobbie and 30 are regular. The rate of
production for each fabric on each type of loom is given in Table 2. The time required to
change over from producing one fabric to another is negligible and does not have to be
considered.

Table- 2
Loom Production Rates (yards/hour)
Fabric Dobbie Regular
1 4.63 -----
2 4.63 -----
3 5.23 5.23
4 5.23 5.23
5 4.17 4.17

The Scottsville Textile Mill satisfies all demand with either its own
fabric or fabric purchased from another mill. That is, fabric that cannot be woven
at the Scottsville mill because of limited loom capacity will be purchased
from another mill.

Determine how many of each fabric should be woven and how many
should be purchased for maximum monthly profit.

1.9.3 Formulation of the Textile Mill Scheduling problem:

Step1: Identify the Decision Variables

Let x 11 = amount of Fabric-1 Woven by Dobbie looms


x 12 = amount of Fabric-1 Purchased from another mill
x 21 = amount of Fabric-2 Woven by Dobbie looms.
x 22 = amount of Fabric-2 Purchased from another mill
x 31 = amount of Fabric-3 Woven by Dobbie looms.


x 32 = amount of Fabric-3 Woven by Regular looms

x 33 = amount of Fabric-3 Purchased from another mill
x 41 = amount of Fabric-4 Woven by Dobbie looms
x 42 = amount of Fabric-4 Woven by Regular looms
x 43 = amount of Fabric-4 Purchased from another mill
x 51 = amount of Fabric-5 Woven by Dobbie looms
x 52 = amount of Fabric-5 Woven by Regular looms
x 53 = amount of Fabric-5 Purchased from another mill.

Step 2: Identify the Constraints

The market demand for Fabric-1 is 16,500 yards, and the textile mill
satisfies all demand with either its own fabric or fabric purchased from
another mill; thus the demand constraint for Fabric-1 is
x 11 + x 12 = 16,500
The other demand constraints, for Fabrics 2, 3, 4 and 5, are
x 21 + x 22 = 22,000
x 31 + x 32 + x 33 = 62,000
x 41 + x 42 + x 43 = 7,500
x 51 + x 52 + x 53 = 62,000

There are 8 dobbie looms, and every loom works 24 hours a day and 30 days in a month.
Thus the total dobbie loom time = 8 × 24 × 30 = 5760 hours.
All five fabrics can be woven on dobbie looms.
The loom production rate for Fabric-1 is 4.63 yards/hour. Thus 4.63 yards of Fabric-1
are produced in 1 hour, so x 11 yards of Fabric-1 require x 11 /4.63 hours.
Hence the time requirement for Fabric-1 is 0.22x 11 hours.
Similarly, Fabric-2, Fabric-3, Fabric-4 and Fabric-5 need 0.22x 21 hours,
0.20x 31 hours, 0.20x 41 hours and 0.24x 51 hours respectively.
Thus the total time requirement is
0.22x 11 + 0.22x 21 + 0.20x 31 + 0.20x 41 + 0.24x 51 , which should not exceed the
available dobbie loom time of 5760 hours. So the constraint becomes


0.22x 11 + 0.22x 21 + 0.20x 31 + 0.20x 41 + 0.24x 51 ≤ 5760.

Again, the 30 regular looms have 30 × 24 × 30 = 21600 hours. Regular looms can make
Fabric-3, Fabric-4 and Fabric-5, which require 0.20x 32 hours, 0.20x 42 hours and
0.24x 52 hours respectively.
Thus the time constraint becomes

0.20 x 32 + 0.20 x 42 + 0.24 x 52 ≤ 21600
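The arithmetic behind these time coefficients can be checked directly (a short Python sketch, not part of the thesis; hours per yard is the reciprocal of the production rate in yards/hour):

```python
# Total dobbie loom hours and the per-yard time coefficients.
dobbie_hours = 8 * 24 * 30
print(dobbie_hours)         # 5760

print(round(1 / 4.63, 2))   # 0.22 hours per yard of Fabric-1 (and Fabric-2)
print(round(1 / 4.17, 2))   # 0.24 hours per yard of Fabric-5
```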

Step 3 : Identify Objective function

The objective is to maximize the total profit from sales. The selling
price of Fabric-1 is $0.99/yard, so the profit on each woven yard of Fabric-1 is
0.99 − 0.66 = $0.33/yard, and the profit from x 11 yards is $0.33x 11 .
Again, the purchase price is $0.80/yard, so the profit on each purchased yard of Fabric-1 is
0.99 − 0.80 = $0.19/yard, and the profit from x 12 yards is $0.19x 12 .
Similarly, the profits from the remaining decision variables are
0.31x 21 , 0.16x 22 , 0.61x 31 , 0.61x 32 , 0.50x 33 , 0.73x 41 , 0.73x 42 , 0.54x 43 , 0.20x 51 , 0.20x 52 ,
0.00x 53 .

Thus the objective function to maximize the total profit is

Z = 0.33 x 11 + 0.19x 12 + 0.31x 21 + 0.16 x 22 + 0.61 x 31 + 0.61 x 32 +


0.50 x 33 + 0.73 x 41 + 0.73x 42 + 0.54 x 43 + 0.20x 51 + 0.20x 52 + 0.00 x 53 .
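The profit coefficients above can be recomputed from Table 1 (a small verification sketch in Python, not part of the thesis): selling price minus manufacturing cost for woven yards, and selling price minus purchase price for purchased yards.

```python
# fabric: (selling, manufacturing, purchase), all in $/yard, from Table 1
table = {1: (0.99, 0.66, 0.80), 2: (0.86, 0.55, 0.70), 3: (1.10, 0.49, 0.60),
         4: (1.24, 0.51, 0.70), 5: (0.70, 0.50, 0.70)}

woven  = {f: round(s - m, 2) for f, (s, m, p) in table.items()}  # mill-made
bought = {f: round(s - p, 2) for f, (s, m, p) in table.items()}  # purchased

print(woven[1], bought[1])   # 0.33 0.19  -> coefficients of x11, x12
print(woven[5], bought[5])   # 0.2 0.0    -> coefficients of x51/x52, x53
```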

Step-4 : Identify non-negative constraints

Since the amounts of fabric x ij cannot be negative, we have to
restrict the variables to be non-negative. That is, x ij ≥ 0 (where i = 1, 2, 3, 4, 5 and
j = 1, 2) and x 33 , x 43 , x 53 ≥ 0.

Hence the linear programming model for our Textile mill Scheduling problem becomes
Maximize


Z = 0.33 x 11 + 0.19 x 12 + 0.31 x 21 + 0.16 x 22 + 0.61 x 31 + 0.61 x 32 + 0.50

x 33 + 0.73 x 41 + 0.73 x 42 + 0.54 x 43 + 0.20 x 51 + 0.20 x 52 + 0.00 x 53

Subject to

x 11 + x 12 = 16500

x 21 + x 22 = 22000

x 31 + x 32 + x 33 = 62000

x 41 + x 42 + x 43 = 7500

x 51 + x 52 + x 53 = 62000

0.22x 11 + 0.22x 21 + 0.20x 31 + 0.20x 41 + 0.24x 51 ≤ 5760.

0.20 x 32 + 0.20 x 42 + 0.24 x 52 ≤ 21600

x ij ≥ 0 ( where i = 1,2,3,4,5 & j = 1,2 ) & x 33 , x 43 , x 53 ≥0.

Thus the given problem has been formulated as a LP. We will solve this formulated
problem by using our developed computer program.

Chapter 2

LINEAR PROGRAMMING MODELS: GRAPHICAL AND


COMPUTER METHODS
2.1 Steps in Developing a Linear Programming (LP) Model:

There are three steps in developing a Linear Programming (LP) model:

1) Formulation
2) Solution
(i) Graphical Method
(ii) Numerical Method
3) Interpretation and Sensitivity Analysis
2.1.1 Properties of Linear Programming Models:
1) Seek to minimize or maximize
2) Include “constraints” or limitations
3) There must be alternatives available
4) All equations are linear
2.1.2 Mathematical Formulation of Linear Programming problem:
Linear programming deals with the optimization of a function of variables known
as the objective function, subject to a set of linear equalities/inequalities known as constraints.
The objective function may be profit, loss, cost, production capacity or any other measure
of effectiveness, which is to be obtained in the best possible or optimal manner. The
constraints may be imposed by different sources such as market demand, production
processes and equipment, storage capacity, raw material availability, etc. By linearity is
meant a mathematical expression in which the variables appear only to the first power.

Linear programming is used for optimization problems that satisfy the following
conditions:

1. There is a well defined objective function to be optimized and which can be


expressed as a linear function of decision variables.
2. There are constraints on the attainment of the objective and they are capable of
being expressed as linear equalities/inequalities in terms of variables.
3. There are alternative courses of action.
4. The decision variables are interrelated and non-negative.
5. Resources are in limited supply.

2.2 Graphical Method:


A linear programming problem with only two variables presents a simple case, for
which the solution can be derived using a graphical method. This method consists of
the following steps:

Step-1. Represent the given problem in mathematical form, i.e. , formulate an L.P.
model for the given problem.

Step-2. Represent the given constraints as equalities on the x 1 , x 2 coordinate plane and
find the convex region formed by them.

Step-3. Plot the objective function.

Step-4. Find the vertices of the convex region and also the value of the objective function
at each vertex. The vertex that gives the optimum value of the objective function gives the
optimal solution to the problem.

In general, a linear programming problem may have

(i) A definite and unique optimal solution,


(ii) An infinite number of optimal solutions,
(iii) An unbounded solution, and
(iv) No solution.

2.2.1 Real Life Example of Model Formulation (Otobi Furniture Co.):


Otobi is one of the largest and most reputed furniture companies in Bangladesh. It
started its operations in 1975 and produces diversified furniture products
in different sections. Here we collected data from one section, which produces only chairs
and tables. The company has the following data:

                      Tables        Chairs
                      (per table)   (per chair)   Hours Available

Profit Contribution   $7            $5
Carpentry             3 hrs         4 hrs         2400
Painting              2 hrs         1 hr          1000

Other Limitations:
• Make no more than 450 chairs
• Make at least 100 tables
Determine how many of each furniture item should be produced for maximum daily
profit.

Formulation:

Decision Variables:
T = Num. of tables to make
C = Num. of chairs to make
Objective Function: Maximize Profit
Maximize $7 T + $5 C
Constraints:
• Have 2400 hours of carpentry time available
3 T + 4 C < 2400 (hours)
• Have 1000 hours of painting time available
2 T + 1 C < 1000 (hours)

More Constraints:
• Make no more than 450 chairs
C < 450 (num. chairs)
• Make at least 100 tables
T > 100 (num. tables)

Nonnegativity:
Cannot make a negative number of chairs or tables

T>0
C>0
Model Summary:
Max 7T + 5C (profit)
Subject to the constraints:
3T + 4C < 2400 (carpentry hrs)
2T + 1C < 1000 (painting hrs)
C < 450 (max. chairs)
T > 100 (min. tables)
T, C > 0 (non-negativity)
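The corner-point property can be checked by brute force before looking at the graphs. A short Python sketch (the vertex list is computed by hand from the constraint intersections, an assumption to be confirmed against the figures):

```python
# Evaluate the Otobi model's profit at every corner point of the feasible region.
profit = lambda T, C: 7 * T + 5 * C
feasible = lambda T, C: (3*T + 4*C <= 2400 and 2*T + C <= 1000
                         and C <= 450 and T >= 100 and C >= 0)

corners = [(100, 0), (500, 0), (320, 360), (200, 450), (100, 450)]
assert all(feasible(T, C) for T, C in corners)   # every vertex is feasible

best = max(corners, key=lambda p: profit(*p))
print(best, profit(*best))   # (320, 360) 4040
```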

2.2.2 Graphical Solution:


• Graphing an LP model helps provide insight into LP models and their solutions.

• While this can only be done in two dimensions, the same properties apply to all LP
models and solutions.
[Figure-1: Carpentry constraint line 3T + 4C = 2400, with intercepts (T = 0, C = 600) and
(T = 800, C = 0); points below the line (< 2400 hrs) are feasible, points above it are infeasible.]

[Figure-2: Painting constraint line 2T + 1C = 1000, with intercepts (T = 0, C = 1000) and
(T = 500, C = 0).]

[Figure-3: Maximum-chair line C = 450 and minimum-table line T = 100, together with the
resulting feasible region.]

[Figure-4: Objective function line 7T + 5C = Profit; the optimal point is (T = 320, C = 360).]
Thus the company should produce 320 tables and 360 chairs for maximum daily profit.

2.3 LP Characteristics:

• Feasible Region: The set of points that satisfies all constraints


• Corner Point Property: An optimal solution must lie at one or more corner points
• Optimal Solution: The corner point with the best objective function value is optimal

2.3.1 Special Situation in LP:

1. Redundant Constraints - do not affect the feasible region


Example: x < 10
x < 12
The second constraint is redundant because it is less restrictive.

2. Infeasibility – when no feasible solution exists (there is no feasible region)
Example: x < 10
x > 15

3. Alternate Optimal Solutions – when there is more than one optimal solution

Max 2T + 2C
Subject to: T + C < 10
            T < 5
            C < 6
            T, C > 0

[Figure-5: All points on the highlighted segment of the boundary are optimal.]

4. Unbounded Solutions – when nothing prevents the solution from becoming


infinitely large

Max 2T + 2C
Subject to: 2T + 3C > 6
            T, C > 0

[Figure-6: The feasible region is unbounded in the direction of increasing objective value.]

2.4 Numerical Example-1:
Maximize Z =5x 1 + 8x 2
Subject to 3x 1 + 2x 2 ≤ 36
x 1 + 2x 2 ≤ 20
3x 1 + 4x 2 ≤ 42
x1, x2 ≥ 0

Solution of the above program in graphical method:


The solution space satisfying the given constraints and meeting the non-negativity
restrictions x 1 , x 2 ≥ 0 is shown shaded in Figure-7 below. Any point in this shaded region is
a feasible solution to the given problem.

Figure-7
Feasible region for example - 1
The vertices of the convex feasible region OABCD are O(0,0), A(12,0), B(10,3), C(2,9) and D(0,10).
The value of the objective function at these points are:
Z(O)=0 , Z(A)=60, Z(B)=74, Z(C)=82, and Z(D)=80 .
Since the maximum value of the objective function is 82 and it occurs at C(2,9), the optimal
solution to the given problem is x 1 =2, x 2 =9 with
Z max =82 .
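These vertex evaluations can be confirmed with a few lines of Python (a simple check, independent of the graphical construction):

```python
# Evaluate Z = 5x1 + 8x2 at every vertex of the feasible region of Example-1.
Z = lambda x1, x2: 5 * x1 + 8 * x2
vertices = {'O': (0, 0), 'A': (12, 0), 'B': (10, 3), 'C': (2, 9), 'D': (0, 10)}

values = {v: Z(*p) for v, p in vertices.items()}
print(values)                                              # per-vertex values
print(max(values, key=values.get), max(values.values()))   # C 82
```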

2.5 Mathematica Codes for Graphical Representation of
Feasible Region:
In this section we have developed a computational technique using Mathematica codes to
show the feasible region of two-dimensional linear programming problems. This method also gives
the optimal solution. We have illustrated two numerical examples (maximization and minimization) to
demonstrate our method.
2.5.1 Numerical Example-2:
Maximize Z = 2x 1 + 3x 2
Subject to x 1 + x 2 ≤ 30
x2 ≥ 3
x 2 ≤ 12
x 1 -x 2 ≥ 0

0 ≤ x 1 ≤ 20.

Solution:

The solution space satisfying the given constraints and meeting the non-negativity restrictions
x 1 ≥ 0 and x 2 ≥ 0 is shown shaded in Fig. 8. Any point in this shaded region is a feasible solution to
the given problem.
Mathematica Codes for Graphical Representation:

<< Graphics`ImplicitPlot`
<< Graphics`Colors`
<< Graphics`Arrow`
l1 = ImplicitPlot[{x1 + x2 == 30, x2 == 3, x2 == 12, x1 - x2 == 0, x1 == 20},
  {x1, 0, 25}, {x2, 0, 25}, PlotStyle -> {Blue, Maroon, Green, Brown, Purple},
  DisplayFunction -> Identity];
p1 = Graphics[{Maroon, Polygon[{{3, 3}, {12, 12}, {18, 12}, {20, 10}, {20, 3}}]}];
t1 = Graphics[{Text["A(3,3)", {3.5, 2.5}], Text["B(12,12)", {12.5, 12.5}],
  Text["C(18,12)", {18.6, 12.5}], Text["D(20,10)", {22.5, 10.5}],
  Text["E(20,3)", {22.2, 2.5}]}];
t2 = Graphics[{Text["x2 ≥ 3", {23, 3.5}], Text["x2 ≤ 12", {23, 12.5}],
  Text["x1 - x2 ≥ 0", {5.2, 8}], Text["x1 + x2 ≤ 30", {13, 20}],
  Text["x1 ≤ 20", {21.5, 6}]}];
a1 = Graphics[{Arrow[{5, 25}, {4, 24}, HeadScaling -> Relative],
  Arrow[{25, 25}, {26, 24}, HeadScaling -> Relative],
  Arrow[{25, 12}, {25, 11}, HeadScaling -> Relative],
  Arrow[{25, 3}, {25, 4}, HeadScaling -> Relative],
  Arrow[{20, 25}, {19, 25}, HeadScaling -> Relative]}];
Show[{l1, p1, t1, t2, a1}, AxesLabel -> {"x1", "x2"},
  Ticks -> {{3, 6, 9, 12, 15, 18, 21}, {3, 6, 9, 12, 15, 18, 21}},
  DisplayFunction -> $DisplayFunction]

Figure-8
Feasible region for example 2

The co-ordinates of the five vertices of the convex region ABCDE are A(3,3),

B(12,12), C(18,12), D(20,10) and E(20,3).

Mathematica Codes for Optimal Value of the Objective Function:

INPUT:
z[x1_, x2_] := 2 x1 + 3 x2;
v = {z[3, 3], z[12, 12], z[18, 12], z[20, 10], z[20, 3]}
optimal = Max[v]

OUTPUT:
{ 15 , 60 , 72 , 70 , 49 }
72
Since the maximum value of Z is 72, which occurs at the point C(18,12), the solution to the
given problem is x 1 = 18, x 2 = 12 with
Z max = 72.

Remark-1: If we solve this problem by the usual simplex method, we need to use artificial
variables and to apply the two-phase simplex method or the Big-M simplex method, which needs 7
iterations. That is a time-consuming and clumsy approach.

2.5.2 Numerical Example-3:


(LP) Minimize Z = - x 1 + 2x 2
Subject to - x 1 + 3x 2 ≤ 10
x1 + x2 ≤ 6
x1 - x2 ≤ 2
x1, x2 ≥ 0
Solution :
The solution space satisfying the given constraints and meeting the non-negativity
restrictions x 1 ≥0 and x 2 ≥0 is shown shaded in Fig. 9. Any point in this feasible region is
a feasible solution to the given problem.

Mathematica Codes for Graphical Representation:

l3 = ImplicitPlot[{-x1 + 3*x2 == 10, x1 + x2 == 6, x1 - x2 == 2}, {x1, -2, 10},
    {x2, -2, 8}, PlotStyle -> {Blue, Maroon, Green}, DisplayFunction -> Identity];
p3 = Graphics[{Hue[.55], Polygon[{{0, 0}, {0, 10/3}, {2, 4}, {4, 2}, {2, 0}}]}];
t5 = Graphics[{Text["O(0,0)", {-.15, -.2}], Text["A(0,10/3)", {1.2, 3.5}],
    Text["B(2,4)", {2.8, 4.4}], Text["C(4,2)", {5, 2.1}],
    Text["D(2,0)", {3.2, .2}]}];
t6 = Graphics[{Text["-x1+3*x2 ≤ 10", {4.5, 5.5}], Text["x1+x2 ≤ 6", {6.5, .8}],
    Text["x1-x2 ≤ 2", {7.5, 4}]}];
a3 = Graphics[{Arrow[{-2, 8}, {-2.5, 7.5}, HeadScaling -> Relative],
    Arrow[{10, 8}, {9.5, 8.5}, HeadScaling -> Relative],
    Arrow[{10, 6.7}, {10.2, 5.5}, HeadScaling -> Relative]}];
Show[{l3, p3, t5, t6, a3}, AxesLabel -> {"x1", "x2"},
  Ticks -> {{2, 4, 6, 8, 10}, {2, 4, 6, 8}},
  DisplayFunction -> $DisplayFunction]

Figure-9
Feasible region for example 3

The coordinates of the vertices of the convex polygon OABCD are O(0,0), A(0,10/3), B(2,4),
C(4,2) and D(2,0).

Mathematica Codes for Optimal Value of the Objective Function:

INPUT:
z[x1_, x2_] := -x1 + 2 x2;
v = {z[0, 0], z[0, 10/3], z[2, 4], z[4, 2], z[2, 0]}
optimal = Min[v]

OUTPUT:
{ 0 , 20/3 , 6 , 0 , –2 }
-2

Since the minimum value of Z is -2, which occurs at the vertex D(2,0), the solution to the
given problem is x1 = 2, x2 = 0 with Zmin = -2.
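The vertices themselves can also be generated mechanically rather than read off the plot: intersect every pair of constraint boundaries and keep the feasible points. A small Python sketch (our own illustration; the constraints and objective are those of Example 3, and exact rational arithmetic avoids rounding):

```python
from fractions import Fraction as F
from itertools import combinations

# Each constraint as a*x1 + b*x2 <= c; the sign restrictions x1 >= 0 and
# x2 >= 0 are rewritten as -x1 <= 0 and -x2 <= 0.
cons = [(F(-1), F(3), F(10)), (F(1), F(1), F(6)), (F(1), F(-1), F(2)),
        (F(-1), F(0), F(0)), (F(0), F(-1), F(0))]

def intersect(c1, c2):
    """Intersection point of the two boundary lines, or None if parallel."""
    a1, b1, d1 = c1
    a2, b2, d2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c for a, b, c in cons)

vertices = {p for c1, c2 in combinations(cons, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)}

def z(p):
    return -p[0] + 2 * p[1]

best = min(vertices, key=z)
print(sorted(vertices), z(best))   # five vertices; minimum z = -2 at (2, 0)
```

The five surviving intersection points are exactly O, A, B, C and D above, and the minimum of the objective over them is -2 at D(2,0).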

2.6 Conclusion:

The solution of a linear programming problem can be found by the graphical
method as well as by numerical methods. However, we can only use the graphical method when the
problem is two-dimensional. Solving an LP problem graphically also requires
plotting the graph accurately, which is difficult and time consuming. To
overcome these difficulties, in this section we developed a computational technique using
Mathematica codes to show the feasible region of two-dimensional linear programming
problems, and our Mathematica codes also give the optimal solution. In the usual simplex
method, we need to use artificial variables and apply the two-phase simplex method or the Big-M
simplex method when the set of constraints is not in canonical form, which needs many iterations
and is also time consuming and clumsy. By applying our computational technique using
Mathematica codes we can solve such problems easily.

Chapter 3

SIMPLEX METHOD AND COMPUTER ORIENTED


ALGORITHM FOR SOLVING LINEAR PROGRAMMING
PROBLEMS

3.1 Introduction:

The simplex method is an iterative procedure for solving a linear program in a
finite number of steps, and it provides all the information about the program. It also indicates
whether or not the program is feasible. If the program is feasible, it either finds an optimal
solution or indicates that an unbounded solution exists. G.B. Dantzig first developed this
method in 1950. Following Dantzig [1963], Kambo [1984] and Gillet [1988], we describe the
simplex method as follows.

3.2 Simplex Method:


Basically the simplex method is an iterative procedure that can be used to solve
any linear programming model if the needed computer time and storage are available. It is
assumed that the original linear programming model

Maximize $z=\sum_{j=1}^{r}c_j x_j$

Subject to: $\sum_{j=1}^{r}a_{ij}x_j\;(\ge,=,\le)\;b_i,\qquad b_i\ge 0,\quad i=1,2,\dots,m$

all $x_j\ge 0$

has been converted to the equivalent standard LP model

Maximize $z=\sum_{j=1}^{n}c_j x_j$

Subject to: $\sum_{j=1}^{n}a_{ij}x_j=b_i,\qquad i=1,2,\dots,m$

all $x_j\ge 0$
which includes slack variables that have been added to the left side of each less-than-or-equal-to
constraint, surplus variables that have been subtracted from the left side of
each greater-than-or-equal-to constraint, and artificial variables that have been added to the
left side of each greater-than-or-equal-to constraint and each equality. It is assumed that the
profit coefficients for the slack and surplus variables are zero, while the profit coefficients
for the artificial variables are arbitrarily small (algebraically) negative numbers, say -M.
The equivalent model necessarily assures us that each equation contains a variable with a
coefficient of 1 in that equation and a coefficient of zero in each of the other equations. If
the original constraint was a less-than-or-equal-to constraint, the slack variable in the
corresponding equation satisfies the condition just stated. Likewise, the artificial
variables added to the greater-than-or-equal-to constraints and equalities satisfy the
condition for each of the remaining equations in the equivalent model. These slack and
artificial variables are the basic variables in the initial basic feasible solution of the
equivalent problem.

The equivalent model is now rewritten as

Maximize: z

Subject to:

$$z-\sum_{j=1}^{n}c_j x_j=0 \qquad (3.1)$$

$$\sum_{j=1}^{n}a_{ij}x_j=b_i,\qquad i=1,2,\dots,m \qquad (3.2)$$

all $x_j\ge 0$

Since $c_j=-M$ for each artificial variable, we must multiply by $-M$ each equation
represented by (3.2) that contains an artificial variable and add the resulting equations
to equation (3.1) to give:
Maximize: z

Subject to:

$$z-\sum_{j=1}^{n}c_j x_j=b_0 \qquad (3.3)$$

$$\sum_{j=1}^{n}a_{ij}x_j=b_i,\qquad i=1,2,\dots,m \qquad (3.4)$$

all $x_j\ge 0$

where $b_0=-M\sum_{*}b_k$ and $*$ represents the equations containing artificial variables.

This assures us that each equation in (3.4) contains a slack or artificial variable that
has a coefficient of 1 in that equation and a coefficient of zero in each of the other
equations in (3.4), as well as in equation (3.3). Equation (3.3) will be referred to as the
objective function equation. We will now present the general simplex method.

3.2.1 Computational steps for solving (LP) in simplex method:

The computational steps of the simplex method for solving an (LP) which is in
canonical form are as follows: (for maximization problem).

Step 1 : Express the problem in standard form.

Step 2 : Start with an initial basic feasible solution in canonical form and set up the Initial
table.

Step 3: Use the inner-product rule to find the relative profit factors $\bar c_j$ as follows:
$\bar c_j = c_j - z_j = c_j -$ (inner product of $c_B$ and the column corresponding to $x_j$ in the
canonical system).
Step 4: (Choice of the entering variable into the basis)

If all $\bar c_j\le 0$, the current basic feasible solution is optimal. Otherwise, select the
non-basic variable with the most positive $\bar c_j$ to enter the basis;
the corresponding column is called the pivot column.

Step 5 : (Choice of the outgoing variable from the basis )


To determine the outgoing variable from the basis, we examine each element of the
pivot column to observe how far the non-basic variable can be increased. For those
constraints in which the non-basic variable has a positive coefficient, the limit is given by
the ratio of the R.H.S. constant to that positive coefficient. For the other constraints the limit
is set to ∞. The constraint with the lowest limit determines the pivot row, and the basic
variable in that constraint will be replaced by the non-basic variable. The element at the
intersection of the pivot row and the pivot column is called the pivot element.

Since the determination of the variable to leave the basis involves the calculation
of ratios and the selection of the minimum ratio, this rule is generally called the minimum
ratio rule.

Step 6 : Perform the pivot operation to get the new table and the basic feasible solution.
That is,

(1) Divide all elements of the pivot row by the pivot element.
(2) Then, in order to obtain zeros in the other places of the pivot column, add
suitable multiples of the transformed pivot row to the remaining rows.

Step 7: Compute the relative profit factors by using the inner-product rule. Return to Step-4.

Remark 2.2.1: Each sequence of Step-4 to Step-7 is called an iteration of the simplex
method. Thus each iteration gives a new table and an improved basic feasible solution.

Remark 2.2.2: An alternative optimal solution is indicated whenever there exists a non-basic
variable whose relative profit factor $\bar c_j$ is zero in the optimal table. Otherwise the
solution is unique.

Remark 2.2.3: If all the elements in the pivot column are non-positive, then this indicates
that the problem has an unbounded solution.
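Steps 1-7 above can be condensed into a short computational sketch. The following pure-Python implementation (our own illustration, not code from the thesis) assumes all constraints are of "≤" type with non-negative right-hand sides, so the slack variables give the initial canonical basis and no artificial variables are needed; it uses exact rational arithmetic throughout:

```python
from fractions import Fraction

def simplex_max(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0, with every b_i >= 0."""
    m, n = len(A), len(c)
    # Steps 1-2: standard-form tableau [A | I | b] with exact fractions.
    T = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(1 if i == k else 0) for k in range(m)]
         + [Fraction(b[i])] for i in range(m)]
    cost = [Fraction(x) for x in c] + [Fraction(0)] * m
    basis = list(range(n, n + m))               # slacks are basic initially
    while True:
        # Step 3: relative profit factors via the inner-product rule.
        cb = [cost[j] for j in basis]
        rel = [cost[j] - sum(cb[i] * T[i][j] for i in range(m))
               for j in range(n + m)]
        # Step 4: entering variable = most positive relative profit factor.
        piv_col = max(range(n + m), key=lambda j: rel[j])
        if rel[piv_col] <= 0:
            break                               # all factors <= 0: optimal
        # Step 5: minimum ratio rule for the outgoing variable.
        ratios = [(T[i][-1] / T[i][piv_col], i)
                  for i in range(m) if T[i][piv_col] > 0]
        if not ratios:
            raise ValueError("unbounded objective")
        _, piv_row = min(ratios)
        # Step 6: pivot operation.
        basis[piv_row] = piv_col
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m):
            if i != piv_row and T[i][piv_col] != 0:
                f = T[i][piv_col]
                T[i] = [v - f * w for v, w in zip(T[i], T[piv_row])]
    x = [Fraction(0)] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return x, sum(cost[j] * x[j] for j in range(n))

# The numerical example of Chapter 4:  max 3x1 + 5x2 + 4x3
x, z = simplex_max([3, 5, 4], [[1, 3, 0], [0, 2, 5], [3, 2, 4]], [8, 10, 15])
print(x, z)   # optimum Z = 900/43 at x = (89/43, 85/43, 52/43)
```

For constraints of "≥" or "=" type, the artificial variable technique of Section 3.3 must be applied before this routine can start.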
3.2.2 Properties of the Simplex Method:

The important properties of the simplex method are summarized here for convenient ready
reference.

i) The simplex method for maximizing the objective function starts at a
basic feasible solution for the equivalent model and moves to an adjacent
basic feasible solution that does not decrease the value of the objective
function. If such a solution does not exist, an optimal solution for the
equivalent model has been reached. That is, if at some point all of the coefficients of the
non-basic variables in the objective function equation are greater than or
equal to zero, then an optimal solution for the equivalent
model has been reached.

ii) If an artificial variable appears in an optimal solution of the equivalent model at a
non-zero level, then no feasible solution for the original model exists. On
the contrary, if the optimal solution of the equivalent model does not contain
an artificial variable at a non-zero level, the solution is also optimal for
the original model.
iii) If all of the slack, surplus, and artificial variables are zero when an
optimal solution of the equivalent model is reached, then all of the
constraints in the original model are strict "equalities" for the values of the
variables that optimize the objective function.
iv) If a non-basic variable has a zero coefficient in the objective function
equation when an optimal solution is reached, there are multiple optimal
solutions. In fact, there is an infinity of optimal solutions. The simplex
method finds only one optimal solution and stops.
v) Once an artificial variable leaves the set of basic variables (the basis), it
will never enter the basis again. So all calculations for that variable can be
ignored in future steps.
vi) When selecting the variable to leave the current basis:
a) If two or more ratios are smallest, choose one arbitrarily.
b) If a positive ratio does not exist, the objective function in the original
model is not bounded by the constraints. Thus, a finite optimal solution for
the original model does not exist.
vii) If a basis has a variable at the zero level, it is called a degenerate basis.
viii) Although cycling is possible, there have never been any practical
problems for which the simplex method failed to converge.

The standard form of the linear programming problem(LP) may be either

1) in canonical form
or 2) not in canonical form .

3.2.3 The standard form of (LP) is in canonical form:


The standard linear programming problem (LP) is of the canonical form if

(LP1) Maximize Z = cx

Subject to $I_m x_B + N x_N = b$

where $I_m$ is the m × m identity matrix, $x_B=(x_1,x_2,\dots,x_m)$ is the vector of basic
variables, and $N=(a_{ij})$ is an m × (n-m) submatrix formed by the remaining columns of A.

If all of the constraints are of "≤" type, or can be converted to "≤" type, and all
R.H.S. constants $b_i$ (i = 1, 2, ..., m) are non-negative, the canonical form can easily be
obtained. Then we can form the initial basic feasible simplex table.

3.2.4 The Standard Form of (LP) is Not in a Canonical Form:


Some linear programming problems may not have a readily available canonical
form. In these problems at least one of the constraints is of either "=" or "≥" type. In such
a problem one has to find a basic feasible solution in canonical form before setting up the initial
simplex table. In such cases we follow the artificial variable technique.

3.3 Artificial Variable Technique:


In this technique, first the linear programming problem is converted to standard
form, then each constraint is examined for the existence of a basic variable. If none is
available, a new variable is added to act as the basic variable in that constraint. These
new variables are termed artificial variables. There are two methods available to solve
such problems.
i) The big-M simplex method.
ii) The two-Phase simplex method.

3.3.1 The Big-M Simplex Method:

This method consists of the following basic steps:

Step-1: Express the linear programming problem (LP) in standard form.

Step-2: Add an artificial variable $w_i$ to the left-hand side of each constraint of "="
or "≥" type in the original problem. We would like to get rid of these variables
and would not allow them to appear in the final solution. To do so, these artificial
variables are assigned the cost M in a minimization problem and the
profit -M in a maximization problem, with the assumption that M is a very large positive
number.

Step-3: Continue with the regular steps of the simplex method of subsection 3.2.1.

While making iterations using the simplex method, one of the following cases may
arise:

Case-I: If no artificial variable remains at a positive level and the optimality
condition is satisfied, then the solution is optimal.

Case-II: When the Big-M simplex method terminates with an optimal table, it is
sometimes possible for one or more artificial variables to remain as basic variables at a
positive level. This implies that the original problem is infeasible.

Remark 2.2.4: Remark 2.2.2 and Remark 2.2.3 are also applicable here.
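Step-1 and Step-2 of the Big-M method are purely mechanical and can be sketched in a few lines of Python (the function name, layout and sample data are our own illustration, not code from the thesis): slack columns are appended for "≤" rows, surplus-plus-artificial columns for "≥" rows, and artificial columns for "=" rows, with the artificials given profit -M.

```python
from fractions import Fraction

def big_m_standard_form(c, A, ops, b, M=10**6):
    """Build the standard form for: maximize c.x s.t. A x (<=, =, >=) b,
    x >= 0, all b_i >= 0.  Returns (c', A', b', artificial column indices);
    artificial variables get profit coefficient -M."""
    A2 = [[Fraction(x) for x in row] for row in A]
    c2 = [Fraction(x) for x in c]
    artificial = []

    def add_col(row_idx, value, profit):
        # Append one column: `value` in the given row, zero elsewhere.
        for i, row in enumerate(A2):
            row.append(Fraction(value if i == row_idx else 0))
        c2.append(Fraction(profit))

    for i, op in enumerate(ops):
        if op == "<=":
            add_col(i, 1, 0)                        # slack variable
        elif op == ">=":
            add_col(i, -1, 0)                       # surplus variable
            add_col(i, 1, -M); artificial.append(len(c2) - 1)
        else:                                       # "=" constraint
            add_col(i, 1, -M); artificial.append(len(c2) - 1)
    return c2, A2, [Fraction(x) for x in b], artificial

# Illustrative data: one constraint of each type.
c2, A2, b2, art = big_m_standard_form(
    [1, 2], [[1, 1], [1, -1], [2, 1]], ["<=", ">=", "="], [4, 1, 5])
print(len(c2), art)   # 6 [4, 5]
```

The slack and artificial columns produced this way carry exactly one coefficient of 1 in their own row and zeros elsewhere, so together they form the initial basic feasible solution required by Step-3.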

3.3.2 The Two-Phase Simplex Method:


In this method the linear programming problem is solved in two phases.
Phase-I: As in the Big-M simplex method, one has to add an artificial variable $w_i$ to
each of the constraints of "≥" or "=" type in the original problem. Instead of the
original objective function, an artificial objective function $y=\sum w_i$ is introduced and is then
minimized subject to the constraints of the original problem, for a minimization problem. For
a maximization problem the artificial objective function is the negative of that of the minimization
problem.

Then the following cases arise (for a maximization problem):

Case-1: If max $y=-\sum w_i=0$ and no artificial variable appears in the basis, then a basic
feasible solution to the original problem is obtained. We then move to Phase-II.

Case-2: If max $y=-\sum w_i<0$, that is, at least one of the artificial variables appears in the basis
at a positive level, then the original problem has no feasible solution and the procedure
terminates.

Remark 2.2.5: The artificial objective function can always be minimized, whatever the
objective function of the original problem, and thus one can avoid the negative sign in the artificial
objective function.

Phase-II: In this phase, the basic feasible solution found at the end of Phase-I is
optimized with respect to the original objective function. The simplex method is once
again applied to determine the optimal solution as in subsection 3.2.1.

Remark 2.2.6 : Remark 2.2.2 and Remark 2.2.3 are also applicable here.

Chapter 4

MORE THAN ONE BASIC VARIABLES REPLACEMENT


IN SIMPLEX METHOD FOR SOLVING LINEAR
PROGRAMMING PROBLEMS

4.1 Paranjape’s Two-Basic Variables Replacement Method for


Solving (LP):
In this section we present the work of Paranjape, in which he studied the
replacement of two basic variables by two non-basic variables at each iteration of the simplex
method for solving (LP).

4.1.1 Algorithm:
Let $\hat x_B$ be another basic feasible solution to the (LP), where $\hat B=(\hat b_1,\hat b_2,\dots,\hat b_m)$ is the
basis in which $b_{r_1}$ and $b_{r_2}$ are replaced by $a_{u_1}$ and $a_{u_2}$ respectively, columns of A but not in B.

The columns of $\hat B$ are given by

$\hat b_i=b_i$ for $i\ne r_1,r_2$

$\hat b_{r_1}=a_{u_1}$

$\hat b_{r_2}=a_{u_2}$

Then the new basic variables can be expressed in terms of the original ones and $y_{iu_1}$ and $y_{iu_2}$,

i.e. $a_{u_1}=\sum_{i=1}^{m}y_{iu_1}b_i$

$$\Rightarrow\ y_{r_1u_1}b_{r_1}+y_{r_2u_1}b_{r_2}=a_{u_1}-\sum_{i\ne r_1,r_2}^{m}y_{iu_1}b_i \qquad (4.1)$$

Similarly, $\ y_{r_1u_2}b_{r_1}+y_{r_2u_2}b_{r_2}=a_{u_2}-\sum_{i\ne r_1,r_2}^{m}y_{iu_2}b_i \qquad (4.2)$

Multiplying equation (4.1) by $y_{r_2u_2}$ and (4.2) by $y_{r_2u_1}$ and subtracting, we have

$$b_{r_1}=\frac{1}{K}\left(a_{u_1}y_{r_2u_2}-a_{u_2}y_{r_2u_1}\right)+\frac{1}{K}\sum_{i\ne r_1,r_2}^{m}b_i\left(y_{iu_2}y_{r_2u_1}-y_{iu_1}y_{r_2u_2}\right)
=\frac{1}{K}\begin{vmatrix}a_{u_1}-\sum_{i\ne r_1,r_2}^{m}y_{iu_1}b_i & y_{r_2u_1}\\[4pt] a_{u_2}-\sum_{i\ne r_1,r_2}^{m}y_{iu_2}b_i & y_{r_2u_2}\end{vmatrix}$$

Similarly,

$$b_{r_2}=\frac{1}{K}\begin{vmatrix}y_{r_1u_1} & a_{u_1}-\sum_{i\ne r_1,r_2}^{m}y_{iu_1}b_i\\[4pt] y_{r_1u_2} & a_{u_2}-\sum_{i\ne r_1,r_2}^{m}y_{iu_2}b_i\end{vmatrix}$$

where $K=\begin{vmatrix}y_{r_1u_1} & y_{r_1u_2}\\ y_{r_2u_1} & y_{r_2u_2}\end{vmatrix}$
Now $x_B=B^{-1}b$

$$\Rightarrow b=Bx_B=\sum_{i=1}^{m}b_i x_{B_i}=\sum_{i\ne r_1,r_2}^{m}b_i x_{B_i}+b_{r_1}x_{B_{r_1}}+b_{r_2}x_{B_{r_2}}$$

Substituting the determinant expressions for $b_{r_1}$ and $b_{r_2}$,

$$b=\sum_{i\ne r_1,r_2}^{m}b_i x_{B_i}+\frac{x_{B_{r_1}}}{K}\begin{vmatrix}a_{u_1}-\sum y_{iu_1}b_i & y_{r_2u_1}\\ a_{u_2}-\sum y_{iu_2}b_i & y_{r_2u_2}\end{vmatrix}+\frac{x_{B_{r_2}}}{K}\begin{vmatrix}y_{r_1u_1} & a_{u_1}-\sum y_{iu_1}b_i\\ y_{r_1u_2} & a_{u_2}-\sum y_{iu_2}b_i\end{vmatrix}$$

Collecting the coefficients of the $b_i$, $a_{u_1}$ and $a_{u_2}$,

$$b=\sum_{i\ne r_1,r_2}^{m}b_i\left\{x_{B_i}-\frac{y_{iu_1}}{K}\begin{vmatrix}x_{B_{r_1}} & y_{r_1u_2}\\ x_{B_{r_2}} & y_{r_2u_2}\end{vmatrix}-\frac{y_{iu_2}}{K}\begin{vmatrix}y_{r_1u_1} & x_{B_{r_1}}\\ y_{r_2u_1} & x_{B_{r_2}}\end{vmatrix}\right\}+\frac{a_{u_1}}{K}\begin{vmatrix}x_{B_{r_1}} & y_{r_1u_2}\\ x_{B_{r_2}} & y_{r_2u_2}\end{vmatrix}+\frac{a_{u_2}}{K}\begin{vmatrix}y_{r_1u_1} & x_{B_{r_1}}\\ y_{r_2u_1} & x_{B_{r_2}}\end{vmatrix}$$

$$\Rightarrow b=\sum_{i\ne r_1,r_2}^{m}\hat b_i\hat x_{B_i}+a_{u_1}\hat x_{B_{r_1}}+a_{u_2}\hat x_{B_{r_2}} \qquad (4.3)$$

where

$$\hat x_{B_i}=x_{B_i}-\left(y_{iu_1}\hat x_{B_{r_1}}+y_{iu_2}\hat x_{B_{r_2}}\right) \qquad (4.4)$$

$$\hat x_{B_{r_1}}=\frac{1}{K}\begin{vmatrix}x_{B_{r_1}} & y_{r_1u_2}\\ x_{B_{r_2}} & y_{r_2u_2}\end{vmatrix}=\theta_{u_1}\ (\text{say}),\qquad \hat x_{B_{r_2}}=\frac{1}{K}\begin{vmatrix}y_{r_1u_1} & x_{B_{r_1}}\\ y_{r_2u_1} & x_{B_{r_2}}\end{vmatrix}=\theta_{u_2}\ (\text{say}) \qquad (4.5)$$

Also

$$x_{B_{r_1}}=y_{r_1u_1}\hat x_{B_{r_1}}+y_{r_1u_2}\hat x_{B_{r_2}},\qquad x_{B_{r_2}}=y_{r_2u_1}\hat x_{B_{r_1}}+y_{r_2u_2}\hat x_{B_{r_2}} \qquad (4.6)$$

4.1.2 New Optimizing Value:

$$\hat Z=\sum_{i=1}^{m}\hat c_{B_i}\hat x_{B_i}=\sum_{i\ne r_1,r_2}^{m}\hat c_{B_i}\hat x_{B_i}+\hat c_{B_{r_1}}\hat x_{B_{r_1}}+\hat c_{B_{r_2}}\hat x_{B_{r_2}}$$

$$=\sum_{i\ne r_1,r_2}^{m}c_{B_i}\left\{x_{B_i}-\left(y_{iu_1}\hat x_{B_{r_1}}+y_{iu_2}\hat x_{B_{r_2}}\right)\right\}+c_{u_1}\hat x_{B_{r_1}}+c_{u_2}\hat x_{B_{r_2}}$$

where $\hat c_{B_i}=c_{B_i}$, $\hat c_{B_{r_1}}=c_{u_1}$, $\hat c_{B_{r_2}}=c_{u_2}$.

Writing the sum over all $i$ and using (4.6),

$$\hat Z=Z-c_{B_{r_1}}\left(y_{r_1u_1}\hat x_{B_{r_1}}+y_{r_1u_2}\hat x_{B_{r_2}}\right)-c_{B_{r_2}}\left(y_{r_2u_1}\hat x_{B_{r_1}}+y_{r_2u_2}\hat x_{B_{r_2}}\right)-\sum_{i\ne r_1,r_2}^{m}c_{B_i}y_{iu_1}\hat x_{B_{r_1}}-\sum_{i\ne r_1,r_2}^{m}c_{B_i}y_{iu_2}\hat x_{B_{r_2}}+c_{u_1}\hat x_{B_{r_1}}+c_{u_2}\hat x_{B_{r_2}}$$

$$=Z-\sum_{i=1}^{m}c_{B_i}y_{iu_1}\hat x_{B_{r_1}}-\sum_{i=1}^{m}c_{B_i}y_{iu_2}\hat x_{B_{r_2}}+c_{u_1}\hat x_{B_{r_1}}+c_{u_2}\hat x_{B_{r_2}}$$

$$\therefore\ \hat Z=Z+\left(c_{u_1}-z_{u_1}\right)\hat x_{B_{r_1}}+\left(c_{u_2}-z_{u_2}\right)\hat x_{B_{r_2}}$$

4.1.3 Optimality Condition:


The value of the objective function will improve if $\hat Z>Z$

$$\Rightarrow Z+\left(c_{u_1}-z_{u_1}\right)\hat x_{B_{r_1}}+\left(c_{u_2}-z_{u_2}\right)\hat x_{B_{r_2}}>Z$$

$$\Rightarrow \left(c_{u_1}-z_{u_1}\right)\hat x_{B_{r_1}}+\left(c_{u_2}-z_{u_2}\right)\hat x_{B_{r_2}}>0$$

Therefore we get

(a) $\hat Z=Z$ when $\hat x_{B_{r_1}}$ and $\hat x_{B_{r_2}}$ are both separately equal to zero.

(b) $\hat Z>Z$ if

(i) $c_{u_1}-z_{u_1}>0$

(ii) $c_{u_2}-z_{u_2}>0$

In general, $c_j-z_j>0$.

4.1.4 Criterion-1: (Choice of the entering variables into the basis):

(i) Choose the $u_1$-th column of A for which $c_{u_1}-z_{u_1}$ is the greatest positive of
$c_j-z_j$, $j=1,2,\dots,n$.

(ii) Choose the $u_2$-th column of A for which $c_{u_2}-z_{u_2}$ is the greatest positive of
$c_j-z_j$, $j=1,2,\dots,n$, $j\ne u_1$.

4.1.5 Criterion-2: (Choice of the outgoing variables from the basis):

Since the coefficients of the $b_i$ in the expression for $b$ should always be non-negative,
the conditions on the choice of the $r_1$-th and $r_2$-th columns of B are

(i) $\hat x_{B_i}\ge 0$

(ii) $\hat x_{B_{r_1}}\ge 0$

(iii) $\hat x_{B_{r_2}}\ge 0$

The above inequalities lead to the conditions for the selection of $x_{B_{r_1}}$ and
$x_{B_{r_2}}$ respectively as

(i) Choose $x_{B_{r_1}}$ for which $\dfrac{x_{B_{r_1}}}{y_{r_1u_1}}=\min_i\left\{\dfrac{x_{B_i}}{y_{iu_1}},\ y_{iu_1}>0\right\}$

(ii) Choose $x_{B_{r_2}}$ for which $\dfrac{x_{B_{r_2}}}{y_{r_2u_2}}=\min_i\left\{\dfrac{x_{B_i}}{y_{iu_2}},\ y_{iu_2}>0\right\}$
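Equation (4.5) is simply Cramer's rule applied to the 2 × 2 pivot block. A small Python sketch (the function name and the sample numbers are our own illustration, not taken from the thesis) computes $\theta_{u_1}$ and $\theta_{u_2}$ exactly:

```python
from fractions import Fraction

def two_var_replacement(K, xBr):
    """theta_u1, theta_u2 of equation (4.5): replace one column of the 2x2
    pivot block K = [[y_r1u1, y_r1u2], [y_r2u1, y_r2u2]] by the current
    values xBr = (x_Br1, x_Br2) of the leaving variables, divide by det K."""
    (a, b), (c, d) = K
    det_K = a * d - b * c                          # the determinant K
    theta_u1 = (xBr[0] * d - b * xBr[1]) / det_K   # column 1 replaced by xBr
    theta_u2 = (a * xBr[1] - xBr[0] * c) / det_K   # column 2 replaced by xBr
    return theta_u1, theta_u2

# Illustrative data for K and the leaving variables' current values.
K = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(3)]]
xBr = (Fraction(4), Fraction(7))
t1, t2 = two_var_replacement(K, xBr)
print(t1, t2)   # 1 2
```

Consistency with equation (4.6) can be checked directly: multiplying K by the vector $(\theta_{u_1},\theta_{u_2})$ returns $(x_{B_{r_1}},x_{B_{r_2}})$.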

Remark 4.1.1: All the elements in the expression of K are to be non-negative. This
difficulty can be overcome as follows:

After choosing the $u_1$-th and $u_2$-th columns and forming K, if one sees that all the
elements of K are not non-negative, then choose the $v'$-th column of A in lieu of the $u_2$-th
column by the alternative criterion: choose as the $v'$-th that column of A for which
$c_{v'}-z_{v'}$ is the greatest positive $c_j-z_j$; $j=1,2,\dots,n$; $j\ne u_1,u_2$.

Remark 4.1.2: If the two non-basic variables happen to replace the same basic variable, i.e.
$\min_i\left\{\dfrac{x_{B_i}}{y_{iu_1}},\ y_{iu_1}>0\right\}$ and $\min_i\left\{\dfrac{x_{B_i}}{y_{iu_2}},\ y_{iu_2}>0\right\}$ occur for the same value of $i$, then one can
overcome this difficulty by the same procedure as in Remark 4.1.1.

Remark 4.1.3:

In the above discussion we have expressed the mathematical results in determinant form,
which is easily accessible to readers.

4.2 Agrawal and Verma’s Three Basic Variables Replacement


Method for Solving (LP):

In this section we present the work of Agrawal and Verma, in which they studied
the replacement of three basic variables by three non-basic variables at each iteration of
the simplex method for solving (LP).

4.2.1 Algorithm:
Let $\hat x_B$ be another basic feasible solution, where $\hat B=(\hat b_1,\hat b_2,\dots,\hat b_m)$ is the basis in
which $b_{r_1}$, $b_{r_2}$ and $b_{r_3}$ are replaced by $a_{u_1}$, $a_{u_2}$ and $a_{u_3}$ respectively of A but not in B.

The columns of $\hat B$ are given by

$\hat b_i=b_i$, for $i\ne r_1,r_2,r_3$

$\hat b_{r_1}=a_{u_1}$, $\hat b_{r_2}=a_{u_2}$, $\hat b_{r_3}=a_{u_3}$

Then the new basic variables can be expressed in terms of the original ones and
$y_{iu_1}$, $y_{iu_2}$ and $y_{iu_3}$,

i.e. $a_{u_1}=\sum_{i=1}^{m}y_{iu_1}b_i$

$$\Rightarrow y_{r_1u_1}b_{r_1}+y_{r_2u_1}b_{r_2}+y_{r_3u_1}b_{r_3}=a_{u_1}-\sum_{i\ne r_1,r_2,r_3}^{m}y_{iu_1}b_i \qquad (4.7)$$

Similarly, $\ y_{r_1u_2}b_{r_1}+y_{r_2u_2}b_{r_2}+y_{r_3u_2}b_{r_3}=a_{u_2}-\sum_{i\ne r_1,r_2,r_3}^{m}y_{iu_2}b_i \qquad (4.8)$

and $\ y_{r_1u_3}b_{r_1}+y_{r_2u_3}b_{r_2}+y_{r_3u_3}b_{r_3}=a_{u_3}-\sum_{i\ne r_1,r_2,r_3}^{m}y_{iu_3}b_i \qquad (4.9)$

Solving the above three equations for $b_{r_1}$, $b_{r_2}$ and $b_{r_3}$, and writing
$A_q=a_{u_q}-\sum_{i\ne r_1,r_2,r_3}^{m}y_{iu_q}b_i$ for brevity, we have

$$b_{r_1}=\frac{1}{k}\begin{vmatrix}A_1 & y_{r_2u_1} & y_{r_3u_1}\\ A_2 & y_{r_2u_2} & y_{r_3u_2}\\ A_3 & y_{r_2u_3} & y_{r_3u_3}\end{vmatrix},\qquad
b_{r_2}=\frac{1}{k}\begin{vmatrix}y_{r_1u_1} & A_1 & y_{r_3u_1}\\ y_{r_1u_2} & A_2 & y_{r_3u_2}\\ y_{r_1u_3} & A_3 & y_{r_3u_3}\end{vmatrix},\qquad
b_{r_3}=\frac{1}{k}\begin{vmatrix}y_{r_1u_1} & y_{r_2u_1} & A_1\\ y_{r_1u_2} & y_{r_2u_2} & A_2\\ y_{r_1u_3} & y_{r_2u_3} & A_3\end{vmatrix}$$

where $k=\begin{vmatrix}y_{r_1u_1} & y_{r_1u_2} & y_{r_1u_3}\\ y_{r_2u_1} & y_{r_2u_2} & y_{r_2u_3}\\ y_{r_3u_1} & y_{r_3u_2} & y_{r_3u_3}\end{vmatrix}$

Now $x_B=B^{-1}b$

$$\Rightarrow b=Bx_B=\sum_{i=1}^{m}b_i x_{B_i}=\sum_{i\ne r_1,r_2,r_3}^{m}b_i x_{B_i}+b_{r_1}x_{B_{r_1}}+b_{r_2}x_{B_{r_2}}+b_{r_3}x_{B_{r_3}}$$

Substituting the determinant expressions for $b_{r_1}$, $b_{r_2}$ and $b_{r_3}$ and collecting the
coefficients of the $b_i$, $a_{u_1}$, $a_{u_2}$ and $a_{u_3}$ (expanding each determinant along the
column containing the $A_q$), we obtain

$$b=\sum_{i\ne r_1,r_2,r_3}^{m}b_i\left\{x_{B_i}-\frac{y_{iu_1}}{k}\begin{vmatrix}x_{B_{r_1}} & y_{r_1u_2} & y_{r_1u_3}\\ x_{B_{r_2}} & y_{r_2u_2} & y_{r_2u_3}\\ x_{B_{r_3}} & y_{r_3u_2} & y_{r_3u_3}\end{vmatrix}-\frac{y_{iu_2}}{k}\begin{vmatrix}y_{r_1u_1} & x_{B_{r_1}} & y_{r_1u_3}\\ y_{r_2u_1} & x_{B_{r_2}} & y_{r_2u_3}\\ y_{r_3u_1} & x_{B_{r_3}} & y_{r_3u_3}\end{vmatrix}-\frac{y_{iu_3}}{k}\begin{vmatrix}y_{r_1u_1} & y_{r_1u_2} & x_{B_{r_1}}\\ y_{r_2u_1} & y_{r_2u_2} & x_{B_{r_2}}\\ y_{r_3u_1} & y_{r_3u_2} & x_{B_{r_3}}\end{vmatrix}\right\}$$

$$\qquad+\frac{a_{u_1}}{k}\begin{vmatrix}x_{B_{r_1}} & y_{r_1u_2} & y_{r_1u_3}\\ x_{B_{r_2}} & y_{r_2u_2} & y_{r_2u_3}\\ x_{B_{r_3}} & y_{r_3u_2} & y_{r_3u_3}\end{vmatrix}+\frac{a_{u_2}}{k}\begin{vmatrix}y_{r_1u_1} & x_{B_{r_1}} & y_{r_1u_3}\\ y_{r_2u_1} & x_{B_{r_2}} & y_{r_2u_3}\\ y_{r_3u_1} & x_{B_{r_3}} & y_{r_3u_3}\end{vmatrix}+\frac{a_{u_3}}{k}\begin{vmatrix}y_{r_1u_1} & y_{r_1u_2} & x_{B_{r_1}}\\ y_{r_2u_1} & y_{r_2u_2} & x_{B_{r_2}}\\ y_{r_3u_1} & y_{r_3u_2} & x_{B_{r_3}}\end{vmatrix}$$

$$\Rightarrow b=\sum_{i\ne r_1,r_2,r_3}^{m}\hat b_i\hat x_{B_i}+a_{u_1}\hat x_{B_{r_1}}+a_{u_2}\hat x_{B_{r_2}}+a_{u_3}\hat x_{B_{r_3}}$$


 ∧ ∧ ∧

Where x Bi = xBi −  yiu1 x Br1 + yiu 2 x Br2 + yiu 3 x Br3  4.10
 

xBr1 yr1u 2 yr1u 3 




= θu1 ( say ) 
1
x Br1 = xB yr u yr u
k r2 2 2 2 3
xBr yr3u 2 yr3u 3 
3 

yr1u1 xBr1 yr1u 3 


1 
x Br2 = yr u xB yr u = θu 2 ( say ) 4.11
k 2 1 r2 2 3 
yr3u1 xBr3 yr3u 3 

yr1u1 yr1u 2 xBr 

1 
= θu 3 ( say ) 
1
x Br3 = yr2 u1 yr2 u 2 xBr2
k 
yr3u1 yr3u 2 xBr3 


Also,
∧ ∧ ∧

x Br1 = y r1u1 x Br1 + y r1u2 x Br2 + y r1u3 x Br3 

∧ ∧ ∧

x Br2 = y r2u1 x Br1 + y r2u2 x Br2 + y r2u3 x Br3  4.12
∧ ∧ ∧

x Br3 = y r3u1 x Br1 + y r3u2 x Br2 + y r3u3 x Br3 


4.2.2 New Optimizing Value:


Substituting the new values of the variables in the objective function we get the new
objective function as follows:

$$\hat z=\sum_{i=1}^{m}\hat c_{B_i}\hat x_{B_i}=\sum_{i\ne r_1,r_2,r_3}^{m}\hat c_{B_i}\hat x_{B_i}+\hat c_{B_{r_1}}\hat x_{B_{r_1}}+\hat c_{B_{r_2}}\hat x_{B_{r_2}}+\hat c_{B_{r_3}}\hat x_{B_{r_3}}$$

$$=\sum_{i\ne r_1,r_2,r_3}^{m}c_{B_i}\left\{x_{B_i}-\left(y_{iu_1}\hat x_{B_{r_1}}+y_{iu_2}\hat x_{B_{r_2}}+y_{iu_3}\hat x_{B_{r_3}}\right)\right\}+c_{u_1}\hat x_{B_{r_1}}+c_{u_2}\hat x_{B_{r_2}}+c_{u_3}\hat x_{B_{r_3}}$$

where $\hat c_{B_i}=c_{B_i}$, $\hat c_{B_{r_1}}=c_{u_1}$, $\hat c_{B_{r_2}}=c_{u_2}$, $\hat c_{B_{r_3}}=c_{u_3}$.

Writing the sum over all $i$ and using (4.12),

$$\hat z=z-\sum_{i=1}^{m}c_{B_i}y_{iu_1}\hat x_{B_{r_1}}-\sum_{i=1}^{m}c_{B_i}y_{iu_2}\hat x_{B_{r_2}}-\sum_{i=1}^{m}c_{B_i}y_{iu_3}\hat x_{B_{r_3}}+c_{u_1}\hat x_{B_{r_1}}+c_{u_2}\hat x_{B_{r_2}}+c_{u_3}\hat x_{B_{r_3}}$$

$$\therefore\ \hat z=z+\left(c_{u_1}-z_{u_1}\right)\hat x_{B_{r_1}}+\left(c_{u_2}-z_{u_2}\right)\hat x_{B_{r_2}}+\left(c_{u_3}-z_{u_3}\right)\hat x_{B_{r_3}}$$

4.2.3 Optimality Condition:



The value of the objective function will improve if $\hat z>z$

$$\Rightarrow z+\left(c_{u_1}-z_{u_1}\right)\hat x_{B_{r_1}}+\left(c_{u_2}-z_{u_2}\right)\hat x_{B_{r_2}}+\left(c_{u_3}-z_{u_3}\right)\hat x_{B_{r_3}}>z$$

$$\Rightarrow \left(c_{u_1}-z_{u_1}\right)\hat x_{B_{r_1}}+\left(c_{u_2}-z_{u_2}\right)\hat x_{B_{r_2}}+\left(c_{u_3}-z_{u_3}\right)\hat x_{B_{r_3}}>0$$

But for the non-degenerate case $\hat x_{B_{r_1}},\hat x_{B_{r_2}},\hat x_{B_{r_3}}>0$. Hence we must have

(i) $c_{u_1}-z_{u_1}>0$

(ii) $c_{u_2}-z_{u_2}>0$

(iii) $c_{u_3}-z_{u_3}>0$

In general, $c_j-z_j>0$.

4.2.4 Criterion-1: (Choices of the entering variables into the basis):


(i) Choose the $u_1$-th column of A for which $c_{u_1}-z_{u_1}$ is the greatest positive of
$c_j-z_j$, $j=1,2,\dots,n$.

(ii) Choose the $u_2$-th column of A for which $c_{u_2}-z_{u_2}$ is the greatest positive of
$c_j-z_j$, $j=1,2,\dots,n$, $j\ne u_1$.

(iii) Choose the $u_3$-th column of A for which $c_{u_3}-z_{u_3}$ is the greatest positive of
$c_j-z_j$, $j=1,2,\dots,n$, $j\ne u_1,u_2$.

4.2.5 Criterion-2: (Choice of the outgoing variables from the basis):

(i) Choose $x_{B_{r_1}}$ for which $\dfrac{x_{B_{r_1}}}{y_{r_1u_1}}=\min_i\left\{\dfrac{x_{B_i}}{y_{iu_1}},\ y_{iu_1}>0\right\}$

(ii) Choose $x_{B_{r_2}}$ for which $\dfrac{x_{B_{r_2}}}{y_{r_2u_2}}=\min_i\left\{\dfrac{x_{B_i}}{y_{iu_2}},\ y_{iu_2}>0\right\}$

(iii) Choose $x_{B_{r_3}}$ for which $\dfrac{x_{B_{r_3}}}{y_{r_3u_3}}=\min_i\left\{\dfrac{x_{B_i}}{y_{iu_3}},\ y_{iu_3}>0\right\}$

Remark 4.2.1: Remark 4.1.1-Remark 4.1.3 are also applicable here.



Remark 4.2.2: In the calculation of $\hat x_{B_i}$ $(i=1,2,\dots,m)$ we replaced the corresponding column of
k by $x_{B_i}$ (column b), instead of only the first column as was done by Agrawal & Verma
[1], and thus avoid an unnecessary negative sign and hereby simplify the notation throughout
the chapter and henceforth.

Numerical example:

Maximize Z = 3x1 + 5x2 + 4x3

Subject to x1 + 3x2 ≤ 8
2x2 + 5x3 ≤ 10
3x1 + 2x2 + 4x3 ≤ 15
x1, x2, x3 ≥ 0

Adding slack variables to the constraints we get the initial table as follows:

Table-1 (Initial Table)

  cB   xB↓  |   x1    x2    x3    x4    x5    x6  | Constant b
       cj → |    3     5     4     0     0     0  |
   0   x4   |    1     3     0     1     0     0  |    8
   0   x5   |    0     2     5     0     1     0  |   10
   0   x6   |    3     2     4     0     0     1  |   15
  c̄j = cj − zj |  3     5     4     0     0     0  | Z = 0
            |   u3    u1    u2                    |

(The cB entries of the initial basis x4, x5, x6 are the zero profit coefficients of the
slack variables. The pivot elements are 3 in row x4, column x2; 5 in row x5, column x3;
and 3 in row x6, column x1, which were shown circled in the original table.)

Table-2 (Optimal Table)

  cB   xB↓  |   x1    x2    x3      x4       x5       x6   | Constant b
       cj → |    3     5     4       0        0        0   |
   5   x2   |    0     1     0    15/43     4/43    -5/43  |  85/43
   4   x3   |    0     0     1    -6/43     7/43     2/43  |  52/43
   3   x1   |    1     0     0    -2/43   -12/43    15/43  |  89/43
  c̄j = cj − zj |  0     0     0   -45/43   -12/43   -28/43 | Z = 900/43


Since in Table-2 all $\bar c_j=c_j-z_j\le 0$, this table gives the optimal solution to the
given linear programming problem. Therefore the optimal solution is

x1 = 89/43, x2 = 85/43, x3 = 52/43

with Zmax = 900/43.

Calculations

Step 1: (Choice of the entering variables into the basis)

Since in Table-1 the $c_j-z_j$ are positive for j = 1, 2, 3, we consider x1, x2, x3 to enter into the
basis.

Step 2: (Choice of the outgoing variables from the basis)

Considering the minimum ratio rule (as in Criterion-2) we see that x4, x5, x6 are going
to leave the basis. Also x1 replaces x6, x2 replaces x4 and x3 replaces x5. The
entries $y_{ij}$ at the intersections of the entering and leaving variables are called pivot
elements. The pivot elements here are 3 (row x4, column x2), 5 (row x5, column x3) and
3 (row x6, column x1), shown circled in the original table.

Step 3: (Formulation of K)

Formulate K with the entries $y_{ij}$ by the following formula:

$$k=\begin{vmatrix}y_{r_1u_1} & y_{r_1u_2} & y_{r_1u_3}\\ y_{r_2u_1} & y_{r_2u_2} & y_{r_2u_3}\\ y_{r_3u_1} & y_{r_3u_2} & y_{r_3u_3}\end{vmatrix}=\begin{vmatrix}3 & 0 & 1\\ 2 & 5 & 0\\ 2 & 4 & 3\end{vmatrix}=43$$

Step 4: (Formulation of the new table)

We first calculate the new basic variables, i.e. the components of the constant vector b.

$$\hat x_{B_{r_1}}=x_2=\frac{1}{k}\begin{vmatrix}x_{B_{r_1}} & y_{r_1u_2} & y_{r_1u_3}\\ x_{B_{r_2}} & y_{r_2u_2} & y_{r_2u_3}\\ x_{B_{r_3}} & y_{r_3u_2} & y_{r_3u_3}\end{vmatrix}=\frac{1}{43}\begin{vmatrix}8 & 0 & 1\\ 10 & 5 & 0\\ 15 & 4 & 3\end{vmatrix}=\frac{85}{43}$$

$$\hat x_{B_{r_2}}=x_3=\frac{1}{k}\begin{vmatrix}y_{r_1u_1} & x_{B_{r_1}} & y_{r_1u_3}\\ y_{r_2u_1} & x_{B_{r_2}} & y_{r_2u_3}\\ y_{r_3u_1} & x_{B_{r_3}} & y_{r_3u_3}\end{vmatrix}=\frac{1}{43}\begin{vmatrix}3 & 8 & 1\\ 2 & 10 & 0\\ 2 & 15 & 3\end{vmatrix}=\frac{52}{43}$$

$$\hat x_{B_{r_3}}=x_1=\frac{1}{k}\begin{vmatrix}y_{r_1u_1} & y_{r_1u_2} & x_{B_{r_1}}\\ y_{r_2u_1} & y_{r_2u_2} & x_{B_{r_2}}\\ y_{r_3u_1} & y_{r_3u_2} & x_{B_{r_3}}\end{vmatrix}=\frac{1}{43}\begin{vmatrix}3 & 0 & 8\\ 2 & 5 & 10\\ 2 & 4 & 15\end{vmatrix}=\frac{89}{43}$$

We now calculate the first column of Table-2, i.e. we calculate $\hat y_{i1}$; $i=1,2,3$.

Consider the part of the initial table:

      x2   x3   x1 |  y_i1
       3    0    1 |   1
       2    5    0 |   0
       2    4    3 |   3

Here $y_{11}=1$. Therefore

$$\hat y_{11}=\frac{1}{k}\begin{vmatrix}y_{11} & y_{r_1u_2} & y_{r_1u_3}\\ y_{21} & y_{r_2u_2} & y_{r_2u_3}\\ y_{31} & y_{r_3u_2} & y_{r_3u_3}\end{vmatrix}=\frac{1}{43}\begin{vmatrix}1 & 0 & 1\\ 0 & 5 & 0\\ 3 & 4 & 3\end{vmatrix}=0$$

Note: Since $y_{11}=1$ corresponds to the pivot element 3, which is in the first column of K, we
replace the first column of K by $(1,0,3)^T$.

Similarly

$$\hat y_{21}=\frac{1}{43}\begin{vmatrix}3 & 1 & 1\\ 2 & 0 & 0\\ 2 & 3 & 3\end{vmatrix}=0,\qquad
\hat y_{31}=\frac{1}{43}\begin{vmatrix}3 & 0 & 1\\ 2 & 5 & 0\\ 2 & 4 & 3\end{vmatrix}=1$$

Similarly we calculate other columns of Table-2.
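The determinant arithmetic above is easy to verify mechanically. A short Python sketch (our own illustration; the matrix K and the constants are taken from the example) reproduces k and the three new basic values of equation (4.11):

```python
from fractions import Fraction

def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

K = [[3, 0, 1], [2, 5, 0], [2, 4, 3]]   # pivot block [y_riuj] from Table-1
xB = [8, 10, 15]                        # current values x_Br1, x_Br2, x_Br3
k = det3(K)                             # = 43
thetas = []
for q in range(3):                      # replace column q of K by xB (eq. 4.11)
    Kq = [row[:] for row in K]
    for i in range(3):
        Kq[i][q] = xB[i]
    thetas.append(Fraction(det3(Kq), k))
print(k, thetas)   # k = 43; new values x2 = 85/43, x3 = 52/43, x1 = 89/43
```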

Remark 4.3.1: If m > (number of replacement variables), then the relations

$$\hat x_{B_i}=x_{B_i}-\left(y_{iu_1}\hat x_{B_{r_1}}+y_{iu_2}\hat x_{B_{r_2}}+y_{iu_3}\hat x_{B_{r_3}}\right)$$

$$\hat y_{ij}=y_{ij}-\left(y_{iu_1}\hat y_{r_1j}+y_{iu_2}\hat y_{r_2j}+y_{iu_3}\hat y_{r_3j}\right)$$

are used. Since m = 3 in the above example, these relations are not used here.
Similarly, Remark 4.1.1 and Remark 4.1.2 are not necessary for the above example.

Chapter 5

GENERALIZATION OF SIMPLEX METHOD FOR


SOLVING LINEAR PROGRAMMING PROBLEMS

5.1 P-Basic Variables Replacement Method for Solving (LP):


In this section, we have generalized the simplex method from the replacement of one basic
variable by a non-basic variable to the replacement of more than one (P, where P ≥ 1)
basic variables by non-basic variables at each iteration of the simplex method for
solving (LP).

5.1.1 Algorithm:
Let $\hat x_B$ be another basic feasible solution, where $\hat B=(\hat b_1,\hat b_2,\dots,\hat b_m)$ is the basis in
which $b_{r_1},b_{r_2},\dots,b_{r_P}$ are replaced by $a_{u_1},a_{u_2},\dots,a_{u_P}$ respectively of A but not in B.

The columns of $\hat B$ are given by

$\hat b_i=b_i$, for $i\ne r_1,r_2,\dots,r_P$

$\hat b_{r_1}=a_{u_1}$, $\hat b_{r_2}=a_{u_2}$, ..., $\hat b_{r_P}=a_{u_P}$
Then the new basic variables can be expressed in terms of the original ones and
$y_{iu_1},y_{iu_2},\dots,y_{iu_P}$,

i.e. $a_{u_1}=\sum_{i=1}^{m}y_{iu_1}b_i$

$$\Rightarrow y_{r_1u_1}b_{r_1}+y_{r_2u_1}b_{r_2}+\dots+y_{r_Pu_1}b_{r_P}=a_{u_1}-\sum_{i\ne r_1,\dots,r_P}^{m}y_{iu_1}b_i \qquad (5.1)$$

Similarly,

$$y_{r_1u_2}b_{r_1}+y_{r_2u_2}b_{r_2}+\dots+y_{r_Pu_2}b_{r_P}=a_{u_2}-\sum_{i\ne r_1,\dots,r_P}^{m}y_{iu_2}b_i \qquad (5.2)$$

$$y_{r_1u_3}b_{r_1}+y_{r_2u_3}b_{r_2}+\dots+y_{r_Pu_3}b_{r_P}=a_{u_3}-\sum_{i\ne r_1,\dots,r_P}^{m}y_{iu_3}b_i \qquad (5.3)$$

$$\vdots$$

$$y_{r_1u_P}b_{r_1}+y_{r_2u_P}b_{r_2}+\dots+y_{r_Pu_P}b_{r_P}=a_{u_P}-\sum_{i\ne r_1,\dots,r_P}^{m}y_{iu_P}b_i \qquad (5.4)$$

Solving the above P equations for $b_{r_1},b_{r_2},\dots,b_{r_P}$ we have,

writing $A_q=a_{u_q}-\sum_{i\ne r_1,\dots,r_P}^{m}y_{iu_q}b_i$, $q=1,2,\dots,P$, for brevity,

$$b_{r_1}=\frac{1}{k}\begin{vmatrix}A_1 & y_{r_2u_1} & y_{r_3u_1} & \cdots & y_{r_Pu_1}\\ A_2 & y_{r_2u_2} & y_{r_3u_2} & \cdots & y_{r_Pu_2}\\ \vdots & & & & \vdots\\ A_P & y_{r_2u_P} & y_{r_3u_P} & \cdots & y_{r_Pu_P}\end{vmatrix},\qquad
b_{r_2}=\frac{1}{k}\begin{vmatrix}y_{r_1u_1} & A_1 & y_{r_3u_1} & \cdots & y_{r_Pu_1}\\ y_{r_1u_2} & A_2 & y_{r_3u_2} & \cdots & y_{r_Pu_2}\\ \vdots & & & & \vdots\\ y_{r_1u_P} & A_P & y_{r_3u_P} & \cdots & y_{r_Pu_P}\end{vmatrix}$$

and in general $b_{r_q}$ is obtained by replacing the q-th column; in particular

$$b_{r_P}=\frac{1}{k}\begin{vmatrix}y_{r_1u_1} & y_{r_2u_1} & \cdots & A_1\\ y_{r_1u_2} & y_{r_2u_2} & \cdots & A_2\\ \vdots & & & \vdots\\ y_{r_1u_P} & y_{r_2u_P} & \cdots & A_P\end{vmatrix}$$

where

$$k=\begin{vmatrix}y_{r_1u_1} & y_{r_1u_2} & y_{r_1u_3} & \cdots & y_{r_1u_P}\\ y_{r_2u_1} & y_{r_2u_2} & y_{r_2u_3} & \cdots & y_{r_2u_P}\\ y_{r_3u_1} & y_{r_3u_2} & y_{r_3u_3} & \cdots & y_{r_3u_P}\\ \vdots & & & & \vdots\\ y_{r_Pu_1} & y_{r_Pu_2} & y_{r_Pu_3} & \cdots & y_{r_Pu_P}\end{vmatrix}$$
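For general P the same column-replacement computation only needs a P × P determinant routine. The sketch below (our own illustration, not the thesis' code) evaluates each $\theta_{u_q}$ by replacing the q-th column of the pivot block with the current basic values and dividing by k, and reproduces the P = 3 numbers of the Chapter 4 example:

```python
from fractions import Fraction

def det(M):
    """Determinant by Gaussian elimination with exact Fractions."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if M[r][c] != 0), None)
        if p is None:
            return Fraction(0)               # singular matrix
        if p != c:
            M[c], M[p] = M[p], M[c]          # row swap flips the sign
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return d

def p_replacement(K, xB):
    """theta_uq, q = 1..P: replace column q of the P x P pivot block K by
    the current basic values xB and divide by det K."""
    dK = det(K)
    thetas = []
    for q in range(len(K)):
        Kq = [row[:] for row in K]
        for i, v in enumerate(xB):
            Kq[i][q] = v
        thetas.append(det(Kq) / dK)
    return thetas

# P = 3 data from the numerical example of Chapter 4:
print(p_replacement([[3, 0, 1], [2, 5, 0], [2, 4, 3]], [8, 10, 15]))
# -> thetas 85/43, 52/43, 89/43
```

With P = 1 this reduces to the ordinary minimum-ratio pivot value of the standard simplex method, and with P = 2 or 3 it reproduces the formulas of Chapter 4.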

Now $x_B = B^{-1} b$, so

$$b = B x_B = \sum_{i=1}^{m} b_i x_{Bi} = \sum_{\substack{i=1 \\ i \neq r_1,\ldots,r_p}}^{m} b_i x_{Bi} + b_{r_1} x_{Br_1} + b_{r_2} x_{Br_2} + b_{r_3} x_{Br_3} + \cdots + b_{r_p} x_{Br_p}.$$

Substituting the determinant expressions obtained above for $b_{r_1}, \ldots, b_{r_p}$,

$$b = \sum_{i \neq r_1,\ldots,r_p} b_i x_{Bi} + \frac{x_{Br_1}}{k} D_1 + \frac{x_{Br_2}}{k} D_2 + \cdots + \frac{x_{Br_p}}{k} D_p,$$

where $D_t$ denotes the determinant in the expression for $b_{r_t}$. Expanding each $D_t$ along its $t$-th column (the column containing the terms $a_{u_j} - \sum_{i \neq r} y_{iu_j} b_i$) and collecting the coefficients of each $b_i$ and of each $a_{u_t}$, we obtain

$$b = \sum_{i \neq r_1,\ldots,r_p} b_i x_{Bi} - \sum_{i \neq r_1,\ldots,r_p} b_i \left( \frac{y_{iu_1}}{k} K_1 + \frac{y_{iu_2}}{k} K_2 + \cdots + \frac{y_{iu_p}}{k} K_p \right) + \frac{a_{u_1}}{k} K_1 + \frac{a_{u_2}}{k} K_2 + \cdots + \frac{a_{u_p}}{k} K_p,$$

where $K_t$ is the determinant $k$ with its $t$-th column replaced by the column $(x_{Br_1}, x_{Br_2}, \ldots, x_{Br_p})^{T}$. Hence

$$\Rightarrow\ b = \sum_{\substack{i=1 \\ i \neq r_1,\ldots,r_p}}^{m} b_i \hat{x}_{Bi} + a_{u_1} \hat{x}_{Br_1} + a_{u_2} \hat{x}_{Br_2} + a_{u_3} \hat{x}_{Br_3} + \cdots + a_{u_p} \hat{x}_{Br_p}$$
where

$$\hat{x}_{Bi} = x_{Bi} - \left( y_{iu_1} \hat{x}_{Br_1} + y_{iu_2} \hat{x}_{Br_2} + y_{iu_3} \hat{x}_{Br_3} + \cdots + y_{iu_p} \hat{x}_{Br_p} \right) \qquad (5.5)$$

$$\hat{x}_{Br_1} = \frac{1}{k}\begin{vmatrix} x_{Br_1} & y_{r_1 u_2} & y_{r_1 u_3} & \cdots & y_{r_1 u_p} \\ x_{Br_2} & y_{r_2 u_2} & y_{r_2 u_3} & \cdots & y_{r_2 u_p} \\ x_{Br_3} & y_{r_3 u_2} & y_{r_3 u_3} & \cdots & y_{r_3 u_p} \\ \vdots & \vdots & \vdots & & \vdots \\ x_{Br_p} & y_{r_p u_2} & y_{r_p u_3} & \cdots & y_{r_p u_p} \end{vmatrix} = \theta_{u_1} \ (\text{say})$$

$$\hat{x}_{Br_2} = \frac{1}{k}\begin{vmatrix} y_{r_1 u_1} & x_{Br_1} & y_{r_1 u_3} & \cdots & y_{r_1 u_p} \\ y_{r_2 u_1} & x_{Br_2} & y_{r_2 u_3} & \cdots & y_{r_2 u_p} \\ y_{r_3 u_1} & x_{Br_3} & y_{r_3 u_3} & \cdots & y_{r_3 u_p} \\ \vdots & \vdots & \vdots & & \vdots \\ y_{r_p u_1} & x_{Br_p} & y_{r_p u_3} & \cdots & y_{r_p u_p} \end{vmatrix} = \theta_{u_2} \ (\text{say})$$

Similarly,

$$\hat{x}_{Br_p} = \frac{1}{k}\begin{vmatrix} y_{r_1 u_1} & y_{r_1 u_2} & y_{r_1 u_3} & \cdots & x_{Br_1} \\ y_{r_2 u_1} & y_{r_2 u_2} & y_{r_2 u_3} & \cdots & x_{Br_2} \\ y_{r_3 u_1} & y_{r_3 u_2} & y_{r_3 u_3} & \cdots & x_{Br_3} \\ \vdots & \vdots & \vdots & & \vdots \\ y_{r_p u_1} & y_{r_p u_2} & y_{r_p u_3} & \cdots & x_{Br_p} \end{vmatrix} = \theta_{u_p} \ (\text{say}) \qquad (5.6)$$

Also,

$$x_{Br_1} = y_{r_1 u_1} \hat{x}_{Br_1} + y_{r_1 u_2} \hat{x}_{Br_2} + y_{r_1 u_3} \hat{x}_{Br_3} + \cdots + y_{r_1 u_p} \hat{x}_{Br_p}$$
$$x_{Br_2} = y_{r_2 u_1} \hat{x}_{Br_1} + y_{r_2 u_2} \hat{x}_{Br_2} + y_{r_2 u_3} \hat{x}_{Br_3} + \cdots + y_{r_2 u_p} \hat{x}_{Br_p}$$
$$x_{Br_3} = y_{r_3 u_1} \hat{x}_{Br_1} + y_{r_3 u_2} \hat{x}_{Br_2} + y_{r_3 u_3} \hat{x}_{Br_3} + \cdots + y_{r_3 u_p} \hat{x}_{Br_p} \qquad (5.7)$$
$$\vdots$$
$$x_{Br_p} = y_{r_p u_1} \hat{x}_{Br_1} + y_{r_p u_2} \hat{x}_{Br_2} + y_{r_p u_3} \hat{x}_{Br_3} + \cdots + y_{r_p u_p} \hat{x}_{Br_p}$$
64
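As a concrete check of formulas (5.5) and (5.6), the following sketch (an illustrative Python fragment with exact rational arithmetic, not part of the thesis's Mathematica program) evaluates the $p = 2$ case on the data of Example 6.1.1 from Chapter 6: the starting basis is the all-slack identity basis, so the $y$-entries are just the constraint-column entries; $x_2$ and $x_3$ enter while the first and fourth basic variables leave.

```python
from fractions import Fraction as F

y_u1 = [F(4), F(2), F(-3), F(1)]    # entering column a_u1 in basis coordinates
y_u2 = [F(-1), F(-3), F(0), F(3)]   # entering column a_u2 in basis coordinates
x_B = [F(8), F(7), F(2), F(4)]      # current basic solution
r1, r2 = 0, 3                       # rows (basic variables) leaving the basis

# k and the two Cramer determinants of Eq. (5.6)
k = y_u1[r1] * y_u2[r2] - y_u2[r1] * y_u1[r2]
theta1 = (x_B[r1] * y_u2[r2] - y_u2[r1] * x_B[r2]) / k   # new value of the first entering variable
theta2 = (y_u1[r1] * x_B[r2] - x_B[r1] * y_u1[r2]) / k   # new value of the second entering variable

# Eq. (5.5) for the basic variables that stay in the basis
x_hat = [x_B[i] - (y_u1[i] * theta1 + y_u2[i] * theta2)
         for i in range(4) if i not in (r1, r2)]
```

With these data the fragment reproduces $k = 13$, $\theta_{u_1} = 28/13$, $\theta_{u_2} = 8/13$ and the updated remaining basic values $59/13$ and $110/13$, matching the hand calculation carried out in Chapter 6.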
5.1.2 New Optimizing Value:

Substituting the new values of the variables in the objective function, we get the new objective function value as follows:

$$\hat{z} = \sum_{i=1}^{m} \hat{c}_{Bi} \hat{x}_{Bi} = \sum_{i \neq r_1,\ldots,r_p} \hat{c}_{Bi} \hat{x}_{Bi} + \hat{c}_{Br_1} \hat{x}_{Br_1} + \hat{c}_{Br_2} \hat{x}_{Br_2} + \hat{c}_{Br_3} \hat{x}_{Br_3} + \cdots + \hat{c}_{Br_p} \hat{x}_{Br_p}$$

where $\hat{c}_{Bi} = c_{Bi}$ for $i \neq r_1,\ldots,r_p$ and $\hat{c}_{Br_1} = c_{u_1},\ \hat{c}_{Br_2} = c_{u_2},\ \hat{c}_{Br_3} = c_{u_3},\ \ldots,\ \hat{c}_{Br_p} = c_{u_p}$. Using (5.5),

$$\hat{z} = \sum_{i \neq r_1,\ldots,r_p} c_{Bi}\left[ x_{Bi} - \left( y_{iu_1} \hat{x}_{Br_1} + y_{iu_2} \hat{x}_{Br_2} + \cdots + y_{iu_p} \hat{x}_{Br_p} \right)\right] + c_{u_1} \hat{x}_{Br_1} + c_{u_2} \hat{x}_{Br_2} + \cdots + c_{u_p} \hat{x}_{Br_p}$$

$$= \sum_{i=1}^{m} c_{Bi} x_{Bi} - c_{Br_1} x_{Br_1} - c_{Br_2} x_{Br_2} - \cdots - c_{Br_p} x_{Br_p} - \sum_{i \neq r_1,\ldots,r_p} c_{Bi} y_{iu_1} \hat{x}_{Br_1} - \sum_{i \neq r_1,\ldots,r_p} c_{Bi} y_{iu_2} \hat{x}_{Br_2} - \cdots - \sum_{i \neq r_1,\ldots,r_p} c_{Bi} y_{iu_p} \hat{x}_{Br_p} + c_{u_1} \hat{x}_{Br_1} + c_{u_2} \hat{x}_{Br_2} + \cdots + c_{u_p} \hat{x}_{Br_p}$$

Substituting for $x_{Br_1}, \ldots, x_{Br_p}$ from (5.7),

$$\hat{z} = z - c_{Br_1}\left( y_{r_1 u_1} \hat{x}_{Br_1} + y_{r_1 u_2} \hat{x}_{Br_2} + \cdots + y_{r_1 u_p} \hat{x}_{Br_p} \right) - c_{Br_2}\left( y_{r_2 u_1} \hat{x}_{Br_1} + y_{r_2 u_2} \hat{x}_{Br_2} + \cdots + y_{r_2 u_p} \hat{x}_{Br_p} \right) - \cdots - c_{Br_p}\left( y_{r_p u_1} \hat{x}_{Br_1} + y_{r_p u_2} \hat{x}_{Br_2} + \cdots + y_{r_p u_p} \hat{x}_{Br_p} \right) - \sum_{i \neq r_1,\ldots,r_p} c_{Bi} y_{iu_1} \hat{x}_{Br_1} - \cdots - \sum_{i \neq r_1,\ldots,r_p} c_{Bi} y_{iu_p} \hat{x}_{Br_p} + c_{u_1} \hat{x}_{Br_1} + \cdots + c_{u_p} \hat{x}_{Br_p}$$

$$= z - \sum_{i=1}^{m} c_{Bi} y_{iu_1} \hat{x}_{Br_1} - \sum_{i=1}^{m} c_{Bi} y_{iu_2} \hat{x}_{Br_2} - \sum_{i=1}^{m} c_{Bi} y_{iu_3} \hat{x}_{Br_3} - \cdots - \sum_{i=1}^{m} c_{Bi} y_{iu_p} \hat{x}_{Br_p} + c_{u_1} \hat{x}_{Br_1} + c_{u_2} \hat{x}_{Br_2} + c_{u_3} \hat{x}_{Br_3} + \cdots + c_{u_p} \hat{x}_{Br_p}$$

$$\therefore\ \hat{z} = z + \left(c_{u_1} - z_{u_1}\right) \hat{x}_{Br_1} + \left(c_{u_2} - z_{u_2}\right) \hat{x}_{Br_2} + \left(c_{u_3} - z_{u_3}\right) \hat{x}_{Br_3} + \cdots + \left(c_{u_p} - z_{u_p}\right) \hat{x}_{Br_p}$$

where $z_{u_t} = \sum_{i=1}^{m} c_{Bi} y_{iu_t}$.
5.1.3 Optimality Condition:


The value of the objective function will improve if $\hat{z} > z$, i.e.

$$z + \left(c_{u_1} - z_{u_1}\right) \hat{x}_{Br_1} + \left(c_{u_2} - z_{u_2}\right) \hat{x}_{Br_2} + \left(c_{u_3} - z_{u_3}\right) \hat{x}_{Br_3} + \cdots + \left(c_{u_p} - z_{u_p}\right) \hat{x}_{Br_p} > z$$

$$\Rightarrow\ \left(c_{u_1} - z_{u_1}\right) \hat{x}_{Br_1} + \left(c_{u_2} - z_{u_2}\right) \hat{x}_{Br_2} + \left(c_{u_3} - z_{u_3}\right) \hat{x}_{Br_3} + \cdots + \left(c_{u_p} - z_{u_p}\right) \hat{x}_{Br_p} > 0$$

But in the non-degenerate case $\hat{x}_{Br_1}, \hat{x}_{Br_2}, \hat{x}_{Br_3}, \ldots, \hat{x}_{Br_p} > 0$. Hence we must have

(i) $c_{u_1} - z_{u_1} > 0$
(ii) $c_{u_2} - z_{u_2} > 0$
(iii) $c_{u_3} - z_{u_3} > 0$
Similarly (iv) $c_{u_p} - z_{u_p} > 0$
In general c j -z j >0
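To make the improvement formula $\hat{z} = z + \sum_t (c_{u_t} - z_{u_t})\,\hat{x}_{Br_t}$ concrete, here is a small numeric check (an illustrative Python fragment with exact fractions, not part of the thesis program). The numbers come from the two-variable replacement step of Example 6.1.1 in Chapter 6, where the starting basis consists of slack variables with zero costs, so $z = 0$ and $z_{u_1} = z_{u_2} = 0$.

```python
from fractions import Fraction as F

# New values of the two entering variables and of the basic variables that stay
theta1, theta2 = F(28, 13), F(8, 13)
x_hat_stay = [F(59, 13), F(110, 13)]          # costs of these (slack) variables are 0
z, c_u1, c_u2, z_u1, z_u2 = F(0), F(5), F(3), F(0), F(0)

# Improvement formula versus direct evaluation of the new objective value
z_hat_formula = z + (c_u1 - z_u1) * theta1 + (c_u2 - z_u2) * theta2
z_hat_direct = c_u1 * theta1 + c_u2 * theta2 + 0 * x_hat_stay[0] + 0 * x_hat_stay[1]
```

Both expressions give $164/13$, the optimal value reported for that example in Chapter 6.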

5.1.4 Criterion-1: (Choices of the entering variables into the basis):

(i) Choose the $u_1$-th column of A for which $c_{u_1} - z_{u_1}$ is the greatest positive of $c_j - z_j$, j = 1, 2, ..., n.

(ii) Choose the $u_2$-th column of A for which $c_{u_2} - z_{u_2}$ is the greatest positive of $c_j - z_j$, j = 1, 2, ..., n, j ≠ u₁.

(iii) Choose the $u_3$-th column of A for which $c_{u_3} - z_{u_3}$ is the greatest positive of $c_j - z_j$, j = 1, 2, ..., n, j ≠ u₁, u₂.

Similarly,

(iv) Choose the $u_p$-th column of A for which $c_{u_p} - z_{u_p}$ is the greatest positive of $c_j - z_j$, j = 1, 2, ..., n, j ≠ u₁, u₂, u₃, ..., u₍p₋₁₎.
66
5.1.5 Criterion-2: (Choices of the outgoing variables from the basis):

(i) Choose $x_{Br_1}$ for which $\dfrac{x_{Br_1}}{y_{r_1 u_1}} = \min_i \left\{ \dfrac{x_{Bi}}{y_{iu_1}},\ y_{iu_1} > 0 \right\}$

(ii) Choose $x_{Br_2}$ for which $\dfrac{x_{Br_2}}{y_{r_2 u_2}} = \min_i \left\{ \dfrac{x_{Bi}}{y_{iu_2}},\ y_{iu_2} > 0 \right\}$

(iii) Choose $x_{Br_3}$ for which $\dfrac{x_{Br_3}}{y_{r_3 u_3}} = \min_i \left\{ \dfrac{x_{Bi}}{y_{iu_3}},\ y_{iu_3} > 0 \right\}$

Similarly,

(iv) Choose $x_{Br_p}$ for which $\dfrac{x_{Br_p}}{y_{r_p u_p}} = \min_i \left\{ \dfrac{x_{Bi}}{y_{iu_p}},\ y_{iu_p} > 0 \right\}$
Remark 5.1.1: Remark 4.1.1–Remark 4.1.3 are also applicable here.

Remark 5.1.2: In calculating $\hat{x}_{Br_t}$ (t = 1, 2, ..., p), we replace the corresponding (t-th) column of k by the column $(x_{Br_1}, \ldots, x_{Br_p})$ instead of always the first column, as was done by Agrawal & Verma [I]; this avoids unnecessary negative signs and thereby simplifies the notation throughout this chapter and henceforth.

5.2 The Combined Algorithm:


In this section, we develop our combined algorithm for solving LP problems.
The algorithmic steps are presented below.

Step 1: Define the types of the constraints and express the problem in its standard
form.

Step 2: Start with an initial feasible solution in canonical form and set up initial table.

Step 3: Use the inner product rule to find the relative profit factors c̄j as follows:

c̄j = cj − zj = cj − (inner product of cB and the column corresponding to xj in the canonical system).

Step 4: If all c̄j ≤ 0 (maximization), the current basic feasible solution is optimal: stop. If there is a single c̄j > 0, perform a one-variable replacement: go to Step 6. Otherwise go to Step 5.

67
Step 5:

Substep 1: Select the non-basic variables with the most and second most positive c̄j to enter the basis.

Substep 2: Choose the P outgoing variables from the basis by the minimum ratio test. If the selected columns give the same minimum ratio in more than one row, choose distinct rows.

Substep 3: Perform P basic variable replacement operations to get the next simplex table.

Substep 4: Go to Step 4.

Step 6: Select the non basic variable to enter the basis.

Substep 1: Choose the out going variable from the basis by minimum ratio test.

Substep 2: Perform the pivot operation to get the table and basic feasible solution.

Substep 3: Go to Step 4.

Step 7: If any c j corresponding to non basic variable is zero, take this column as pivot

column (for alternative solution) and go to Step 6.
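The single-replacement core of the algorithm (Steps 3, 4 and 6: the inner product rule, the optimality test and the minimum ratio test) can be sketched as follows. This is an illustrative Python tableau implementation, not the thesis's Mathematica program; it assumes a bounded maximization problem with all ≤ constraints and nonnegative right-hand sides, so no artificial variables are needed.

```python
from fractions import Fraction as F

def simplex_max(A, b, c):
    """Tableau simplex (single-variable replacement) for:
    maximize c.x subject to A x <= b, x >= 0, with b >= 0.
    Exact rational arithmetic throughout; assumes a bounded problem."""
    m, n = len(A), len(A[0])
    # Tableau rows: [A | I | b]; the slack variables start in the basis.
    T = [[F(A[i][j]) for j in range(n)]
         + [F(1) if i == k else F(0) for k in range(m)]
         + [F(b[i])] for i in range(m)]
    cbar = [F(cj) for cj in c] + [F(0)] * (m + 1)   # relative profits; last slot holds -Z
    basis = list(range(n, n + m))
    while True:
        pcol = max(range(n + m), key=lambda j: cbar[j])   # entering: most positive cbar_j
        if cbar[pcol] <= 0:                               # all cbar_j <= 0: optimal
            break
        ratios = [(T[i][-1] / T[i][pcol], i) for i in range(m) if T[i][pcol] > 0]
        _, prow = min(ratios)                             # leaving: minimum ratio test
        piv = T[prow][pcol]
        T[prow] = [x / piv for x in T[prow]]              # pivot operation
        for i in range(m):
            if i != prow and T[i][pcol] != 0:
                f = T[i][pcol]
                T[i] = [a - f * p for a, p in zip(T[i], T[prow])]
        f = cbar[pcol]
        cbar = [a - f * p for a, p in zip(cbar, T[prow])]
        basis[prow] = pcol
    x = [F(0)] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, -cbar[-1]

# Example 6.1.2 of Chapter 6: max 3x1 + 2x2 s.t. 2x1+x2<=4, -3x1+5x2<=15, 3x1-x2<=3
x_opt, z_opt = simplex_max([[2, 1], [-3, 5], [3, -1]], [4, 15, 3], [3, 2])
```

Running it on Example 6.1.2 of Chapter 6 reproduces x₁ = 5/13, x₂ = 42/13 with Z = 99/13, agreeing with the graphical and tableau solutions given there.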

5.3 Mathematica Codes:

Now we present our combined program in the programming language Mathematica (Eugere, Wolfram). The program is written in Mathematica 5.2 for Students. It uses eight module functions: maketble[t_], rowoperation[t_], morebasic[t_], onebsop[t_], alter[t_], morebsop[t_], main[morebasic_] and vinpt[m_,n_]. The function vinpt[m_,n_] takes the input: it asks the user for the number of rows, the number of columns, the number of greater-than type constraints, the coefficient matrix row by row, the right-hand-side constants, the cost vector, and the type of each constraint, e.g. 'l' for less-than type, 'g' for greater-than type and 'e' for equality type constraints respectively. The program is case sensitive and minimizes the tedious work of entering data by generating the slack or artificial variables itself. The function maketble[t_] builds the tables, and the function rowoperation[t_] performs all calculations needed for a single variable replacement. The module function morebasic[t_] is used for more than one basic variable replacement in a single iteration. If a simplex table ends with only one positive c̄j, then to handle the problem with a single variable replacement we have introduced the function onebsop[t_], which controls all the necessary operations for that case. The function alter[t_] identifies alternative solutions (if any) in either single or multiple basic variable replacements. The module function morebsop[t_] does the preparatory work for the function morebasic[t_]. Finally, the function main[morebasic_] calls all the functions discussed above and controls the program.

5.3.1 The combined program in Mathematica (Eugere, Wolfram):

vinpt[m_,n_]:=Module[{},
For[i=1;str={},i≤m,i++,
str=Append[str,InputString["Input type of constraints"]] ];
cstr={};
t=Table[ Input["Enter row elements"],{i,1,m},{j,1,n}];
tb=Transpose[t];rhs=Table[ Input["Right hand Constant"],{i,1,m}];
ceff=Table[ Input["Cost Vector"],{i,1,n}];tbcef=ceff;
For[ i=1;cstr={};bindx={},i≤m,i++,
If[ StringMatchQ[ str[[i]],"l"]==True,
cstr=Append[cstr,Subscript[S,i]];For[k=1;s={},k≤m,k++,
If[i==k, s=Append[s,1],s=Append[s,0]] ];
tb=Append[tb,s];bindx=Append[bindx,Length[tb]];
ceff=Append[ceff,0];tbcef=Append[tbcef,0],
If[ StringMatchQ[ str[[i]],"g"]==True,
cstr=Append[cstr,Subscript[S,i]];
cstr=Append[cstr,Subscript[A,i]];For[k=1;s={};a={},k≤m,k++,
If[ i==k,s=Append[s,-1];
a=Append[a,1],s=Append[s,0];a=Append[a,0] ] ];
tb=Append[tb,s];tb=Append[tb,a];
bindx=Append[bindx,Length[tb] ];
ceff=Append[ceff,0];
ceff=Append[ceff,-10^10];tbcef=Append[tbcef,0];
tbcef=Append[tbcef,-M],
cstr=Append[cstr,Subscript[A,i]];
For[k=1;a={},k≤m,k++,If[ i==k,a=Append[a,1],
a=Append[a,0] ]];tb=Append[tb,a];
bindx=Append[bindx,Length[tb]];
ceff=Append[ceff,-10^10];tbcef=Append[tbcef,-M] ];
] ];
For[j=n,j≥1,j--,cstr=Prepend[cstr,Subscript[X,j]] ];
tble=Transpose[tb];
Off[General::spell]
]
maketble[t_]:=Module[{},
For[j=1;coount={},j≤m+n+pp,j++,
coount=Append[coount,j]];
fb={"Cj","Basis","Cj-Zj"};fcj={"","CB","Cj"};
fr={"RHS","--","Z"};
For[i=1;cb={};tcbf={};cbv={};B={},i≤m,i++,
For[j=1,j≤m+n+pp,j++,
If[bindx[[i]]==coount[[j]],cb=Append[cb,cstr[[j]] ];
cbv=Append[cbv,ceff[[j]] ];
tcbf=Append[tcbf,tbcef[[j]] ];B=Append[B,tb[[j]] ], ];
];fb=Insert[ fb,cb[[i]],i+2];
fcj=Insert[fcj,tcbf[[i]],i+2];fr=Insert[fr,rhs[[i]],i+2]; ];
fr=ReplacePart[fr,tcbf.rhs,-1];B=Transpose[B];
For[ i=1;fbcjr={};cjbar={},i≤m+n+pp,i++,
cjbar=Append[ cjbar,ceff[[i]]-cbv.Inverse[B].tb[[i]] ];
fbcjr=Append[ fbcjr,( tbcef[[i]]-tcbf.Inverse[B].tb[[i]])
//Simplify ]; ];
tbfom=Prepend[ tble,cstr];
tbfom=Prepend[tbfom,tbcef];tbfom=Append[ tbfom,fbcjr];
tbfom2=Prepend[Transpose[tbfom],fb];
tbfom2=Prepend[tbfom2,fcj];tbfom2=Append[tbfom2,fr];hed++;
Print["  Table ",hed,"  "];
Print[];
Print[TableForm[ Transpose[tbfom2],
TableAlignments→Center,TableSpacing->{1,3}]];
Print["----------------------------------------------------------------"];
Print[];
For[i=1;nofe=0,i≤m,i++,If[tcbf[[i]]==-M,nofe=1]];
If[ Max[cjbar]>0,Print["Feasible Solution = ",tcbf.rhs],
Print["Solution Point"];
For[i=1;k=0,i≤m+n+pp,i++,For[j=1,j≤m,j++,
If[i==bindx[[j]],
Print[ cb[[j]]," = ",rhs[[j]]," (Basic Variable)" ];k=1 ] ];
If[k==1,,
Print[cstr[[i]]," = 0  (Non Basic Variable)" ] ];k=0];
If[nofe==0,Print["All Cj ≤ 0 & Optimal Value = ",tcbf.rhs],
Print["Though all Cj ≤ 0, but no feasible solution"]] ];
Off[General::spell]
]
morebasic[t_]:=Module[{},
If[ Max[cjbar]>0,p=u[1];For[j=1,j≤2,j++,
For[i=1;teta={},i≤m,i++,If[y[[i,p]]>0,
teta=Append[teta,rhs[[i]]/y[[i,p]]],
teta=Append[teta,10^6]]; ];
If[Min[teta]≠10^6,rr=Position[teta,Min[teta]][[1,1]];rc[j]=Position[teta,Min[teta]],
Print["Ratio with ",p,"th column is not possible"];
s=1;pd=u[j];Goto["end"]];
r[j]=rr;p=u[2];];
If[r[1]==r[2]&&Length[rc[1]]>Length[rc[2]],r[1]=rc[1][[2,1]]];
If[r[1]==r[2]&&Length[rc[1]]<Length[rc[2]],r[2]=rc[2][[2,1]]];
k=Transpose[{{y[[ r[1],u[1] ]],y[[ r[1],u[2] ]]},{y[[ r[2],u[1] ]],y[[ r[2],u[2] ]]}}];
rh={rhs[[ r[1] ]],rhs[[ r[2] ]]};
For[j=1,j≤2,j++,
kop=Transpose[ReplacePart[k,rh,j]];
rhs[[ r[j] ]]=(1/Det[k])*Det[kop]; ];
For[i=1,i≤m,i++,If[i≠r[1]&&i≠r[2],rhs[[i]]=rhs[[i]]-
(y[[i,u[1] ]]*rhs[[ r[1] ]]+y[[i,u[2] ]]*rhs[[ r[2] ]]) ] ];
For[ i=1,i≤m+n+pp,i++,yrep={y[[ r[1],i ]],y[[ r[2],i ]]};
For[j=1,j≤2,j++,kop=Transpose[ReplacePart[k,yrep,j]];
yy[j]=(1/Det[k])*Det[kop]; ];
tble[[ r[1],i ]]=yy[1];
tble[[ r[2],i ]]=yy[2];
For[p=1,p≤m,p++,If[p≠r[1]&&p≠r[2],tble[[ p,i ]]=y[[p,i]]-
(y[[ p,u[1] ]]*yy[1]+y[[ p,u[2] ]]*yy[2]) ]]
];Label["end"]; ];]
rowoperation[t_]:=Module[{},
If[ Max[cjbar]>0,
For[i=1;teta={},i≤m,i++,If[ tble[[i,pcol]]>0,
teta=Append[teta,rhs[[i]]/tble[[i,pcol]]],
teta=Append[teta,10^6] ]; ];
If[Min[teta]==10^6,Print["Ratio is not possible; Unbounded Solution"];
st=1;Goto["end"]];
pro=Position[teta,Min[teta]][[1,1]];
rhs[[pro]]=rhs[[pro]]/tble[[pro,pcol]];
tble[[pro]]=tble[[pro]]*(1/tble[[pro,pcol]]);
For[i=1,i≤m,i++,
If[i==pro,,rhs[[i]]=rhs[[i]]-tble[[i,pcol]]*rhs[[pro]];
tble[[i]]=tble[[i]]-tble[[i,pcol]]*tble[[pro]]; ] ], ];
Label["end"];
Off[General::spell]
]
onebsop[t_]:=Module[{},Print["One basic var replacement"];Print[];While[
Max[cjbar]> 0,
pcol=Position[cjbar, Max[cjbar]][[1,1]];rowoperation[tble];
If[st≠ 1,bindx=ReplacePart[bindx, coount[[pcol]],pro];
maketble[tble], Return[] ] ];
]
alter[t_]:=Module[{},For[i=1;nofe=0,i≤m,i++, If[bindx[[i]]==10^10,nofe=1]];
nbindx=Complement[coount,bindx]; For[i=1;alt=0,i≤Length[nbindx],i++,
If[ cjbar[[nbindx[[i]]]]==0,alt=1;
Print["Alternative Solution"]; pcol=nbindx[[i]];
cjbar=ReplacePart[cjbar,10^6,pcol];rowoperation[tble];
If[st≠1,bindx=ReplacePart[bindx,coount[[pcol]],pro];
maketble[tble], Goto["lst"] ], ];Label["lst"]; ];
If[alt==0,Print["No Alternative Solution"]];
]
morebsop[t_]:=Module[{},tr=0;s=0;
Print["More than one basic var replacement"];Print[];
While[Max[cjbar]>0,
u1=Max[cjbar];u[1]=Position[cjbar,Max[cjbar]][[1,1]];
cjbar=ReplacePart[cjbar,0,u[1]];u2=Max[cjbar];
u[2]=Position[cjbar,Max[cjbar]][[1,1]];
If[u1>0&&u2>0,
cjbar=ReplacePart[cjbar,ceff[[ u[1] ]],u[1]];y=tble;morebasic[y];
If[st≠1&&s==0,For[i=1,i≤2,i++,
bindx=ReplacePart[bindx,coount[[ u[i] ]],r[i] ]]; maketble[tble],
Print["More than one basic var replacement not possible"];Return[]],
Print["After that more than one basic var replacement not possible"];
cjbar=ReplacePart[cjbar,0,u[1]];tr=1 ] ];
If[tr==1,cjbar=ReplacePart[cjbar,u1,u[1]];
onebsop[tble];alter[tble],alter[tble]];
]
main[morebasic_]:=Module[{},Clear["Context`*"];
m=Input["No of Rows"];n=Input["No of Columns"];
pp=Input["No of >= constraints"];hed=0;st=0;
vinpt[m,n];maketble[tble];
d=Input["Choose method \n'1' for one basic var \n '2' for more basic var"];
If[d==2,morebsop[tble],onebsop[tble];alter[tble]];
If[s==1,onebsop[tble];alter[tble]];
]
Clear[u,r,y]
main[morebasic];

73
5.3.2 Numerical Examples and Comparison:

In this section, we compare the results obtained by our method with those of Dantzig's method and show the differences between the methods with illustrative numerical examples. Our method takes fewer iterations than Dantzig's one-basic-variable replacement method. The main shortcoming of Paranjape's method is that it fails if a simplex table ends with exactly one positive c̄j; our method overcomes this problem easily.

Example 1:

This example is taken from Paranjape.

$$\max Z = -15x_1 + 25x_2 + 15x_3 - 30x_4 + 10x_5 - 40x_7 - 10x_9$$
subject to
$$-\tfrac{1}{2}x_1 + \tfrac{1}{2}x_2 + \tfrac{1}{2}x_3 \le 0$$
$$-\tfrac{1}{4}x_1 + \tfrac{3}{4}x_2 - \tfrac{1}{4}x_3 \le 0$$
$$-\tfrac{3}{4}x_4 + \tfrac{1}{4}x_5 + \tfrac{1}{4}x_6 \le 0$$
$$-\tfrac{1}{2}x_4 + \tfrac{1}{2}x_5 - \tfrac{1}{2}x_6 \le 0$$
$$x_1 + x_4 + x_7 \le 100$$
$$x_2 + x_5 + x_8 \le 100$$
$$x_3 + x_6 + x_9 \le 60$$
$$x_i \ge 0$$

The above LP takes four iterations (excluding the initial table) in Dantzig's method, whereas it takes only two iterations (excluding the initial table) in our method. Using our program, we input 7, 9, 0 respectively to indicate that the LP has 7 constraints with 9 variables and no greater-than type constraints. If there are any greater-than type constraints, the number of those constraints is input instead. We input 'l' seven times to indicate that all constraints are of less-than type, together with the coefficient matrix 'A', the right-hand-side constants 'b' and the cost coefficients 'C'. The program generates the required number of slack variables.

74
We obtained the optimal solution of the above problem after four iterations (excluding the initial table) by Dantzig's single variable replacement method. We solved the same problem by our method and obtained the optimal solution after two iterations (excluding the initial table). We see that our method reduces the number of iterations by 50%. The optimal tables of the one basic variable replacement method and of our method are as follows.

Optimal table of one basic variable replacement method:

Remark 1: The Table number refers to the number of iterations.

Optimal table of our method:

Example 2:

We now show the failure of Paranjape's method. For the following LP, Paranjape's method fails after one iteration because only one positive c̄j remains at that point, as shown in Table 2, whereas our method solves the same problem effectively; the result is shown in Table 4.

75
Max Z = 2 x1 + 3 x 2 + 4 x3

s / t x1 + x 2 + x3 ≥ 5
x1 + 2 x 2 = 7
5 x1 − 2 x 2 + 3 x3 ≤ 9
x1 , x 2 , x3 ≥ 0

The second table of Paranjape’s method:

Optimal table of our method:

5.4 Solution of LP on a production problem of a garment


industry (Standard Group) using combined program:

Now, applying the above program to solve the production problem of the
garment industry (Standard Group) formed in section 1.8 of Chapter-1, we may
rearrange the computer solution in the following way:

Z= 5837.68

x 1 =0.0 x 2 =0.0

x 3 =0.0 x 4 =0.0

x 5 =0.0 x 6 =0.0 x 7 = 0.0

x 8 =0.0 x 9 =0.0 x 10 =0.0


x 11 = 0.0 x 12 =446.377 x 13 = 563.768

76
Illustrated Answer:

Fabric-1

Men's long sleeve shirt: should not be produced.

Men's short sleeve shirt: should not be produced.

Men's long pant: should not be produced.

Men's shorts: should not be produced.

Ladies long pant: should not be produced.

Ladies shorts: should not be produced.

Boys long pant: should not be produced.

Boys shorts: should not be produced.

Men's boxer: should not be produced.

Men's fleece jacket: should not be produced.

Men's jacket: should not be produced.

Ladies jacket: 446.377 pieces of the RMG item Ladies jacket should be produced.

Boys jacket: 563.768 pieces of the RMG item Boys jacket should be produced.

Maximum Profit

Maximum total profit $ 5837.68 only per day.

Note: If we solve the problem by Dantzig's one-variable replacement method it takes 10,636 iterations, which is very time consuming by hand calculation. By applying our combined program we can easily solve these types of large-scale real-life problems.

5.5 Solution of LP on Textile Mill Scheduling problem


using combined program:

Now, applying the above program to solve the Textile Mill Scheduling
problem formed in section 1.9 of Chapter-1, we may rearrange the computer solution in
the following way:

Z= 62286.45

x 11 =4181.82 x 12 =12318.18

x 21 =22000.00 x 22 =0.0

x 31 =0.0 x 32 =26100.0 x 33 = 35900.00

x 41 =0.0 x 42 =7500.0 x 43 =0.0


x 51 = 0.0 x 52 =26100.0 x 53 = 0.0

Illustrated Answer:

Fabric-1

Should be woven 4181.82 yards on dobbie loom

Should be purchased 12318.18 yards from another mill

78
Fabric-2

Should be woven 22000.00 yards on dobbie loom

Should not be purchased from another mill.

Fabric-3

Should not be woven on dobbie loom

Should be woven 26100.00 yards on regular loom

Should be purchased 35900.00 yards from another mill

Fabric-4

Should not be woven on dobbie loom

Should be woven 7500.00 yards on regular loom

Should not be purchased from another mill.

Fabric-5

Should not be woven on dobbie loom

Should be woven 26100.00 yards on regular loom

Should not be purchased from another mill

Maximum Profit

Maximum monthly total profit $ 62286.45 only.

Note: If we solve the problem by Dantzig's one-variable replacement method it takes 10,835 iterations, which is not possible to solve by hand calculation. By applying our combined program we can easily solve these types of large-scale real-life problems.

5.6 Conclusion:
In this chapter, we compared the results obtained by our method with those of Dantzig's method and showed the differences between the methods with illustrative numerical examples. Our method takes fewer iterations than Dantzig's one-basic-variable replacement method. The main shortcoming of Paranjape's method is that it fails if a simplex table ends with one positive c̄j; our method overcomes this problem easily. We obtained the optimal solution of Example 1 after four iterations (excluding the initial table) by Dantzig's single variable replacement method, and after two iterations (excluding the initial table) by our method; our method thus reduces the number of iterations by 50%. Moreover, in Example 2 we have shown that Paranjape's method fails after one iteration because only one positive c̄j exists at that point, as shown in Table 2, whereas our method solves the same problem effectively. We can therefore say that our method is more effective than the other methods considered.

80
Chapter 6

COUNTER EXAMPLES OF MORE THAN ONE BASIC


VARIABLES REPLACEMENT AT EACH ITERATION OF
SIMPLEX METHOD

6.1 Introduction:

We discussed the two- and three-basic-variable replacement methods of Paranjape and of Agrawal and Verma for solving linear programming problems (LP) in Chapter 4.

In this chapter we illustrate some numerical examples to compare the different methods for solving all kinds of linear programming problems by replacing more than one basic variable at each simplex iteration. For clarity we first solve the same example by the graphical method and also by the usual simplex method of Dantzig. We also generalize the claim to the more-than-one-basic-variable replacement methods for solving LP.

6.1.1 Numerical Example 1:

Maximize Z = −2x₁ + 5x₂ + 3x₃

Subject to 2x₁ + 4x₂ − x₃ ≤ 8
2x₁ + 2x₂ − 3x₃ ≤ 7
x₁ − 3x₂ ≤ 2
4x₁ + x₂ + 3x₃ ≤ 4
x₁, x₂, x₃ ≥ 0

Solution of the above problem in usual simplex method:


Adding slack variables x 4 , x 5, x 6 , x 7 to the constraints we get the initial basic
feasible solution as the following table:
Table-1 (Initial Table)

cB ↓   Cj →     -2    5    3    0    0    0    0   Constant
       XB ↓     x1   x2   x3   x4   x5   x6   x7      b
0      x4        2    4   -1    1    0    0    0      8
0      x5        2    2   -3    0    1    0    0      7
0      x6        1   -3    0    0    0    1    0      2
0      x7        4    1    3    0    0    0    1      4
c̄j = cj − zj    -2    5    3    0    0    0    0     Z=0

Table-2

cB ↓   Cj →     -2    5     3     0    0    0    0   Constant
       XB ↓     x1   x2    x3    x4   x5   x6   x7      b
5      x2      1/2    1  -1/4   1/4    0    0    0      2
0      x5        1    0  -5/2  -1/2    1    0    0      3
0      x6      5/2    0  -3/4   3/4    0    1    0      8
0      x7      7/2    0  13/4  -1/4    0    0    1      2
c̄j = cj − zj  -9/2    0  17/4  -5/4    0    0    0    Z=10

Table-3(Optimal Table)

cB ↓   Cj →      -2    5    3     0    0    0     0   Constant
       XB ↓      x1   x2   x3    x4   x5   x6    x7      b
5      x2     10/13    1    0  3/13    0    0  1/13    28/13
0      x5     48/13    0    0 -9/13    1    0 10/13    59/13
0      x6     43/13    0    0  9/13    0    1  3/13   110/13
3      x3     14/13    0    1 -1/13    0    0  4/13     8/13
c̄j = cj − zj    -ve    0    0   -ve    0    0   -ve  Z=164/13

82
Since in Table – 3 all cj-zj≤ 0, this table gives the optimal solution to the given
problem. Therefore the optimal solution to the given problem is
x 1= 0 , x 2 = 28/13 , x 3 = 8/13 , with Zmax =164/13.
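The reported optimum can be sanity-checked with exact arithmetic; the following illustrative Python fragment (not part of the thesis program) verifies feasibility and the objective value:

```python
from fractions import Fraction as F

# Reported optimum of the problem above: x1 = 0, x2 = 28/13, x3 = 8/13
x = [F(0), F(28, 13), F(8, 13)]
constraints = [([2, 4, -1], 8), ([2, 2, -3], 7), ([1, -3, 0], 2), ([4, 1, 3], 4)]
feasible = all(sum(F(a) * xi for a, xi in zip(row, x)) <= rhs
               for row, rhs in constraints)
z = -2 * x[0] + 5 * x[1] + 3 * x[2]   # objective value at the reported optimum
```

The point satisfies all four constraints (the first and fourth with equality) and gives z = 164/13, as in Table 3.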

Solution of example 6.1.1 by Paranjape's Two-Basic-Variables Replacement Method:

We now want to solve the above problem by Paranjape’s method. The initial table of the
problem is as follows:

Table-1 (Initial Table)

cB Cj → -2 5 3 0 0 0 0 Constant
↓ x1 x2 x3 x4 x5 x6 x7
b
X B↓
0 x4 2 4 -1 1 0 0 0 8
0 x5 2 2 -3 0 1 0 0 7
0 x6 1 -3 0 0 0 1 0 2
0 x7 4 1 3 0 0 0 1 4
− -2 5 3 0 0 0 0 Z=0
c j = cj − zj

Here we see that x₂ replaces x₄ and x₃ replaces x₇.

The solution of the above problem is as follows:

Optimal Table

cB ↓   Cj →      -2    5    3     0    0    0     0   Constant
       XB ↓      x1   x2   x3    x4   x5   x6    x7      b
5      x2     10/13    1    0  3/13    0    0  1/13    28/13
0      x5     48/13    0    0 -9/13    1    0 10/13    59/13
0      x6     43/13    0    0  9/13    0    1  3/13   110/13
3      x3     14/13    0    1 -1/13    0    0  4/13     8/13
c̄j = cj − zj    -ve    0    0   -ve    0    0   -ve  Z=164/13

83
Since in the above table all cj-zj≤ 0, this table gives the optimal solution. Therefore
the optimal solution to the given problem is x 1 =0, x 2 =28/13, x 3 =8/13 with Zmax=164/13.

Calculations

$$K = \begin{vmatrix} 4 & -1 \\ 1 & 3 \end{vmatrix} = 13$$

$$\hat{x}_{Br_1} = x_2 = \frac{1}{K}\begin{vmatrix} x_{Br_1} & y_{r_1 u_2} \\ x_{Br_2} & y_{r_2 u_2} \end{vmatrix} = \frac{1}{13}\begin{vmatrix} 8 & -1 \\ 4 & 3 \end{vmatrix} = \frac{28}{13}$$

$$\hat{x}_{Br_2} = x_3 = \frac{1}{K}\begin{vmatrix} y_{r_1 u_1} & x_{Br_1} \\ y_{r_2 u_1} & x_{Br_2} \end{vmatrix} = \frac{1}{13}\begin{vmatrix} 4 & 8 \\ 1 & 4 \end{vmatrix} = \frac{8}{13}$$

$$\hat{x}_{B_2} = x_5 = x_{B_2} - \left( y_{2u_1} \hat{x}_{Br_1} + y_{2u_2} \hat{x}_{Br_2} \right) = 7 - \left( 2 \cdot \frac{28}{13} - 3 \cdot \frac{8}{13} \right) = \frac{59}{13}$$

Similarly,

$$\hat{x}_{B_3} = x_6 = 2 - \left( -3 \cdot \frac{28}{13} + 0 \right) = \frac{110}{13}$$

Calculation for the first column of the optimal table:

$$\hat{y}_{11} = \frac{1}{13}\begin{vmatrix} 2 & -1 \\ 4 & 3 \end{vmatrix} = \frac{10}{13}$$

Note: since $y_{11} = 2$ corresponds to the pivot element 4, which is in the first column of K, we replace the first column of K by $\begin{pmatrix} 2 \\ 4 \end{pmatrix}$.

Similarly,

$$\hat{y}_{41} = \frac{1}{13}\begin{vmatrix} 4 & 2 \\ 1 & 4 \end{vmatrix} = \frac{14}{13}$$

$$\hat{y}_{21} = 2 - \left( 2 \cdot \frac{10}{13} - 3 \cdot \frac{14}{13} \right) = \frac{48}{13}$$

$$\hat{y}_{31} = 1 - \left( -3 \cdot \frac{10}{13} + 0 \right) = \frac{43}{13}$$

Similarly we calculate the other columns of the optimal table.

84
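The hand calculations above can be verified mechanically. The following illustrative Python fragment (exact rational arithmetic, not part of the thesis program) recomputes K, the new basic values and the updated first-column entries:

```python
from fractions import Fraction as F

def det2(a, b, c, d):
    """2x2 determinant | a b ; c d |."""
    return a * d - b * c

K = det2(F(4), F(-1), F(1), F(3))                 # = 13
x2 = det2(F(8), F(-1), F(4), F(3)) / K            # first column of K replaced by (8, 4)
x3 = det2(F(4), F(8), F(1), F(4)) / K             # second column of K replaced by (8, 4)
x5 = 7 - (2 * x2 + (-3) * x3)                     # remaining basic variable, Eq. (5.5)
x6 = 2 - ((-3) * x2 + 0 * x3)                     # remaining basic variable, Eq. (5.5)
y11 = det2(F(2), F(-1), F(4), F(3)) / K           # first column of K replaced by (2, 4)
y41 = det2(F(4), F(2), F(1), F(4)) / K            # second column of K replaced by (2, 4)
y21 = 2 - (2 * y11 + (-3) * y41)                  # updated non-pivot row entry
y31 = 1 - ((-3) * y11 + 0 * y41)                  # updated non-pivot row entry
```

This reproduces every value in the Calculations section: 28/13, 8/13, 59/13, 110/13, 10/13, 14/13, 48/13 and 43/13.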
Solution of example 6.1.1 in more than one Basic Variable Replacement Method by
using combined program in programming language Mathematica:

Output:

Table 1

Cj        -2   5   3   0   0   0   0   RHS
CB Basis  X1   X2  X3  S1  S2  S3  S4
0  S1      2   4  -1   1   0   0   0    8
0  S2      2   2  -3   0   1   0   0    7
0  S3      1  -3   0   0   0   1   0    2
0  S4      4   1   3   0   0   0   1    4
Cj-Zj     -2   5   3   0   0   0   0    0
-----------------------------------------------------------
Feasible Solution = 0
More than one basic var replacement

Table 2

Cj          -2      5   3     0     0   0     0      RHS
CB Basis    X1      X2  X3    S1    S2  S3    S4
5  X2      10/13    1   0    3/13   0   0    1/13    28/13
0  S2      48/13    0   0   -9/13   1   0   10/13    59/13
0  S3      43/13    0   0    9/13   0   1    3/13   110/13
3  X3      14/13    0   1   -1/13   0   0    4/13     8/13
Cj-Zj    -118/13    0   0  -12/13   0   0  -17/13   164/13
-----------------------------------------------------------
Solution Point
X1 = 0        (Non Basic Variable)
X2 = 28/13    (Basic Variable)
X3 = 8/13     (Basic Variable)
S1 = 0        (Non Basic Variable)
S2 = 59/13    (Basic Variable)
S3 = 110/13   (Basic Variable)
S4 = 0        (Non Basic Variable)
All Cj ≤ 0 & Optimal Value = 164/13
No Alternative Solution

85
6.1.2 Numerical Example 2:
Maximize Z = 3x₁ + 2x₂
Subject to 2x₁ + x₂ ≤ 4
−3x₁ + 5x₂ ≤ 15
3x₁ − x₂ ≤ 3
x₁, x₂ ≥ 0
Solution of the above problem in Graphical Method:

The solution space satisfying the given constraints and meeting the non-negativity restrictions x₁, x₂ ≥ 0 is shown shaded in the figure below. Any point in this shaded region is a feasible solution to the given problem.

[Figure-1: Feasible region for example 6.1.2 in the (x₁, x₂)-plane, with vertices O(0,0), A(1,0), B(7/5, 6/5), C(5/13, 42/13) and D(0,3).]

86
The vertices of the convex feasible region OABCD are O(0,0), A(1,0), B(7/5, 6/5), C(5/13, 42/13) and D(0,3).
The values of the objective function at these points are:
Z(O) = 0
Z(A) = 3·1 + 0 = 3
Z(B) = 3·(7/5) + 2·(6/5) = 33/5
Z(C) = 3·(5/13) + 2·(42/13) = 99/13
and Z(D) = 3·0 + 2·3 = 6

Since the maximum value of the objective function is 99/13 and it occurs at C(5/13, 42/13), the optimal solution to the given problem is x₁ = 5/13, x₂ = 42/13 with Zmax = 99/13.

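The vertex evaluation above can be reproduced with a short illustrative Python fragment (exact fractions; not part of the thesis program):

```python
from fractions import Fraction as F

# Vertices of the feasible region OABCD of example 6.1.2
vertices = {'O': (F(0), F(0)), 'A': (F(1), F(0)), 'B': (F(7, 5), F(6, 5)),
            'C': (F(5, 13), F(42, 13)), 'D': (F(0), F(3))}
Z = {name: 3 * x1 + 2 * x2 for name, (x1, x2) in vertices.items()}  # objective at each vertex
best = max(Z, key=Z.get)   # vertex attaining the maximum
```

The maximum is attained at vertex C with Z = 99/13, in agreement with the graphical solution.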
Solution of example 6.1.2 in usual simplex method:

Inserting the slack variables x₃, x₄, x₅ ≥ 0 into the 1st, 2nd and 3rd constraints of example 6.1.2, we first transform the example to standard form as follows:
Maximize Z = 3x₁ + 2x₂
Subject to 2x₁ + x₂ + x₃ = 4
−3x₁ + 5x₂ + x₄ = 15
3x₁ − x₂ + x₅ = 3
x₁, x₂, x₃, x₄, x₅ ≥ 0

Then we get the initial table as below:

Table-1 (Initial Table)

cB ↓   Cj →      3    2    0    0    0   Constant
       XB ↓     x1   x2   x3   x4   x5      b
0      x3        2    1    1    0    0      4
0      x4       -3    5    0    1    0     15
0      x5        3   -1    0    0    1      3
c̄j = cj − zj     3    2    0    0    0    Z=0

87
Table-2

cB ↓   Cj →      3     2    0    0     0   Constant
       XB ↓     x1    x2   x3   x4    x5      b
0      x3        0   5/3    1    0  -2/3      2
0      x4        0     4    0    1     1     18
3      x1        1  -1/3    0    0   1/3      1
c̄j = cj − zj     0     3    0    0    -1    Z=3

Table-3

cB Cj → 3 2 0 0 0 Constant
↓ x1 x2 x3 x4 x5
b
X B↓
2 x2 0 1 3/5 0 -2/5 6/5
0 x4 0 0 -12/5 1 13/5 66/5
3 x1 1 0 1/5 0 1/5 7/5
− 0 0 -9/5 0 1/5 Z=33/5
c j = cj − zj

Table-4 (Optimal Table)

cB ↓   Cj →      3    2      0      0    0   Constant
       XB ↓     x1   x2     x3     x4   x5      b
2      x2        0    1   3/13   2/13    0    42/13
0      x5        0    0 -12/13   5/13    1    66/13
3      x1        1    0   5/13  -1/13    0     5/13
c̄j = cj − zj     0    0 -21/13  -1/13    0  Z=99/13

Since in Table-4 all the relative profit factors are non-positive, i.e. all c̄j ≤ 0, this table gives the optimal solution to the given problem. Hence the optimal solution is x₁ = 5/13, x₂ = 42/13 with Zmax = 99/13.

88
Solution of example 6.1.2 by Paranjape [9]'s Two-Basic-Variables Replacement Method:

We now attempt to solve the above problem following the two-basic-variables replacement method of Paranjape.
The initial table of the problem is as follows:

Table-1 (Initial Table)

cB Cj → 3 2 0 0 0 Constant
↓ X B↓ x1 x2 x3 x4 x5
b

0 x3 2 1 1 0 0 4
0 x4 -3 5 0 1 0 15
0 x5 3 -1 0 0 1 3
− 3 2 0 0 0 Z=0
c j = cj − zj

u1 u2

There are two positive cj=cj- zj, viz c1- z1 and c2- z2 in the above table. The first
one being numerically larger between the two. We choose the 1st column of A as the u1th
column and the 2nd column as the u2th column. Then by minimum ratio rule (also as in
criterion 2) we see that x 2 replaces x 4 and x 1 replaces x 5 .
y r1u1 y r1u2 3 −1
Then K = = = 12
y r2u1 y r2u2 −3 5

Here we see that not all the elements in the expression of K are non-negative,
whereas according to Paranjape's method all the elements in the expression of K must be
non-negative. Paranjape also describes a way to overcome this difficulty: he suggested
choosing the v'-th column of A by the alternative criterion, namely, choose as the v'-th
that column of A for which cv' − zv' is the greatest positive cj − zj,
j = 1, 2, …, n; j ≠ u1, u2.
But in the above example there is no such v'-th column, so one cannot move anywhere
to solve the problem by replacing two basic variables at an iteration. Paranjape
gave no instruction for overcoming this kind of difficulty.
So one cannot solve the above problem by applying Paranjape's method, although the usual
simplex method and the graphical method show that the problem has an optimal
solution. Therefore Paranjape's method fails here.
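The non-negativity check on which the two-variable step stalls can be sketched in a few lines of Python (our illustration of the test, not Paranjape's own procedure; the entries are taken from the initial table above):

```python
from fractions import Fraction as F

# Pivot block for the two candidate replacements (x1 into the x5-row,
# x2 into the x4-row), read from the initial table of example 6.1.2
block = [[F(3), F(-1)],
         [F(-3), F(5)]]

# K is the 2x2 determinant of the pivot block
K = block[0][0]*block[1][1] - block[0][1]*block[1][0]
print(K)  # 12

# Paranjape's two-variable step requires every entry of the block to be non-negative
if not all(e >= 0 for row in block for e in row):
    print("two-variable replacement not applicable; fall back to a single pivot")
```

Here K = 12 > 0, yet the negative off-diagonal entries violate the method's hypothesis, which is exactly the failure the text describes.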

But we can solve example 6.1.2 by using our combined program in the
programming language Mathematica, since our method is a combined one that
incorporates the methods of Dantzig and Paranjape.

Solution of example 6.1.2 by using our combined program in the programming language
Mathematica:

Output:
Table 1

       Cj        3     2     0     0     0    RHS
 CB   Basis     X1    X2    S1    S2    S3
 0     S1        2     1     1     0     0      4
 0     S2       -3     5     0     1     0     15
 0     S3        3    -1     0     0     1      3
      Cj − Zj    3     2     0     0     0      0
-----------------------------------------------------------

Feasible Solution = 0
Table 2

       Cj        3      2     0     0      0    RHS
 CB   Basis     X1     X2    S1    S2     S3
 0     S1        0    5/3     1     0   -2/3      2
 0     S2        0      4     0     1      1     18
 3     X1        1   -1/3     0     0    1/3      1
      Cj − Zj    0      3     0     0     -1      3
-----------------------------------------------------------

Feasible Solution = 3
Table 3

       Cj        3     2      0     0      0    RHS
 CB   Basis     X1    X2     S1    S2     S3
 2     X2        0     1    3/5     0   -2/5    6/5
 0     S2        0     0  -12/5     1   13/5   66/5
 3     X1        1     0    1/5     0    1/5    7/5
      Cj − Zj    0     0   -9/5     0    1/5   33/5
-----------------------------------------------------------

Feasible Solution = 33/5

Table 4

       Cj        3     2       0      0     0    RHS
 CB   Basis     X1    X2      S1     S2    S3
 2     X2        0     1    3/13   2/13     0   42/13
 0     S3        0     0  -12/13   5/13     1   66/13
 3     X1        1     0    5/13  -1/13     0    5/13
      Cj − Zj    0     0  -21/13  -1/13     0   99/13
-----------------------------------------------------------

 
Solution Point:
X1 = 5/13      Basic Variable
X2 = 42/13     Basic Variable
S1 = 0         Non-Basic Variable
S2 = 0         Non-Basic Variable
S3 = 66/13     Basic Variable
All Cj ≤ 0 & Optimal Value = 99/13
No Alternative Solution
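As an independent cross-check of this output, the following minimal exact-arithmetic simplex sketch in Python (our illustrative re-implementation, not the thesis's Mathematica program) reproduces the same sequence of one-variable pivots and the same optimum Z = 99/13 for example 6.1.2:

```python
from fractions import Fraction as F

def simplex_max(c, A, b):
    """One-variable tableau simplex for: max c.x  s.t.  A x <= b, x >= 0 (all b >= 0)."""
    m, n = len(A), len(c)
    # Rows: [A | I | b], every entry an exact fraction.
    T = [[F(A[i][j]) for j in range(n)]
         + [F(1) if k == i else F(0) for k in range(m)]
         + [F(b[i])] for i in range(m)]
    cost = [F(-c[j]) for j in range(n)] + [F(0)] * (m + 1)  # negated objective; last cell holds Z
    basis = list(range(n, n + m))                           # slacks start in the basis
    while True:
        piv_col = min(range(n + m), key=lambda j: cost[j])  # most negative reduced cost
        if cost[piv_col] >= 0:
            break                                           # optimal: all cj - zj <= 0
        ratios = [(T[i][-1] / T[i][piv_col], i) for i in range(m) if T[i][piv_col] > 0]
        _, piv_row = min(ratios)                            # minimum ratio rule
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m):
            if i != piv_row and T[i][piv_col] != 0:
                f = T[i][piv_col]
                T[i] = [a - f * r for a, r in zip(T[i], T[piv_row])]
        f = cost[piv_col]
        cost = [a - f * r for a, r in zip(cost, T[piv_row])]
        basis[piv_row] = piv_col
    x = [F(0)] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return x, cost[-1]

x, z = simplex_max([3, 2], [[2, 1], [-3, 5], [3, -1]], [4, 15, 3])
print(x, z)  # x = [5/13, 42/13], Z = 99/13
```

Tracing this code by hand gives exactly the bases of Tables 1-4 above (S1 S2 S3 → S1 S2 X1 → X2 S2 X1 → X2 S3 X1), so the tableau values can be checked iteration by iteration. The sketch omits unboundedness and degeneracy handling, which the full program must treat.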

6.2 Conclusion:
In this chapter we illustrated some numerical examples to compare the different
methods for solving linear programming problems by replacing more than one basic
variable at each simplex iteration. For clarity we first solved the same example by the
graphical method and by the usual simplex method of Dantzig. We also generalized the
claim to more-than-one basic variable replacement methods for solving LP problems. In
numerical example 6.1.2, however, there is no suitable v'-th column, so one cannot move
anywhere by replacing two basic variables at an iteration, and Paranjape gave no
instruction for overcoming this kind of difficulty. Hence one cannot solve that problem
by applying Paranjape's method alone, even though the usual simplex method and the
graphical method show that the problem has an optimal solution; Paranjape's method fails
there. But we can solve example 6.1.2 by using our combined program in the programming
language Mathematica, since our method is a combined one incorporating the methods of
Dantzig and Paranjape. That is why we can say that our method can overcome such
difficulties in solving LP problems.

Chapter 7

CONCLUSION

In this research, we have generalized the simplex method of one basic variable
replacement by non-basic variables to a simplex method of more than one (p, where p > 1)
basic variables replacement by non-basic variables, which has already been discussed in
Chapter 5. We also developed a computer technique for solving LP problems that replaces
more than one basic variable by non-basic variables at each simplex iteration. It is also
applicable in the case where Paranjape's method stops in a table having only one basic
variable to be replaced: our method incorporates the usual simplex method to overcome
that problem. We compared the results obtained by our method with those of Dantzig's
method, and showed with illustrative numerical examples that our method takes fewer
iterations than Dantzig's one basic variable replacement method, which reduces the
iteration time, labor, and computational cost.

We can solve a linear programming problem by the graphical method as well as by
numerical methods. We can use the graphical method when the problem is two-dimensional.
For this method it is necessary to plot the graph accurately, which is difficult and
time consuming. To overcome these difficulties, in this thesis we developed a
computational technique using Mathematica code that shows the feasible region of
two-dimensional linear programming problems and also gives the optimal solution. In the
usual simplex method, we need to use artificial variables and have to apply the
two-phase simplex method or the Big-M simplex method when the set of constraints is not
in canonical form. This needs many iterations, which is also time consuming and clumsy.
But by applying our computational technique using Mathematica code we can solve any type
of problem easily, as has already been discussed in Chapter 2 with illustrative examples.
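The corner-point idea behind the graphical method can also be illustrated computationally. The sketch below (ours, with the data of example 6.1.2 assumed; not the thesis's plotting code) enumerates the vertices of the two-dimensional feasible region by intersecting constraint boundaries pairwise, and picks the vertex with the best objective value:

```python
from fractions import Fraction as F
from itertools import combinations

# Constraints of example 6.1.2 as a*x1 + b*x2 <= c, including x1 >= 0, x2 >= 0
cons = [(2, 1, 4), (-3, 5, 15), (3, -1, 3), (-1, 0, 0), (0, -1, 0)]

def feasible(p):
    return all(a * p[0] + b * p[1] <= c for a, b, c in cons)

vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    d = a1 * b2 - a2 * b1                       # Cramer's rule denominator
    if d != 0:
        p = (F(c1 * b2 - c2 * b1, d), F(a1 * c2 - a2 * c1, d))
        if feasible(p) and p not in vertices:   # keep only feasible corner points
            vertices.append(p)

best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best)  # optimum at vertex (5/13, 42/13), Z = 99/13
```

This brute-force enumeration is only practical in two dimensions, which is exactly the limitation of the graphical method discussed above.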

The practical purpose of optimization is to formulate a mathematical programming
model of a real-life problem, solve it, and use the solution from different
organizational points of view. In this thesis, we first formulated two linear
programming models for sizeable large-scale real-life LP problems, which involve a large
amount of data, constraints, and variables. A small problem can be solved with the help
of pencil and paper, but a large-scale real-life problem cannot be solved by hand
calculations. To overcome the complexities of a large-scale linear programming (LP)
problem, a computer-oriented solution is required. Our computer technique easily
overcomes these complexities.

Our computer technique can solve LP problems of any dimension, even where the set
of constraints is not in canonical form. The simplex method is directly applicable only
when the set of constraints is in canonical form (i.e., Ax ≤ b, x ≥ 0); otherwise
another method is necessary for solving the problem, but our technique is applicable to
both types of problem.

Our computer technique also introduces a decision-making rule that defines the
different variables and types of variables of a system such that an objective defined by
the decision maker is optimized. It also reduces the work of expressing an LP problem in
standard form by introducing slack, surplus, or artificial variables where necessary for
solving the LP problem.
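The slack/surplus bookkeeping described above can be sketched as follows (an illustration with our own function name, not the thesis code; equality rows are left untouched here, since they would need an artificial variable in a Phase 1 step instead):

```python
def to_standard_form(A, relations, b):
    """Append one slack column (for <=) or surplus column (for >=) per constraint.
    Equality rows get no extra column; they require an artificial variable."""
    rows = [list(r) for r in A]
    for i, rel in enumerate(relations):
        if rel == "=":
            continue
        col = 1 if rel == "<=" else -1      # +1 slack, -1 surplus
        for j in range(len(rows)):
            rows[j].append(col if j == i else 0)
    return rows, list(b)

# Example 6.1.2: three <= constraints gain one slack column each
A_std, b_std = to_standard_form([[2, 1], [-3, 5], [3, -1]], ["<=", "<=", "<="], [4, 15, 3])
print(A_std[0])  # [2, 1, 1, 0, 0]
```

Automating this conversion is precisely the labor saving the paragraph above claims for the program.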

Finally, we may conclude that the linear programming method, along with our
computer program, is a powerful method for large-scale real-life optimization problems
wherever it can be applied. To do this, one has to build the required mathematical
programming model of the problem and the required computer program. Hence our
computer-oriented solution procedure saves time and labor, and it can solve problems of
any dimension.

REFERENCES

Agarwal, S.C. and Verma, R.K., "3-Variables Replacement in Linear Programming
Problems", Acta Ciencia Indica, Vol. 3, No. 1, pp. 181-188, (1977).

Dantzig, G.B., Linear Programming and Extensions, Princeton University Press,
Princeton, N.J., (1963).

Don, E., Schaum's Outline of Mathematica, McGraw-Hill, New York, (2001).

Hadley, G., Linear Programming, Addison-Wesley Pub. Co., London, (1972).

Kambo, N.S., Mathematical Programming Techniques, Affiliated East-West Press Pvt.
Ltd., New Delhi, (1984).

Kambo, N.S., Introduction to Operations Research, McGraw-Hill, Inc., New York, (1988).

Marcus, M., A Survey of Finite Mathematics, Houghton Mifflin Co., Boston, (1969),
pp. 312-316.

Paranjape, S.R., "The Simplex Method: Two Basic Variables Replacement", Management
Science, Vol. 12, No. 1, pp. 135-141, (1965).

Swarup, K., CCERO (Belgium), Vol. 8, No. 2, pp. 132-136, (1966).

Winston, W.L., Operations Research: Applications and Algorithms, International
Thomson Publishing, California, USA, (1993).

Wolfram, S., Mathematica, Addison-Wesley Publishing Company, Menlo Park,
California, (2000).
