
IEEE Transactions on Power Systems, Vol. PWRS-2, No. 3, August 1987

A Newton Optimal Power Flow Program For Ontario Hydro EMS


Gamal A. Maria
J.A. Findlay
Ontario Hydro

86 SM 326-3: A paper recommended and approved by the IEEE Power System Engineering Committee of the IEEE Power Engineering Society for presentation at the IEEE/PES 1986 Summer Meeting, Mexico City, Mexico, July 20-25, 1986. Manuscript submitted January 10, 1986; made available for printing June 10, 1986.
Abstract
A Newton optimal power flow program was developed for the Ontario Hydro Energy Management System. Each iteration minimizes a quadratic approximation of the Lagrangian. All the equations are solved simultaneously for all the unknowns. A new technique based on linear programming is used to identify the binding inequalities. All binding constraints are enforced using Lagrange multipliers. The algorithm combines the fast convergence of the Newton technique with the speed and reliability of linear programming. Most cases converged in three iterations or less.

Introduction
This paper describes the optimal power flow program developed for the Ontario Hydro Energy Management System. This program will be used mainly for active power loss minimization by adjusting dispatchable var sources and transformer taps. It could also be used for optimal active power scheduling, reactive power loss minimization, remedial action analysis, and other similar uses.

Ontario Hydro conducted a detailed study to assess the possible savings in active transmission losses if an online optimal load flow program is used [1]. In the study, General Electric's optimal power flow program was used. The results show that up to 3.7% reduction in total system losses could be achieved, resulting in a $2.5 million annual saving.

The second stage of the study was to identify or develop an optimal power flow technique for online application in the Ontario Hydro Energy Management System. The characteristics of an ideal optimal power flow technique are [2]:

1. fast solution time that varies approximately in proportion to the network size and is relatively independent of the number of controls or inequality constraints
2. rapid and consistent convergence to the Kuhn-Tucker (K-T) optimality conditions
3. absence of user intervention in the optimization process
4. capability of handling all OPF problem definitions and all networks of practical size and kind.

Development of optimal power flow programs started in the 60's. Dommel and Tinney introduced their benchmark algorithm, "the reduced gradient technique", in 1968 [3]. This was followed by other gradient techniques [4, 5]. A Quasi-Newton optimal power flow was developed by the General Electric Company [6]. This production-type optimal load flow was based on the augmented Lagrangian. The earlier version obtained the descent direction from a reduced Hessian, which is a positive definite approximation to the Hessian; this was later replaced by the actual Hessian. The GE technique uses a combination of "major iterations" and "minor iterations". At a major iteration the problem is transformed into a quadratic subproblem. The subproblem is optimized at the minor iteration stage using the Quasi-Newton technique. The GE optimal power flow was found to be reliable and fast enough for offline applications. The Newton approach [2] was developed as an EPRI project. It replaced the minor iterations in the GE technique with a simultaneous solution of all the equations in all the Lagrangian unknowns. This technique is the fastest available and satisfies all the ideal OPF characteristics except for the lack of a reliable process for the identification of binding inequalities that does not require user input or expertise.

After a brief review of the Newton approach, the rest of this paper addresses some aspects of the Newton approach (coupled vs decoupled formulation, the positive definiteness of the Hessian), proposes a process for the identification of the binding inequalities, and reports some results and timing of the optimal power flow program.

The Newton Approach

Objective Function
The objective function, F, to be minimized takes the form

F = P_1 \quad \text{for real power loss, or} \quad F = \sum I^2 R    (1)

where P_1 is the slack bus power.

The objective function may be augmented by a quadratic term to limit the movement of the control variables. This will be discussed later.
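The loss form of equation (1) is just a sum over the series branches. A minimal sketch of that bookkeeping in Python, with made-up per-unit data (the function and variable names are illustrative, not from the paper):

```python
# Loss objective of equation (1): F = sum of I^2 * R over the series branches.
def real_power_loss(branch_current_mag, branch_resistance):
    return sum(i * i * r for i, r in zip(branch_current_mag, branch_resistance))

# Made-up three-branch example, all quantities in per unit:
print(real_power_loss([1.2, 0.8, 0.5], [0.010, 0.020, 0.015]))   # 0.03095
```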

Lagrangian Function
The Lagrangian function, L, for the Newton OPF is

L = F - \sum_K \lambda_{pK} (P_K - P_{K0}) - \sum_K \lambda_{qK} (Q_K - Q_{K0})    (2)


The summation (K) above is over all buses from 1 to N whose active power or reactive power is fixed. The inequality constraints on variables and functions (including dispatchable active and reactive power) are enforced with Lagrange multipliers as well, in the form:

L = F - \sum_K \lambda_{pK} (P_K - P_{K0}) - \sum_K \lambda_{qK} (Q_K - Q_{K0}) - \sum_i \lambda_{xi} \left( X_i + \frac{\partial X_i}{\partial Y}\,\Delta Y - \bar{X}_i \right)    (3)

The summation i above is over all functions and variables in the binding inequality set. (X_i + (∂X_i/∂Y) ΔY) is the linearization of the binding inequality function X_i at the end of the m-th iteration, Y is the set of state and control variables, and X̄_i is the applicable limit.

Solution Conditions
The K-T optimality conditions are:
1. All equality and inequality constraints are satisfied.
2. The projected gradient is zero.
3. Further reductions in the objective function can be achieved only if the constraints are violated.
4. The projection of the Hessian in the feasible region is positive definite.

Newton OPF Formulation
If Z is a vector composed of the subvectors Y and λ, then differentiating L twice with respect to Z leads to a symmetric matrix called W [2], and the Newton formulation is:

W \Delta Z = -g    (5)

where g is the gradient vector of first partial derivatives of L with respect to Z, and ΔZ is the vector of Newton corrections in Z.
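A minimal sketch of one such simultaneous Newton step, assuming W and g have already been assembled for the current point. Dense linear algebra is used here only for clarity; the program described in the paper exploits sparsity. All values are made up:

```python
import numpy as np

def newton_step(W, g):
    """One simultaneous Newton correction: solve W dZ = -g (equation 5)."""
    return np.linalg.solve(W, -np.asarray(g, dtype=float))

# Toy usage with a small symmetric KKT-style matrix (values are illustrative):
W = np.array([[ 2.0, -1.0, 1.0],
              [-1.0,  2.0, 1.0],
              [ 1.0,  1.0, 0.0]])
g = np.array([0.10, -0.10, 0.02])
dZ = newton_step(W, g)
print(dZ)               # the corrections to Z
print(W @ dZ + g)       # residual of W dZ = -g, numerically ~0
```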

Coupled vs Decoupled Formulation
In this section and the following section, two aspects of the basic Newton formulation are explored. A simple two-bus example (Figure 1) is used to illustrate these aspects.

Figure (1): two-bus example system (bus 1 is the slack bus; the active power at bus 2 is fixed at P2 = 2.0).

The cost function is

F = P_1 + 0.05 (V_1 - 1.0)^2 + 0.05 (V_2 - 1.0)^2    (6)

The starting values are V1 = 1.0, V2 = 1.0 and θ2 = 0.1005. The Lagrangian function is

L = F - \lambda_{P2} (P_2 - 2.0), \qquad \lambda_{P2}^{0} = -1.0    (7)

The last two terms in the cost function, F, will try to keep the voltages close to the initial (desired) values and, in general, tend to reduce the number of controls operating at their limits.

The Coupled Formulation

W \Delta Z = -g    (8)

For the two-bus example this is

\begin{bmatrix}
 2.0802  &   0.19867 & -1.97021 &  -1.01153 \\
 0.19867 &   1.97021 &  0.19867 & -19.89991 \\
-1.97021 &   0.19867 &  2.08020 &  -2.99173 \\
-1.01153 & -19.89991 & -2.99173 &   0
\end{bmatrix}
\begin{bmatrix} \Delta V_1 \\ \Delta\theta_2 \\ \Delta V_2 \\ \Delta\lambda_{P2} \end{bmatrix}
=
\begin{bmatrix} -0.01 \\ -0.1987 \\ -0.01 \\ 0.0016 \end{bmatrix}

Solving the above equation results in:

ΔV1 = 0.1405,  Δθ2 = -0.0291,  ΔV2 = 0.1453,  Δλ_P2 = 0.01

Both V1 and V2 violate their upper limits, as expected.

The Decoupled Approach
The matrix W is decoupled by removing all the off-diagonal entries coupling θ and V. The decoupled formulation for V/Q is

W'' \Delta Z'' = -g''    (9)

\begin{bmatrix} 2.0802 & -1.97021 \\ -1.97021 & 2.08020 \end{bmatrix}
\begin{bmatrix} \Delta V_1 \\ \Delta V_2 \end{bmatrix}
=
\begin{bmatrix} -0.01 \\ -0.01 \end{bmatrix}

Solving the above equation will result in:

ΔV1 = -0.0909,  ΔV2 = -0.0909

The unexpected values for the change in V1 and V2 are due to the fact that θ2 is fixed and the equality constraint for P2 is not enforced in the V/Q iteration. In the V/Q iteration the minimized function is

0.9901 (V_1 - V_2)^2 + K V_1 V_2 + 0.05 (V_1 - 1.0)^2 + 0.05 (V_2 - 1.0)^2    (10)

with K = 2 × 0.9901 × (1 - cos θ2). If it were not for the last two terms, the resulting V1 and V2 would be equal to zero.
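The decoupled V/Q step of equation (9) is small enough to verify directly. The few lines below solve the 2-by-2 system exactly as printed and reproduce the ΔV1 = ΔV2 = -0.0909 result quoted above:

```python
import numpy as np

W_vq = np.array([[ 2.08020, -1.97021],
                 [-1.97021,  2.08020]])
rhs = np.array([-0.01, -0.01])

dV = np.linalg.solve(W_vq, rhs)
print(np.round(dV, 4))   # [-0.0909 -0.0909], matching the text
```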
Decoupling [H] only [2] (the entries in W corresponding to the differentiation of L with respect to Y) will reduce the number of off-diagonal entries in [W] but will not decrease its size. The saving resulting from that will be more than offset by the required additional iterations. The decoupled approach will perform satisfactorily in some examples [2] and may be reliable for all cases if formulated differently. This approach was not pursued any further; the study recommended the use of the coupled formulation. The techniques proposed in the rest of the paper apply to both the coupled and decoupled formulations.

Positive Definiteness of the Hessian

Now study the above example without the intentionally added term

0.05 (V_1 - 1.0)^2 + 0.05 (V_2 - 1.0)^2

The Lagrangian function used is

L = P_1 - \lambda_{P2} (P_2 - 2.0)    (11)

The coupled formulation is:

\begin{bmatrix}
 1.98020 &   0.19867 & -1.97021 &  -1.01153 \\
 0.19867 &   1.97021 &  0.19867 & -19.89991 \\
-1.97021 &   0.19867 &  1.98020 &  -2.99173 \\
-1.01153 & -19.89991 & -2.99173 &   0
\end{bmatrix}
\begin{bmatrix} \Delta V_1 \\ \Delta\theta_2 \\ \Delta V_2 \\ \Delta\lambda_{P2} \end{bmatrix}
=
\begin{bmatrix} -0.01 \\ -0.1987 \\ -0.01 \\ 0.0016 \end{bmatrix}

The only affected entries are W(1,1) and W(3,3), which are 0.1 less than the corresponding entries when the two above terms were present in the cost function. Solving the above equation results in:

ΔV1 = 0.3347,  ΔV2 = -0.3297,  Δθ2 = 0.0665

The solution obtained corresponds to a maximum, not a minimum. This is due to the fact that the projection of the Hessian in the feasible region is not positive definite.

There are two approaches to ensure the positive definiteness of the Hessian projection without affecting the sparsity and without adversely affecting the convergence rate. They both involve the addition of a term of the form c (Y - Y_0)^2 to the cost function. The only difference between the two approaches is the value of Y_0.

First approach: Y_0 is the desired value for the control variables. This additional term adds a positive value to the diagonal terms of [H] and, if large enough, will ensure its positive definiteness. The term has the added use, if required, of limiting the movement of the control variables away from the desired values and of reducing the number of control variables operating at their limit.

Second approach: Y_0 is the value of the variable at the beginning of the iteration. It will be updated every iteration. This term also adds a positive value to the diagonal of [H]. The only difference is that this term will not affect the right hand side (-g). This approach may be used when there is no requirement to limit the movement of the control variables. It can also be used to complement the first approach when the term added by the first approach is not large enough to ensure the positive definiteness of [H]. The addition of the second approach term will not affect the final result. Adding positive elements to the diagonal will impact the convergence rate. Such impact depends on the size of the positive elements and is more acceptable than the impact of having a non-positive definite Hessian.

It is worth noting that negative resistances (reactances) will add a negative value to the diagonals of the Hessian matrix when minimizing active (reactive) power loss. This may cause the non-positive definiteness of the Hessian matrix. One of the following two approaches may be followed to minimize their impact:
1. Replace the negative resistance with a very small positive resistance. This can be done only for real power loss minimization.
2. Minimize I²R (I²X) for only the branches with positive resistance (reactance). This will not result in a large deviation from the case when all branches are considered, because the negative resistances (reactances) are normally associated with high impedance branches where real (reactive) losses are very small.
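A small sketch of the two regularization approaches just described, assuming the positions of the control variables in the Newton matrix are known. All names and the weight value are illustrative rather than taken from the paper:

```python
import numpy as np

def regularize(H, g, ctrl_idx, y, y0, weight, update_gradient):
    """Add a weight*(Y - Y0)^2 penalty for each control variable.

    First approach  (update_gradient=True):  Y0 is a fixed desired value,
    so both the diagonal of [H] and the gradient g are modified.
    Second approach (update_gradient=False): Y0 is the value at the start
    of the iteration (Y == Y0), so only the diagonal of [H] is modified
    and the right-hand side -g is unchanged.
    """
    H = H.copy()
    g = g.copy()
    H[ctrl_idx, ctrl_idx] += 2.0 * weight            # second derivative of the penalty
    if update_gradient:
        g[ctrl_idx] += 2.0 * weight * (y - y0)       # first derivative of the penalty
    return H, g

# Illustrative use: controls sit at positions 0 and 2 of the Newton matrix.
H = np.diag([0.4, 3.0, -0.2, 5.0])   # an indefinite block, as negative resistances can produce
g = np.array([0.1, -0.3, 0.05, 0.0])
y, y0 = np.array([1.02, 0.98]), np.array([1.0, 1.0])
H1, g1 = regularize(H, g, np.array([0, 2]), y, y0, weight=0.5, update_gradient=True)
print(np.diag(H1))    # control diagonals increased by 2*weight
```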

Identification of the Binding Inequalities

The binding inequalities are enforced by the addition of the term (see the Newton approach section):

- \sum_i \lambda_i \left( X_i + \frac{\partial X_i}{\partial Y}\,\Delta Y - \bar{X}_i \right)

The objective of this identification process is to identify the set of binding inequalities and compute the change in the state variables (ΔZ) that would have resulted if the set of binding inequalities had been known at the beginning of the iteration. Assume that we are dealing with the first iteration and there are no binding inequalities. Only equality constraints are enforced. The Newton formulation would be:

W_0 \Delta Z_0 = -g_0    (12)

Solving the above equation produces the changes to the state variables and the λ's associated with the equality constraints.

Now assume we knew that the function X_i will violate one of its limits. The Lagrangian function would have been changed to

L = L_0 - \lambda_i \left( X_i + \frac{\partial X_i}{\partial Y}\,\Delta Y - \bar{X}_i \right)    (13)

where the initial value for λ_i is 0. The Newton formulation would have taken the form:

\begin{bmatrix} W_0 & -A \\ -A^t & 0 \end{bmatrix}
\begin{bmatrix} \Delta Z_1 \\ \Delta\lambda_i \end{bmatrix}
=
\begin{bmatrix} -g_0 \\ X_i - \bar{X}_i \end{bmatrix}    (14)

where A is a column containing the first partial derivatives of X_i with respect to Y. The original formulation will produce a solution ΔZ_0 corresponding to Δλ_i = 0. The impact of a non-zero Δλ_i on the state variables (ΔZ_1) is easily obtained from the new formulation above:

W_0 \Delta Z_1 = A \, \Delta\lambda_i    (15)

This can be generalized to handle more than one inequality and will take the form:

W_0 \Delta Z_1 = A \, \Delta\lambda    (16)

where A is a matrix consisting of one column per inequality function. This column (A_i) is the partial derivative of this function with respect to the state variables.

The objective of changing the λ's associated with these inequalities to a non-zero value is to remove the identified violation. The amount of change Δλ is constrained by

X_{LL} \le A^t \Delta Z_1 \le X_{LH}    (17)

where X_{LL} and X_{LH} are the amounts by which each function has violated its lower and upper limits after the solution of the original formulation (12),

X_{LL} / X_{LH} = \bar{X}_{L/H} - (X_i + A^t \Delta Z_0)    (18)

The above equation (17) can be reformulated as

X_{LL} \le A^t W_0^{-1} A \, \Delta\lambda \le X_{LH}    (19)

Δλ_i is also limited to -∞ < Δλ_i ≤ 0 if the associated function is at its upper limit, and 0 ≤ Δλ_i < ∞ if the associated function is at its lower limit.

The matrix [A^t W^{-1} A] is a symmetric matrix. It can be computed with a number of solves equal to its size (the number of violated inequalities) to obtain [W^{-1} A], and a number of sparse vector-by-matrix multiplications to obtain [A^t W^{-1} A]. It is worth noting that advantage can be taken of the fact that the right-hand sides in the above solves are sparse and only a few entries in the resulting solutions are required [9].

The above formulation is in a form suitable for linear programming. All λ's will start in the basis at their zero limit. Violated functions will be brought into the basis and the corresponding λ released. If a released λ violates its limits, it will be brought back into the basis and the associated function will be released. The process will continue until all functions and their λ's have satisfied their limits. The functions with λ's not in the basis (i.e., having non-zero value) are elements of the binding inequality set.

The changes in the state variables (ΔZ_1) due to the enforcement of these inequalities are then computed using (16). A check for further violations may be performed. Any newly identified violations can be added to equation (19) and the process repeated until no violations are detected or some other criterion is satisfied.
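The identification machinery of equations (12), (16) and (19) can be sketched compactly. The code below builds the sensitivity matrix AᵗW0⁻¹A with (conceptually) one solve per candidate inequality and then selects a binding set with a simple active-set style loop; the paper realizes this selection through an LP basis exchange, and the release/exchange refinement described above is omitted from this sketch. Dense factorizations and illustrative names are used only for clarity; inputs are assumed to be NumPy arrays.

```python
import numpy as np

def identify_binding(W0, g0, A, x0, x_low, x_high, tol=1e-8, max_pass=50):
    """Sketch of the binding-inequality identification of equations (12)-(19).

    W0, g0        : Newton matrix and gradient with only equalities enforced
    A             : one column dXi/dZ per candidate inequality function
    x0            : value of each candidate function Xi before the step
    x_low, x_high : limits on each candidate function
    Returns (binding indices, multipliers, dZ with the binding set enforced).
    """
    dZ0 = np.linalg.solve(W0, -g0)                 # equation (12)
    S = A.T @ np.linalg.solve(W0, A)               # A^t W0^{-1} A, equation (19)
    x_base = x0 + A.T @ dZ0                        # function values, no inequalities
    m = len(x_base)
    lam = np.zeros(m)
    pin = np.zeros(m)                              # limit each binding function is held at
    binding = []
    for _ in range(max_pass):
        x_now = x_base + S @ lam                   # effect of the multipliers, eq. (16)/(19)
        new = [i for i in range(m) if i not in binding
               and (x_now[i] > x_high[i] + tol or x_now[i] < x_low[i] - tol)]
        if not new:
            break
        for i in new:
            pin[i] = x_high[i] if x_now[i] > x_high[i] else x_low[i]
        binding += new
        idx = np.array(binding)
        lam[:] = 0.0
        lam[idx] = np.linalg.solve(S[np.ix_(idx, idx)], pin[idx] - x_base[idx])
    dZ = dZ0 + np.linalg.solve(W0, A @ lam)        # equations (15)/(16)
    return binding, lam, dZ
```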
To demonstrate the capabilities of this algorithm, a more complex two-bus network (Figure 2) is solved.

Figure (2): two-bus example with an off-nominal tap transformer (0.9 ≤ a ≤ 1.1, a = 1.0; 0.95 ≤ V2 ≤ 1.05; P2 = 2.0; θ2 = 0.1005).

The Lagrangian function is:

L = P_1 - \lambda_{P2} (P_2 - 2.0) + 0.05 (a - 1.0)^2 + 0.05 (V_1 - 1.0)^2 + 0.05 (V_2 - 1.0)^2

After the first iteration, with none of the inequality constraints enforced, the state variables take the values

V_1 = 1.1406,  a = 1.0952,  V_2 = 1.1452,  θ_2 = 0.0666,  Q_1 = 0.9879

In the first iteration, the strategy followed in this program is to limit the set of violated functions in the first linear programming run to the violated control variables (V1, V2 and a, if it is violated). Other strategies, e.g., starting with the m most violated functions and/or variables, could be more efficient. After the impact of their enforcement is computed, all functions are checked for violation. All violated functions are added to the set under consideration for enforcement.

The only violated control variables are V1 and V2. The linear programming formulation is:

\begin{bmatrix} -0.1906 \\ -0.1952 \end{bmatrix}
\le
\begin{bmatrix} 7.2735 & 7.0274 \\ 7.0274 & 7.2738 \end{bmatrix}
\begin{bmatrix} \Delta\lambda_{V1} \\ \Delta\lambda_{V2} \end{bmatrix}
\le
\begin{bmatrix} -0.0906 \\ -0.0952 \end{bmatrix}

The linear programming process will move λ_V2 into the basis. This results in:

Δλ_{V2} = -0.013,  ΔV_1 = -0.0920,  ΔV_2 = -0.0952

Both V1 and V2 are within their limits; only V2 need be constrained. The impact of enforcing V2 on the other state variables is computed:

V_1 = 1.0486,  θ_2 = 0.0856,  V_2 = 1.05,  a = 1.0953,  Q_1 = 1.077
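The first linear programming step of this example can be checked by hand from the 2-by-2 sensitivity matrix printed above: enforcing only V2, with a required change of -0.0952 to reach its upper limit, gives the multiplier and the induced change in V1 quoted in the text.

```python
S = [[7.2735, 7.0274],
     [7.0274, 7.2738]]

dV2_required = -0.0952            # bring V2 from 1.1452 back to its 1.05 limit
lam_V2 = dV2_required / S[1][1]   # only lambda_V2 enters the basis
dV1 = S[0][1] * lam_V2            # induced change in V1

print(round(lam_V2, 4))           # -0.0131 (quoted as -0.013)
print(round(dV1, 4))              # -0.092  (quoted as -0.0920)
print(round(1.1406 + dV1, 4))     # 1.0486, so V1 ends up inside its limits
```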

Checking all variables and functions for violation indicates that Q1 has violated its upper limit. It is added to the set. The new linear programming formulation adds a row and column for Q1 to the sensitivity matrix:

\begin{bmatrix} 7.2735 & 7.0274 & 3.2876 \\ 7.0274 & 7.2738 & -6.8069 \\ 3.2876 & -6.8069 & 1140.65 \end{bmatrix}
\begin{bmatrix} \Delta\lambda_{V1} \\ \Delta\lambda_{V2} \\ \Delta\lambda_{Q1} \end{bmatrix}

with the violation bounds of -0.1906 to -0.0906 for V1 and -0.1952 to -0.0952 for V2 as before, and the corresponding bound of -1.9879 associated with Q1. The linear programming process moves both λ_V2 and λ_Q1 into the basis. This results in:

Δλ_{V2} = -0.0148,  Δλ_{Q1} = -0.0018,  ΔV_1 = -0.1101,  ΔV_2 = -0.0452,  ΔQ_1 = -1.9879

V1, V2 and Q1 are within their limits. The impact of enforcing V2 and Q1 on the other state variables is computed:

V_1 = 1.0305,  a = 0.9218,  V_2 = 1.05,  θ_2 = 0.0953

Further checking of all variables indicates that no violations are present. This ends the first iteration. It is worth noting that if V2 and Q1 had been enforced at the beginning of the first iteration, the above result would have been obtained. V2 and Q1 are enforced for subsequent iterations. After each subsequent iteration, the solution is checked for violation of inequality constraints. No violations are identified, and there is no need to run the linear programming process to update the binding inequality set.

At the beginning of our analysis we assumed that we were dealing with the first iteration and none of the inequality constraints were enforced. In the above example we indicated that the functions and variables identified in the first iteration are put in the binding inequality set and are enforced in the second iteration. If at the end of the second iteration:
1. an inequality which was identified in the first iteration no longer needs enforcement, or
2. an inequality which was not identified in the first iteration violates its limit,
then the linear programming process must be repeated to update the binding inequality set. The approach recommended is:
1. Identify a starting set of functions to be processed by the linear programming function, consisting of all the functions and variables identified in the previous iteration and all functions and variables not identified in the previous iteration which have violated their limits.
2. Remove the impact of the enforced inequalities from the matrix [W] and the incremental change (ΔZ). For this purpose, the columns/rows associated with the λ's used to enforce inequalities were moved to the end of the matrix [W]. Adjusting (ΔZ) will require one forward and one backward solution.
3. Repeat the process employed for the first iteration to identify the binding inequalities.

Convergence
Each major iteration is the same as a Newton iteration with a known binding inequality set. The convergence characteristics of the Newton technique under these conditions are well known [2]. The convergence of the binding inequality set identification is guaranteed. This is due to the fact that, under worst conditions, all inequalities will be included in the set processed by the linear-programming-like technique; the binding inequalities will be identified and no further violations will be detected.

Infeasibility
Infeasibility results when trying to enforce conflicting constraints. To illustrate this point, we will use a simple example (Figure 3).

Figure 3: two-bus example with limits on V1, V2 and Q2, used to illustrate conflicting constraints.

In this example, it is clear that V1, V2 and Q2 cannot be enforced at the same time. If we try to enforce them, the λ's associated with them will take very large values and the solution will be meaningless. Only two of them can be enforced at the same time, with the third free. A feasible solution is achieved when the free variable is within its limits. In the above example there are three combinations. If none of them results in a free variable within its limits, the problem is infeasible.

In a large network, the infeasibility problem consists of:
1. Identifying the functions involved in a conflict.
2. Differentiating between solution infeasibility and infeasibility that can be resolved by exchanging functions between the constrained and the free set.
3. Identifying the functions to be enforced and those that should be free.

The linear programming approach proposed here provides a very efficient solution to the above problems.

Identification of Conflicting Variables
Available linear programming techniques [7, 8] compute a sensitivity vector (S). Its elements are the sensitivity of the violated function (to be constrained) to changes in the linear programming base variables (λ's and already constrained functions). Infeasibility is detected by checking the sensitivity of the violated variable to its λ (one of the LP base variables). If it is very small, this means a very large λ will be required to enforce this function at its violated limit. The set of conflicting functions (already in the LP basis) is identified by the relatively large sensitivity values associated with them in the vector (S). From the above, it is clear that the magnitude of the elements in the vector (S) can be used to identify infeasibility and to identify the variables involved in the conflict.

Identification of Solution Infeasibility
Solution infeasibility can be easily identified using:
1. The sign of the sensitivity vector elements corresponding to the conflicting set of functions.
2. The violated limit (low/high) for the incoming variable.
3. The enforced limits (low/high) of the conflicting set of variables.

From the above, we can identify the direction of change for each conflicting function in order to remove the violation of the free function. If all of them indicate a movement that would result in them violating their enforced limits, solution infeasibility is identified along with all the functions involved. If at least one of the conflicting functions will move away from the enforced limit, there is a chance of feasibility.

Identification of Enforced Variables
We reach this step if at least one constrained function will move away from its enforced limit. A trial and error process is used if more than one constrained function will move away from its enforced limit. One of these functions is released before the violated function is constrained. If the released function violates its limit, the above process is repeated: it will be enforced again after a different function is released. The maximum number of functions involved in such a conflict was four, with an average number of trials of two. The linear programming iteration is fast, resulting in a very efficient computation process.
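A minimal sketch of the screening logic described in the last three subsections, assuming the LP machinery supplies the sensitivity vector (S) of the incoming violated function with respect to the current base variables. The threshold values and all names are illustrative only, not taken from the paper:

```python
def screen_conflict(S_own, S_base, small=1e-3, large_ratio=10.0):
    """Flag a potential conflict for an incoming violated function.

    S_own  : its sensitivity to its own multiplier (one of the LP base variables)
    S_base : its sensitivities to the functions already enforced in the basis
    Returns (conflict_suspected, indices of enforced functions involved).
    """
    if abs(S_own) > small:
        return False, []          # a moderate multiplier can enforce it alone
    involved = [i for i, s in enumerate(S_base)
                if abs(s) > large_ratio * max(abs(S_own), 1e-12)]
    return True, involved

def solution_infeasible(required_direction, feasible_direction):
    """Restatement of the infeasibility test in the text: if every conflicting
    function would have to move further into its enforced limit (required
    direction differs from the feasible direction for all of them), the case
    is declared solution-infeasible."""
    return all(r != f for r, f in zip(required_direction, feasible_direction))
```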

Results
The 230/500 kV Ontario Hydro network and an equivalent for the 115 kV and neighbouring systems were used to test the technique. The objective function is to minimize the active transmission losses. A single slack bus is used. The main attributes of the test network are:
- Network: 381 buses, 552 branches
- Controls: 62 dispatchable reactive sources and 24 controllable taps

About twenty snapshots, obtained from the Ontario Hydro EMS and representing the above system operating under different conditions (equipment outages, light/heavy load, etc.), were used to test the proposed algorithm. A typical history of iterations for the test runs is shown in Table 1.

Iter. No. | Size of LP Problem | No. of Bind. Inequ. | Max ΔV/Δt | Max P/Q Mismatch* | Total System Losses*
    1     |   37 / 153 / 154   |    17 / 27 / 26     |  0.0448   |      0.1662       |       2.6319
    2     |         0          |         26          |  0.0027   |      0.0372       |       2.5401
    3     |         0          |         26          |  0.0000   |      0.0002       |       2.5439
* 100 MVA base

Table 1

Discussion of the Results
Columns 2 and 3 in Table 1 show the results of the binding inequality identification process. This process was required only in the first iteration. The starting set had 37 control variables that violated their limits. Seventeen were identified as binding inequalities. When all variables and functions were checked for violation, an additional 116 functions were identified and added to the set. The linear programming process identified 27 as binding inequalities. When these 27 binding inequalities were enforced, an additional variable violated its limit and was added to the set. The process identified 26 as binding inequalities. No further violations were identified in this or subsequent iterations when these 26 functions were enforced.

The optimal power flow program converged in three iterations. The computation time was about 30 cpu seconds using a Univac 1180 (about 25% of the equivalent CPU power required in [1]). This computation time is divided as follows between the different tasks:
- 1 × 5 cpu seconds for optimal ordering of W
- 3 × 5 cpu seconds to build, factorize and solve the equation W ΔZ = -g
- 160 × 0.05 cpu seconds for the solves used to build the LP problem
- 2.0 cpu seconds for the linear programming
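A quick arithmetic check that the itemized times account for the reported total of about 30 CPU seconds (the 1x and 3x multipliers are read from the breakdown above):

```python
ordering  = 1 * 5.0     # optimal ordering of W
newton    = 3 * 5.0     # build/factorize/solve W dZ = -g, once per iteration
lp_build  = 160 * 0.05  # sparse solves used to build the LP problem
lp_passes = 2.0         # the linear programming itself
print(ordering + newton + lp_build + lp_passes)   # 30.0
```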

Conclusion
A practical optimal power flow program was developed. The new approach combines the fast convergence characteristics of the Newton technique and the speed and reliability of the linear programming technique to produce a fast and reliable optimal power flow program. All aspects concerning the basic Newton technique (coupled vs decoupled, positive definiteness of the Hessian) were discussed, and any identified problems were addressed and resolved. A fast, simple technique to identify infeasibility and resolve conflicts between constrained variables was proposed.

Acknowledgements
The authors would like to acknowledge the contributions by the members of the task force on the study of Optimal Voltage and Var Control in the Ontario Hydro BES. In particular, the contributions by Messrs. M. El-Kady, E. Vaahedi, B. Bell, V. Wong, and J. Kim are greatly appreciated.

References
1. B.D. Bell, V.F. Carvalho, M.A. El-Kady, H.H. Happ, R.C. Burchett, and D.R. Vierath, "Assessment of Real Time Optimal Voltage Control," 85 SM 489-0, IEEE Summer Meeting, Vancouver, BC, Canada, July 1985.
2. D.I. Sun, B. Ashley, B. Brewer, A. Hughes, and W.F. Tinney, "Optimal Power Flow by Newton Approach," IEEE Transactions on Power Apparatus and Systems, Vol. PAS-103, No. 10, pp. 2864-2880, October 1984.
3. H.W. Dommel and W.F. Tinney, "Optimal Power Flow Solutions," IEEE Transactions on Power Apparatus and Systems, Vol. PAS-87, No. 10, pp. 1866-1876, October 1968.
4. J.W. Carpentier, "Differential Injections Method, a General Method for Secure and Optimal Load Flows," Proc. 8th PICA Conference, Minneapolis, MN, 1973, pp. 255-262.
5. J. Peschon, D.W. Bree, and L.P. Hajdu, "Optimal Solutions Involving System Security," Proc. 7th PICA Conference, Boston, MA, 1971, pp. 210-218.
6. R.C. Burchett, H.H. Happ, D.R. Vierath, and K.A. Wirgau, "Development in Optimal Power Dispatch," Proc. 12th PICA Conference, Philadelphia, PA, 1981.
7. B. Stott and J.L. Marinho, "Linear Programming for Power-System Network Security Applications," IEEE Transactions on Power Apparatus and Systems, Vol. PAS-98, No. 3, pp. 837-848, May/June 1979.
8. B. Stott and E. Hobson, "Power System Security Control Calculations Using Linear Programming, Part I," F 77 517-0, IEEE Summer Meeting, Mexico City, July 1977.
9. W.F. Tinney, V. Brandwajn, and S.M. Chan, "Sparse Vector Methods," IEEE Transactions on Power Apparatus and Systems, Vol. PAS-104, pp. 295-301, February 1985.

Gamal A. Maria graduated in 1968 from Cairo University (Egypt) with a B.Sc. in Electrical Engineering. He continued as a teaching and research assistant at Cairo University for two years. In 1970, he joined Queen's University (Canada), where he obtained an M.Sc. (1971) and a Ph.D. (1974). He continued with Queen's University in a post-doctoral position until 1975.

In 1975, he joined Ontario Hydro. Since then he has been Supervising Engineer in the Power System Operation Division. He was responsible for the development of the state estimation program, operator load flow, activation of reserve simulation, and on-line determination of conservative stability limits.

J.A. Findlay was born in Canada in 1946. He received the B.A.Sc. degree in Engineering Science and the M.Eng. degree in Electrical Engineering from the University of Toronto, Ontario, in 1969 and 1973.

He joined Ontario Hydro in 1969 and is currently with the Power System Operations Division. Since 1981 he has been head of the section responsible for design and development of on-line application programs for use on the Ontario Hydro Energy Management System.

Mr. Findlay is a Registered Professional Engineer in the province of Ontario.

Discussion

Walter L. Snyder, Jr. (Leeds and Northrup Co., North Wales, PA): The authors are to be congratulated on an interesting and well-written paper. A marriage between the Newton approach and linear programming has been anticipated by many people.

My concerns, and hence my questions, deal with the Lagrangian function itself, with respect to differences between the formulation in the paper and in the EPRI work [2]. The Lagrangian terms in the EPRI formulation [2] include λ(P(y) - P0) terms for dispatchable generation within limits, where P(y) is a full second order function and P0 is a free control variable. In the paper under discussion, however, these terms are included only for dispatchable generation at a limit, i.e., when it is treated as a binding constraint. Therefore, the dispatchable generation, P, would never appear as an explicit control variable in the set Y. Is my interpretation correct? In addition, the P(y) quantity is represented in the paper as a first order Taylor series for generation at a limit. Does this approximation introduce unacceptable errors and convergence difficulties in the case of reactive generation?

Constraint violations are treated as infeasible in the paper. An alternate and normally realistic approach, particularly for line flows and voltages, is to treat such violations as penalty terms added onto the Lagrangian function. The EPRI work included a second order function of violations in the Lagrangian equation. In fact, such a set of violations could form an objective function for the LP step. Have the authors considered this alternative formulation, and can they comment on it?

Manuscript received August 4, 1986.

Hassan Ghoudjehbaklou and Suri Vemuri (Harris Corporation, HCCD, Melbourne, FL): The authors are to be congratulated on a well-written paper on the real power loss minimization problem, solved by iterating between a quadratic programming master problem and a linear programming subproblem. Being involved in the development of similar models for on-line implementation of OPF in an EMS environment, the discussers have special interest in this paper. We would appreciate the authors' comments on the following:

1) How is the linear objective function of the subproblem set up? If the objective function of the subproblem is different from that of the master problem, the solution may oscillate.
2) Techniques based on sensitivities of the binding constraints may be used in place of the "trial and error" identification of the enforced constraints. This will reduce the chance of oscillation in moderately constrained OPF problems.
3) The subproblem with binding constraints provides an approximate solution for the control variables. How is this approximate solution included in the master problem?
4) A major problem with the Newton approach of solving the OPF problem is that it tends to move most of the controls from their starting point. In an EMS environment it may not be desirable to shift all the available controls. Instead, one would like to assign certain priorities to controls and move the controls accordingly.

Manuscript received August 5, 1986.

B. Stott and O. Alsac (Power Computer Applications, Mesa, AZ): The main subject of the paper, identification of binding constraints in optimal power flow, is currently of great interest, and the linear programming (LP) technique introduced brings a welcome new idea to it. However, since the technique and its analytical foundation are not clearly explained in the paper, its compliance with basic optimization principles requires critical scrutiny. The objective function of the LP solution is never mentioned: a process with no explicitly formulable objective is not LP. Assuming that instead the identification of binding constraints is only a trial-and-error scheme that finds a feasible solution, there must be some arbitrariness about which constraints it decides to enforce and back off. How can this scheme be always consistent with the lambdas in the nonlinear problem, which are the true test of whether the relevant Kuhn-Tucker optimality conditions are satisfied?

The first ideal OPF characteristic quoted in the paper's introduction is "fast solution time that varies approximately in proportion to network size and is relatively independent of the number of controls or inequality constraints." The only nonlinear programming OPF approach that currently achieves this goal is the Newton method introduced in [2]. Preservation of matrix sparsity throughout is the main factor. One of our main concerns is that for larger OPF problems, the paper's nonsparse binding-constraint identification technique will severely affect overall efficiency. The networks that are now to be solved in modern EMS's frequently have one, two, or more thousand buses and large numbers of control variables. A typical OPF problem will then have hundreds of binding constraints. These will comprise limits on controls such as generator voltages, shunt devices, transformer taps, generator MW's, phase shifts, dc line flows, and sheddable loads, as well as limits on dependent quantities such as load bus voltages, generator MVAR's, line flows, active and reactive reserves, interchanges and group transfers, and so on. The nonsparse binding-constraint identification problems to be built and solved may easily become, say, ten times larger than those described in the paper. Since their computational complexity is somewhere between the square and cube of problem size, the quoted timing for them of 10 CPU seconds then increases to between 1000 and 10,000 s. (For comparison purposes, could the authors please estimate how fast their Univac 1180 is relative to a VAX 11/780?) In addition, the effort of solving such large nonsparse problems will almost double on many computers due to the need for extended precision arithmetic and other error-limiting measures. The incorporation of contingency constraints can significantly increase the nonsparse problem size further.

The formulation covered in the paper is a subset of the loss minimization problem and a subset of the general OPF problem. What problems do the authors anticipate with their Hessian-matrix and binding-constraint identification techniques in handling other objectives, controls, and constraints efficiently, including discreteness? Our own experience has been that the overriding development effort has been to implement many of these other features.

The decoupled case in the paper is neither formulated nor analyzed properly, and in itself has no practical bearing on the choice between coupled and decoupled approaches. Our present experience on a variety of large systems is that well-implemented decoupled Newton OPF has characteristics similar to those of fast decoupled power flow, and can be the method of choice for many power systems, particularly those typically encountered in EMS applications where very poor line X/R ratios are absent. Our decoupled Newton OPF version converges the two-bus problem of Fig. 1 in three iterations. The same version solves problems virtually identical to the 381-bus ones reported in the paper in under 20 s on the VAX 11/780 without any form of system-dependent tuning. A problem of similar sparsity but ten times bigger would take not much more than ten times as long.

Manuscript received August 5, 1986.
