

IEEE Transactions on Power Systems, Vol. 9, No. 2, May 1994

A Direct Nonlinear Predictor-Corrector Primal-Dual Interior Point Algorithm for Optimal Power Flows
Yu-Chi Wu (Student Member) and Atif S. Debs (Senior Member), School of Electrical Engineering
Roy E. Marsten, School of Industrial and Systems Engineering
Georgia Institute of Technology, Atlanta, GA 30332, U.S.A.


Abstract. A new algorithm using the primal-dual interior point method with the predictor-corrector for solving nonlinear optimal power flow (OPF) problems is presented. The formulation and the solution technique are new. Both equalities and inequalities in the OPF are considered and simultaneously solved in a nonlinear manner based on the Karush-Kuhn-Tucker (KKT) conditions. The major computational effort of the algorithm is solving a symmetric system of equations, whose sparsity structure is fixed. Therefore only one optimal ordering and one symbolic factorization are involved. Numerical results of several test systems ranging in size from 9 to 2423 buses are presented, and comparisons are made with the pure primal-dual interior point algorithm. The results show that the predictor-corrector primal-dual interior point algorithm for OPF is computationally more attractive than the pure primal-dual interior point algorithm in terms of speed and iteration count. Keywords: Optimal power flow, Interior point method, Nonlinear programming, Sparsity techniques, Predictor-corrector method.

where
x : the set of state variables, including power generations (Pg and Qg), bus voltages (V and θ), transformer tap ratios (t and φ), etc.,
f(x) : a scalar function representing the operation performance of a power system,
g(x) : power flow equations,
h(x) : functional inequalities of the power system,
hu, hl : the upper and lower bounds of the inequalities,
xu, xl : the upper and lower bounds on the variables x.

1 Introduction
The optimal power flow procedure consists of methods for determining the optimal steady-state operation of an electrical power generation-transmission system, which simultaneously minimizes the value of a chosen objective function and satisfies certain physical and operating constraints. It is a typical nonlinear programming (NLP) problem and can be mathematically expressed as

min f(x)
s.t.  g(x) = 0,
      hl <= h(x) <= hu,
      xl <= x <= xu.   (1)

This paper was presented at the 1993 IEEE Power Industry Computer Application Conference held in Scottsdale, Arizona, May 4-7, 1993.

In the past three decades, various optimization techniques have been proposed to solve the nonlinear OPF problem expressed as (1). They range from improved mathematical techniques to more efficient problem formulations [1]. Several references [1-3] provide overviews of existing solution methods. These available mathematical techniques can be categorized as (a) gradient techniques, (b) successive quadratic programming (SQP) techniques, (c) Kuhn-Tucker nonlinear programming (NLP) techniques, and (d) successive linear programming (SLP) techniques. The gradient techniques [8,21] were the first approaches proposed to solve OPF problems. Despite their mathematical rigor, these approaches exhibit slow convergence, especially zigzagging near the optimum. The SQP approaches [9-11], proposed in the 1980s, use second-order derivatives to improve the convergence rate of the gradient approaches. Their modeling is based on the Quasi-Newton process, in which an approximation of the Hessian matrix of the Lagrangian function is used to overcome the difficulties encountered in QP problems. However, as in all Quasi-Newton methods, the reduced Hessian, built iteratively, is dense, which may make these methods too slow as the number of control variables becomes very large. The NLP approaches, based on the Kuhn-Tucker techniques, attempt to solve OPF by meeting the Kuhn-Tucker optimality conditions directly. Although they were proposed as early as 1962, few are either reliable or fast. Not until Sun et al. [5] proposed a Newton approach combining Kuhn-Tucker conditions, quadratic modeling, and advanced sparsity techniques did people begin to re-evaluate this family of methods. The attraction of Sun's algorithm is that with special techniques the sparsity structure of the system

0885-8950/94/$04.00 © 1993 IEEE


is not affected by the penalty function used to enforce the binding constraints, and the problem can be decomposed into an active-power subproblem and a reactive-power subproblem. The challenge in this algorithm development is to identify the binding inequalities, so some heuristics are involved. The SLP approaches [4,6,7] are based on the linearization of the OPF constraints and the objective function. These techniques are mostly applied to active-power rescheduling problems, in which the controllable quantities are to be scheduled to comply with transmission-system line-loading limits, and perhaps other transfer restrictions. The central modeling issue is the linear P-θ representation of the network equations. An incremental linear model is adopted, and the dual simplex algorithm is used. The reasons for choosing the dual simplex algorithm are that the dual method has a simpler and less time-consuming initialization, and that the storage required is less than that of the primal problem. Because LP approaches provide solutions that jump from one vertex to another, during the OPF procedure a successive refinement of the cost curves into smaller segments is needed to overcome the discontinuities. Recently, owing to the efficiency of the newly developed interior point methods for solving large-scale linear programs (LP), these methods have become candidates for many applications. Although the earliest literature on interior point methods appeared in the early 1950's, and they were formally studied in detail by Fiacco and McCormick [19], the big breakthrough of interior point methods came in 1984 when Karmarkar [18] announced his polynomial-time algorithm for LP.
The theoretical foundation for interior point methods consists of three crucial building blocks [14]: Newton's method for solving nonlinear equations and hence for unconstrained optimization, Lagrange's method for optimization with equalities, and Fiacco and McCormick's barrier method for optimization with inequalities. Among the many variants of interior point methods, the primal-dual interior point method [14-19,22] has proved to be the most elegant theoretically and the most successful computationally for LP. In [20], a different formulation based on the primal-dual interior point method is proposed for power system state estimation problems. In this paper we present a new algorithm using the primal-dual interior point method and the predictor-corrector to directly solve OPF problems in a nonlinear manner based on the KKT conditions. Numerical comparison results are also provided to demonstrate the superiority of the new algorithm. In the next section, the pure primal-dual interior point algorithm (PDIPA) for OPF and the predictor-corrector primal-dual interior point algorithm (PCPDIPA) for OPF are described in detail. The implementation and computational issues are discussed in section 3. Section 4

presents numerical comparison results using the new algorithm and PDIPA. Conclusions and future research are given in section 5.

2 Direct Nonlinear Interior Point Algorithms for OPF


Although several papers [14-17,22] have been published in the past few years addressing the formulation and performance of the primal-dual interior point method for LP problems, the use of the primal-dual interior point method to directly solve large-scale nonlinear systems is still in the development stage. In this section, two different formulations based on the primal-dual interior point method for solving the nonlinear OPF problem are derived and discussed in detail: the pure primal-dual interior point algorithm for OPF, and the predictor-corrector primal-dual interior point algorithm for OPF. Consider the general form of the OPF problem (1), and the transformed OPF problem (2) obtained by introducing the slack variables sh, ssh, and sx:

min f(x)
s.t.  g(x) = 0,                              (2.a)
      h(x) + sh = hu,                        (2.b)
      sh + ssh = hu - hl,                    (2.c)
      x + sx = xu,                           (2.d)
      sh >= 0, ssh >= 0, sx >= 0, x - xl >= 0.   (2.e)

The nonnegativity conditions on (x - xl) and the slack variables in (2) can be treated by appending logarithmic barrier functions to the objective,

fμ(x) = f(x) - μ [ Σ_{i=1..n} ln(x_i - xl_i) + Σ_{i=1..n} ln sx_i + Σ_{j=1..m} ln sh_j + Σ_{j=1..m} ln ssh_j ],   (3)

where n and m are the numbers of state variables x and inequality constraints, respectively, and the barrier parameter μ is a positive number that is forced to decrease towards zero iteratively. Based on Fiacco and McCormick's theorem [19], as μ tends towards zero, the solution of the subproblem, x(μ), approaches x*, the solution of (2). The resultant Lagrangian function of the subproblem with barrier functions is therefore

Lμ = fμ(x) - y^T g(x) + yh^T [h(x) + sh - hu] + ysh^T [sh + ssh - hu + hl] + yx^T [x + sx - xu],   (4)

where y, yh, ysh and yx are the Lagrange multipliers (dual variables) for constraints (2.a), (2.b), (2.c), and (2.d), respectively. The stationary point of (4) is thus the optimal

878

solution of the subproblem, which satisfies the Karush-Kuhn-Tucker (KKT) first-order conditions:

∇x Lμ   = ∇f(x) - ∇g(x)^T y + ∇h(x)^T yh + yx - μ(X - Xl)^{-1} e = 0   (5.a)
∇sh Lμ  = yh + ysh - μ Sh^{-1} e = 0    (5.b)
∇ssh Lμ = ysh - μ Ssh^{-1} e = 0        (5.c)
∇sx Lμ  = yx - μ Sx^{-1} e = 0          (5.d)
∇y Lμ   = -g(x) = 0                     (5.e)
∇yh Lμ  = h(x) + sh - hu = 0            (5.f)
∇yx Lμ  = x + sx - xu = 0               (5.g)
∇ysh Lμ = sh + ssh - hu + hl = 0        (5.h)
where e = [1, ..., 1]^T, X = diag(x1, ..., xn), Sh = diag(sh1, ..., shm), Ssh = diag(ssh1, ..., sshm), and Sx = diag(sx1, ..., sxn). These nonlinear equations are then solved by Newton's method. The new approximation to the minimizer, (x, sx, sh, ssh, z, y, yx, yh, ysh), is determined by (6),
x^{k+1}   = x^k + α Δx
sx^{k+1}  = sx^k + α Δsx
sh^{k+1}  = sh^k + α Δsh
ssh^{k+1} = ssh^k + α Δssh
z^{k+1}   = z^k + α Δz
y^{k+1}   = y^k + α Δy
yx^{k+1}  = yx^k + α Δyx
yh^{k+1}  = yh^k + α Δyh
ysh^{k+1} = ysh^k + α Δysh   (6)

where the scalar step size α is chosen only to preserve the nonnegativity conditions on (x - xl), sx, sh, ssh, z, yx, ysh, and (yh + ysh). Instead of taking several Newton steps to converge to the optimal point of the subproblem with fixed μ, μ is reduced at every iteration and the problem is relinearized. This is the main feature distinguishing the presented algorithm from conventional SLP/SQP-based algorithms. Depending on the approximations used in Newton's method, two variants of the primal-dual interior point algorithm for OPF can be derived. They are discussed in the following subsections.
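The update (6) with the nonnegativity-preserving step size can be sketched on a miniature analogue of (2): a bound-constrained quadratic program min 0.5 x'Qx + c'x, x >= 0, keeping only the primal-dual pair (x, z). This is an illustrative sketch, not the paper's implementation; the point is the one-Newton-step-per-μ structure and the ratio-test damping described in the text.

```python
import numpy as np

def step_length(vs, dvs, factor=0.9995):
    """Ratio test: largest alpha in (0, 1] keeping every v + alpha*dv > 0."""
    alpha = 1.0
    for v, dv in zip(vs, dvs):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, factor * np.min(-v[neg] / dv[neg]))
    return alpha

def pdipa_qp(Q, c, tol=1e-8, max_iter=60):
    """Minimize 0.5*x'Qx + c'x subject to x >= 0 (toy pure primal-dual IP).

    KKT system: Qx + c - z = 0, XZe = mu*e, x > 0, z > 0.  One Newton step
    per iteration; mu is reduced every iteration rather than iterating to
    convergence for a fixed mu.
    """
    n = len(c)
    x, z = np.ones(n), np.ones(n)
    for _ in range(max_iter):
        gap = x @ z
        if gap < tol:
            break
        mu = 0.1 * gap / n                    # shrink the barrier parameter
        r_dual = Q @ x + c - z                # dual infeasibility
        r_cent = x * z - mu                   # perturbed complementarity
        # Eliminate dz: (Q + diag(z/x)) dx = -r_dual - r_cent/x
        dx = np.linalg.solve(Q + np.diag(z / x), -r_dual - r_cent / x)
        dz = (-r_cent - z * dx) / x
        a = step_length([x, z], [dx, dz])
        x, z = x + a * dx, z + a * dz
    return x

# Example: Q = I, c = (-1, 1); the minimizer with x >= 0 is x* = (1, 0).
x_star = pdipa_qp(np.eye(2), np.array([-1.0, 1.0]))
```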
The Pure Primal-Dual Interior Point Algorithm (PDIPA)

By proper transformation and adding one extra equation

z = μ (X - Xl)^{-1} e,   (7)

where H is the Hessian matrix of the Lagrangian function. Because the barrier parameter μ appears only in the right-hand side of (9), the factorization of the matrix on the left-hand side of (9) is not affected by μ. This feature allows the predictor-corrector method to be easily incorporated into the algorithm, yielding the predictor-corrector primal-dual interior point algorithm discussed in the next subsection. The outline of the PDIPA is as follows:
Step 0: Initialization. Choose a proper starting point such that the nonnegativity conditions are satisfied.
Step 1: Compute the barrier parameter, μ.
Step 2: Solve the system of equations (9).
Step 3: Determine the step size, α, and update the solution.
Step 4: Convergence test. If the solution meets the convergence criterion, the optimal solution is found; otherwise go back to step 1.

The Predictor-Corrector Primal-Dual Interior Point Algorithm for OPF (PCPDIPA)

In PDIPA, only the first-order linear terms are modeled. However, rather than applying Newton's method to the KKT conditions to generate correction terms to the current estimate, we can substitute the new point into the KKT conditions directly, yielding

the nonlinear equations (5.a)-(5.d) and (7) can be re-written as

∇f(x) - ∇g(x)^T y + ∇h(x)^T yh + yx - z = 0   (8.a)
Sh (Yh + Ysh) e = μe   (8.b)
Ssh Ysh e = μe         (8.c)
Sx Yx e = μe           (8.d)
(X - Xl) Z e = μe      (8.e)

The affine directions Δx~, Δs~x, Δs~h, Δs~sh, Δz~, Δy~, Δy~x, Δy~h, Δy~sh are then used to approximate the nonlinear terms in the right-hand side of (11), and to dynamically estimate μ. Once the estimates of the nonlinear terms in the right-hand side of (11) and μ are determined, the actual new search direction (Δx, Δsx, Δsh, Δssh, Δz, Δy, Δyx, Δyh, Δysh) is obtained from (11). The outline of this algorithm can be stated as follows:
Step 0: Initialization. Choose a proper starting point such that the nonnegativity conditions are satisfied.
Step 1: Solve the system of equations (12).
Step 2: Compute the barrier parameter, μ, and the estimated nonlinear terms.
Step 3: Solve the system of equations (11).
Step 4: Determine the step size, α, and update the solution.
Step 5: Convergence test. If the solution meets the convergence criterion, the optimal solution is found; otherwise go back to step 1.
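On the same toy bound-constrained quadratic program as before, the predictor-corrector variant first solves for the affine directions, uses them to estimate μ and the second-order terms, then re-solves with the same factored matrix. This is a sketch under the same assumptions; a Mehrotra-style μ estimate stands in for the paper's (16), which is illegible in this scan.

```python
import numpy as np

def _step(v, dv, factor=0.9995):
    neg = dv < 0
    return 1.0 if not neg.any() else min(1.0, factor * np.min(-v[neg] / dv[neg]))

def pcpdipa_qp(Q, c, tol=1e-8, max_iter=40):
    """Minimize 0.5*x'Qx + c'x s.t. x >= 0, predictor-corrector style."""
    n = len(c)
    x, z = np.ones(n), np.ones(n)
    for _ in range(max_iter):
        gap = x @ z
        if gap < tol:
            break
        r_dual = Q @ x + c - z
        M = Q + np.diag(z / x)                  # one matrix serves both solves
        # Predictor: affine directions (mu = 0, no second-order terms).
        dx_a = np.linalg.solve(M, -r_dual - z)  # here r_cent/x = (x*z)/x = z
        dz_a = (-x * z - z * dx_a) / x
        gap_aff = (x + _step(x, dx_a) * dx_a) @ (z + _step(z, dz_a) * dz_a)
        mu = (gap_aff / gap) ** 2 * (gap_aff / n)   # Mehrotra-style estimate
        # Corrector: add mu and the second-order term dx_a*dz_a to the rhs;
        # only one extra solve with the already-factored M is needed.
        r_cent = x * z + dx_a * dz_a - mu
        dx = np.linalg.solve(M, -r_dual - r_cent / x)
        dz = (-r_cent - z * dx) / x
        a = min(_step(x, dx), _step(z, dz))
        x, z = x + a * dx, z + a * dz
    return x

x_star = pcpdipa_qp(np.eye(2), np.array([-1.0, 1.0]))
```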

where ΔX, ΔSx, ΔYx, ΔYh, ΔYsh, ΔSh, ΔSsh and ΔZ are diagonal matrices having the elements of Δx, Δsx, Δyx, Δyh, Δysh, Δsh, Δssh, and Δz, respectively. The following symmetric system is obtained:
[The symmetric coefficient matrix of (12) is illegible in the scan; its first row reads -Z^{-1}(X - Xl), 0, ..., -I, and it is identical to the coefficient matrix of (9) and (11).]

3. Implementation and Computational Issues

In this section, several implementation issues associated with the algorithms presented in section 2 are discussed in detail: the adjustment of the barrier parameter and the determination of the Newton step size, the stopping criterion, accuracy, the choice of the starting point, and sparse matrix techniques.

The Adjustment of the Barrier Parameter and the Determination of the Newton Step Size

The major difference between (9) and (11) is the presence of the nonlinear terms ΔXΔZe, ΔSh(ΔYh + ΔYsh)e, ΔSxΔYxe, and ΔSshΔYshe in the right-hand side of (11). These nonlinear terms cannot be determined in advance, so (11) can only be solved approximately. To estimate them, Mehrotra [15] suggests first solving the defining equations for the primal-dual affine directions.

Based on the Fiacco and McCormick theorem, the barrier parameter μ must approach zero as the iterations progress. The primal-dual method itself suggests how μ should be reduced from step to step. For linear programming problems [14-17,22], the value of μ is made proportional to the duality gap, the difference between the primal and dual objective functions. The duality gap of the nonlinear problem (2), defined as

gap = y^T g(x) + yh^T [hu - h(x)] + ysh^T (hu - hl) + yx^T (xu - x) + z^T (x - xl),   (13)

is a positive quantity if the primal and dual variables meet all the primal and dual constraints, and is zero at the optimum point. However, because the primal and dual iterates need not be feasible, the value of gap may not be positive. We therefore use the complementarity gap to approximate the duality gap,

gap* = (yh + ysh)^T sh + ysh^T ssh + yx^T sx + z^T (x - xl),   (14)

where ssh = hu - hl - sh, and by following Lustig [16] we choose


a rule (15) for the pure primal-dual interior point algorithm and a corresponding rule (16) for the predictor-corrector primal-dual interior point algorithm; the displayed expressions for (15) and (16) are illegible in the scan.

The Degree of Accuracy

When the solution satisfies conditions (21) and (22), gap* = gap, and the degree of accuracy of the final solution is of the order of ε1. In practice, with ε1 = 10^-6, the solution is accurate at least to its 6th digit.
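The complementarity gap (14) is cheap to evaluate from the current iterate. Below is a direct transcription (argument names are assumptions, not the paper's identifiers), together with a simple proportional-to-gap barrier rule standing in for the illegible (15); the proportionality constant is an assumption.

```python
import numpy as np

def complementarity_gap(x, x_l, z, s_h, s_sh, s_x, y_h, y_sh, y_x):
    """Equation (14): surrogate for the duality gap when iterates are infeasible."""
    return (y_h + y_sh) @ s_h + y_sh @ s_sh + y_x @ s_x + z @ (x - x_l)

def barrier_parameter(gap_star, n_total, beta=0.1):
    """Proportional-to-gap rule (an assumption; the paper's (15) is illegible)."""
    return beta * gap_star / n_total
```

At the optimum every product in (14) vanishes, so the gap, and with it μ, is driven to zero.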

The Newton step size, α, is determined as

α = min{0.9995 α*, 1.0},   (19)

where

α* = min_j { -(x - xl)_j/Δx_j, -(sx)_j/(Δsx)_j, -(ssh)_j/(Δssh)_j, -(sh)_j/(Δsh)_j, -z_j/Δz_j, -(yx)_j/(Δyx)_j, -(ysh)_j/(Δysh)_j, -(ysh + yh)_j/(Δysh + Δyh)_j },   (20)

taken over those components with Δx_j < 0, (Δsx)_j < 0, (Δssh)_j < 0, (Δsh)_j < 0, Δz_j < 0, (Δyx)_j < 0, (Δysh)_j < 0, and (Δyh + Δysh)_j < 0. The constant 0.9995 is chosen to prevent the nonnegative variables from becoming zero. Because of this feature, the logarithmic barrier functions are continuous and differentiable; moreover, during the algorithm there is no need to evaluate any barrier function ln(·).

The Choice of the Starting Point

Although a strictly feasible initial point is not mandatory for the algorithms, the initial point still needs to meet the nonnegativity conditions (2.e). In our implementation, the initial state vector, x, can be chosen as a flat start or as the midpoint between the upper and lower bounds, while the initial values of the slack variables can be chosen arbitrarily within their bounds. To determine the initial values of the dual variables we follow Mehrotra's method [13]. We define a threshold ξ from the norm of ∇f [equation (24), illegible in the scan]. Then for each j = 1, ..., n,

z_j = ∇f_j + ξ, (yx)_j = ξ,          if ∇f_j > ξ,
z_j = -∇f_j,   (yx)_j = -2∇f_j,      if ∇f_j < -ξ,
z_j = ∇f_j + ξ, (yx)_j = ξ,          if 0 <= ∇f_j < ξ,
z_j = ξ,       (yx)_j = ∇f_j + ξ,    if -ξ < ∇f_j < 0,   (25)

and we set y0 = e; the initial values of yh and ysh, given after (25), are illegible in the scan.
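The initialization (25) can be transcribed as a case split on ∇f. The case boundaries below follow the legible fragments of (25) and should be read as a reconstruction, not the verbatim rule; whatever the exact boundaries, the point is that every z_j and (yx)_j comes out strictly positive, as the nonnegativity conditions require.

```python
import numpy as np

def init_duals(grad_f, xi):
    """Mehrotra-style starting values for z and y_x from the objective gradient.

    Cases reconstructed from equation (25); xi > 0 is the threshold computed
    from the norm of grad_f in (24).
    """
    z = np.empty_like(grad_f)
    yx = np.empty_like(grad_f)
    for j, g in enumerate(grad_f):
        if g > xi:
            z[j], yx[j] = g + xi, xi
        elif g < -xi:
            z[j], yx[j] = -g, -2.0 * g
        elif g >= 0.0:                 # 0 <= g <= xi
            z[j], yx[j] = g + xi, xi
        else:                          # -xi < g < 0
            z[j], yx[j] = xi, g + xi
    return z, yx

z0, yx0 = init_duals(np.array([2.0, -2.0, 0.5, -0.5]), xi=1.0)
# both vectors come out strictly positive
```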

The Stopping Criterion

The iteration procedure is terminated when the relative complementary gap and the mismatches of the KKT conditions are sufficiently small, i.e., for some user-defined parameters ε1 and ε2,

gap* / (1 + |dobj|) <= ε1,   (21)

|the largest mismatch of the KKT conditions| <= ε2,   (22)

where dobj is the dual objective value [defined in (23), illegible in the scan].

Sparsity Techniques

In the above algorithms, the major computational effort is solving the symmetric system of equations (9) or (11). Since these are sparse systems, sparse-matrix techniques are used to save memory and to improve speed. Several nice features are inherent in (9) and (11). First, similar to Newton's OPF algorithm, the sparse structure of the submatrix in (26) [the displayed submatrix is illegible in the scan] can always be kept the same as that of the network incidence matrix by rearranging the submatrix into a 4x4 block structure. Second, the nonzeros in ∇h for line flow constraints do not create new fill-ins in (26). Therefore, the optimal ordering and symbolic factorization need to be performed only once, based on the 4x4 block structure. Recently, many advanced sparse techniques have been successfully applied to large-scale symmetric systems. In our algorithms we implement Rothberg's [12] factorization method, designed for high-performance


workstations. By exploiting the memory hierarchy, it provides better performance than conventional ones.
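The "one ordering, one symbolic factorization" observation rests on the fact that only the numerical values of (9)/(11) change between iterations, never the sparsity pattern. A small sketch of that idea using SciPy (assumed available; SciPy's splu repeats its own symbolic analysis on every call, so this only illustrates the fixed-pattern idea rather than reproducing Rothberg's method):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

# Build the sparsity pattern once (a small symmetric stand-in for system (9)).
rows = np.array([0, 1, 0, 1, 2, 1, 2])
cols = np.array([0, 0, 1, 1, 1, 2, 2])
A = sparse.csc_matrix((np.ones(7), (rows, cols)), shape=(3, 3))

b = np.ones(3)
for k in range(3):            # stand-in for interior point iterations
    # Refresh only the stored values (column-major CSC order); the index
    # arrays, and hence the pattern seen by the ordering step, never change.
    A.data[:] = [4.0 + k, 1.0, 1.0, 3.0 + k, 1.0, 1.0, 2.0 + k]
    x = splu(A).solve(b)      # a production code would reuse the symbolic step
```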
4. Test Results

In our OPF model, all voltage magnitudes and all voltage angles except the swing-bus voltage angle are modeled as bounded variables. Both real and reactive power flow equations are included. The optimal ordering scheme used in our implementation is the multiple minimum external degree algorithm. The algorithms were implemented in FORTRAN and compiled with the -O3 option. Several systems ranging in size from 9 to 2423 buses are tested to evaluate the performance of the algorithm. Two basic types of OPF problems are tested: cost minimization and loss minimization. The cost minimization OPF problem minimizes the total cost of power generation, while the loss minimization OPF minimizes the transmission losses. Quadratic cost curves are used in the cost minimization problems. Also, for the 9-bus to 244-bus test systems two sets of constraints are considered: one consisting of only power flow equations and simple two-sided bounds on variables, and one consisting of power flow equations, line flow constraints, and simple two-sided bounds on variables. All testing is done on the Sun Sparc 1 workstation. Table 1 lists the specifications of the test cases.

Table 1: Specifications of Test Cases
[Only two rows are legible in the scan: 9-bus: 9 buses, 9 lines, 3 controls, 3 cost curves; 2423-bus: 2423 buses, 3069 lines.]

There are in total 24 cases for testing the performance of PCPDIPA. Among these test cases, two are based on a 2423-bus system. Each case is then solved by both PCPDIPA and PDIPA for comparison purposes. The iterations are terminated when the relative complementary gap is less than ε1 = 10^-4, the absolute value of the largest power mismatch (in p.u.) is less than ε2 = 10^-3, and the absolute value of the largest dual-constraint mismatch is less than ε3 = 10^-4. The iteration counts required to solve the 9-bus to 244-bus systems are reported in Table 2, and Table 3 reports the CPU times of all OPF runs; the numbers inside the parentheses are, respectively, the iteration counts and CPU times of PDIPA. Table 4 lists the numbers of binding constraints for the test cases with line flow constraints.

Table 2: Number of Iterations, PCPDIPA vs. PDIPA
Case      without line flow constraints    with line flow constraints
          min cost      min losses         min cost       min losses
30-bus    9 (11)        8 (10)             12 (16)        8 (12)
39-bus    9 (15)        10 (12)            10 (16)        10 (14)
118-bus   10 (17)       10 (16)            13 (22)        11 (17)
244-bus   12 (23)       13 (24)            13 (23)        13 (23)
[The 9-bus row is illegible in the scan.]

Table 3: CPU Times, PCPDIPA vs. PDIPA (seconds)
Case      without line flow constraints    with line flow constraints
          min cost      min losses         min cost       min losses
9-bus     0.53 (0.70)   0.53 (0.64)        0.60 (0.73)    0.65 (0.75)
30-bus    1.33 (1.43)   1.29 (1.40)        2.20 (2.46)    1.65 (?)
39-bus    1.68 (2.24)   1.83 (1.89)        2.23 (2.77)    2.16 (?)
118-bus   5.05 (6.85)   5.23 (6.86)        7.27 (9.68)    6.38 (?)
244-bus   16.82 (24.80) 17.74 (27.26)      17.87 (26.29)  19.08 (27.9?)
[? marks entries or digits illegible in the scan.]

Table 4: Numbers of Binding Constraints for Cases with Line Flow Constraints
[The table is garbled in the scan; legible entries include 46 binding constraints for one case, and 221 (min cost) and 221 (min loss) for the 2423-bus case.]

In order to see how the PCPDIPA responds to small changes in the constraints, two runs were performed on the 244-bus system, one for cost minimization and the other for loss minimization. The changes made in the first run are a 1% decrease and increase, respectively, on the voltage upper and lower bounds and a 5 MW decrease and increase, respectively, on the line flow upper and lower bounds. In the second run, 2% changes are made on the voltage limits and 10 MW changes on the line flow bounds. The required iterations, CPU times, and numbers of binding constraints are reported in Table 5.

Table 5: Iterations, CPU Times, and Numbers of Binding Constraints for the 244-bus System with Changes on Bound Limits
[Legible entries: iterations: 13, CPU: 17.39 s; iterations: 14, CPU: 19.31 s. The assignment to min cost / min loss and the binding-constraint counts are unclear in the scan.]


The results of Table 5 show that the PCPDIPA solves the changed cases as effectively as it solves the original cases. Table 6 lists the convergence process of the loss minimization OPF run for the 244-bus system with line flow constraints. After 13 iterations, the duality gap is less than 10^-4, the relative duality gap is less than the prescribed tolerance, and the norms of the primal and dual constraint mismatches are less than 10^-5.

Table 6: Convergence Process of Loss Minimization for the 244-bus System
[The numeric entries are illegible in the scan; the objective settles at 0.442E+01.]

The results in Tables 2, 3, and 7 show that the PCPDIPA provides considerably better performance in solving nonlinear OPF problems than PDIPA. This is due to the fact that in PCPDIPA the second-order terms are considered with only a little additional computational effort (one additional forward/backward substitution at every iteration).

5. Conclusions and Future Research


A new algorithm and a new formulation using the primal-dual interior point method with the predictor-corrector for nonlinear optimal power flow problems have been presented in detail. The computational efficiency is due to the sparsity of the Hessian of the Lagrangian with respect to both the primal and dual variables. The functional inequality constraints are treated directly in this algorithm. Only one optimal ordering and one symbolic factorization are needed, and one numerical factorization and two forward/backward substitutions are involved at every iteration. Since the second-order terms are considered, the convergence of PCPDIPA is faster than that of PDIPA. The computational experience reported here shows that the algorithm is attractive. Further work on the current algorithm falls under two categories. First, there is the need to address such key refinements as infeasibility detection, hot starting, decomposition, and constraint relaxation, all of which are related to the nature of this approach as an inherently barrier-function optimization approach. Second, the algorithm will have to be extended to the challenging problems of security-constrained OPF, OPF-based external equivalencing, discrete controls, hydro-thermal coordination, etc. Due to the structure of system (11), it is possible to embed decoupling schemes into the presented algorithm, as in the conventional power flow or Newton's OPF. Moreover, partial refactorization or the compensation method is also applicable to improve performance. Since the primal-dual interior point method converges quickly in the early iterations and slowly near the optimal solution, the changes in the values of the variables in the later stages are small. Therefore, partial refactorization or the compensation method can be applied in the later iterations to reduce the computational effort.


Two other cases, based on a 2423-bus system, were tested for performance comparison between the PCPDIPA and PDIPA. The iterations and CPU times are reported in Table 7.

Table 7: Iterations & CPU Times for the 2423-bus System
             min cost              min loss
             PCPDIPA    PDIPA     PCPDIPA    PDIPA
Iterations   26         43        18         ?
CPU Time     279.85 s   370.15 s  221.87 s   ?65.31 s
[? marks entries or digits illegible in the scan.]

References

[1] B.H. Chowdhury and S. Rahman, "A Review of Recent Advances in Economic Dispatch," IEEE Trans. on Power Systems, Vol. 5, No. 4, pp. 1248-1259, Nov. 1990.
[2] J. Carpentier, "Towards a Secure and Optimal Automatic Operation of Power Systems," pp. 2-37, PICA 1987.
[3] B. Stott, et al., "Security Analysis and Optimization," Proc. of the IEEE, Vol. 75, No. 12, pp. 1623-1644, Dec. 1987.
[4] B. Stott, et al., "Review of Linear Programming Applied to Power System Rescheduling," pp. 142-154, PICA 1979.


[5] D.I. Sun, et al., "Optimal Power Flow by Newton Approach," IEEE Trans. on Power Apparatus and Systems, Vol. PAS-103, No. 10, pp. 2864-2880, Oct. 1984.
[6] B. Stott and J.L. Marinho, "Linear Programming for Power-System Network Security Applications," IEEE Trans. on Power Apparatus and Systems, Vol. PAS-98, No. 3, pp. 837-848, May/June 1979.
[7] O. Alsac, et al., "Further Developments in LP-Based Optimal Power Flow," IEEE PES Winter Meeting, Paper 90 WM 011-7 PWRS, Atlanta, Jan. 1990.
[8] H.W. Dommel and W.F. Tinney, "Optimal Power Flow Solutions," IEEE Trans. on Power Apparatus and Systems, Vol. PAS-87, No. 10, pp. 1866-1876, Oct. 1968.
[9] T.C. Giras and S.N. Talukdar, "Quasi-Newton Method for Optimal Power Flows," International Journal of Electrical Power & Energy Systems, Vol. 3, No. 2, pp. 59-64, Apr. 1981.
[10] J. Nanda, et al., "New Optimal Power-Dispatch Algorithm Using Fletcher's Quadratic Programming Method," IEE Proc., Vol. 136, Pt. C, No. 3, pp. 153-161, May 1989.
[11] R.C. Burchett, et al., "Quadratically Convergent Optimal Power Flow," IEEE Trans. on Power Apparatus and Systems, Vol. PAS-103, No. 11, pp. 3267-3275, Nov. 1984.
[12] E. Rothberg and A. Gupta, "Efficient Sparse Matrix Factorization on High-Performance Workstations: Exploiting the Memory Hierarchy," ACM Trans. on Mathematical Software, Vol. 17, No. 3, pp. 303-314, Sept. 1991.
[13] S. Mehrotra, "On Finding a Vertex Solution Using Interior Point Methods," Technical Report 89-22, Dept. of Industrial Engineering and Management Sciences, Northwestern University, Evanston, IL, 1990.
[14] R. Marsten, et al., "Interior Point Methods for Linear Programming: Just Call Newton, Lagrange, and Fiacco and McCormick!," Interfaces, Vol. 20, No. 4, pp. 105-116, July-Aug. 1990.
[15] S. Mehrotra, "On the Implementation of a (Primal-Dual) Interior Point Method," Technical Report 90-03, Dept. of Industrial Engineering and Management Sciences, Northwestern University, March 1990.
[16] I.J. Lustig, et al., "Computational Experience with a Primal-Dual Interior Point Method for Linear Programming," Technical Report SOR 89-17, Oct. 1989.
[17] R.D.C. Monteiro and I. Adler, "Interior Path Following Primal-Dual Algorithms. Part I: Linear Programming; Part II: Convex Quadratic Programming," Mathematical Programming, Vol. 44, pp. 27-66, 1989.
[18] N. Karmarkar, "A New Polynomial-Time Algorithm for Linear Programming," Combinatorica, Vol. 4, No. 4, pp. 373-395, 1984.
[19] A.V. Fiacco and G.P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques, John Wiley & Sons, New York, 1968.
[20] K.A. Clements, et al., "Treatment of Inequality Constraints in Power System State Estimation," IEEE PES Winter Meeting, Paper 92 WM 111-5 PWRS, 1992.
[21] J. Carpentier, "Contribution to the Economic Dispatch Problem" (in French), Bull. Soc. Franc. Elect., Vol. 8, pp. 431-447, Aug. 1962.
[22] I.J. Lustig, et al., "On Implementing Mehrotra's Predictor-Corrector Interior Point Method for Linear Programming," RUTCOR Research Report RRR 26-90, Rutgers University, May 1990.

Biographies
Yu-Chi Wu (Student Member, IEEE) was born in Taiwan, the Republic of China. He received the Electrical Engineering Diploma from the National Kaohsiung Institute of Technology, Taiwan, in 1984. Since 1988, he has been a graduate student in the School of Electrical Engineering, Georgia Institute of Technology, working towards the Ph.D. degree. He was a research assistant at National Sun Yat-Sen University, Taiwan, from 1986 to 1988, and worked for Pacific Gas and Electric Company during the summers of 1989, 1990, and 1991, and for Energy Management Associates, Inc. during the summer of 1992. Currently, he is a research assistant at the Georgia Institute of Technology. His primary research interests are in the areas of optimization and planning of electrical power systems, including optimal power flow, security analysis, unit commitment, maintenance scheduling, production simulation, and load forecasting. Atif S. Debs (Senior Member, IEEE) has been a professor of Electrical Engineering at the Georgia Institute of Technology, Atlanta, GA, since 1972, where he became the co-founder of the Georgia Tech Electric Power Program. He obtained his S.B., S.M., and Ph.D. degrees in 1964, 1965, and 1969, respectively, at MIT, Cambridge, Massachusetts. His fields of interest are in the areas of control center applications and power system planning, including state estimation, static and dynamic security assessment, production simulation, artificial neural network applications, and others. He is the author of a 1988 textbook entitled Modern Power System Control and Operation: A Study of Real-Time Power Utility Control Centers, as well as many technical papers in the above fields. He is an active officer in both the Power Engineering and Control Societies.

Roy E. Marsten received his Ph.D. in Operations Research from UCLA in 1971. He taught at Northwestern University, MIT, and the University of Arizona before moving to the Georgia Institute of Technology. His expertise is in computational optimization. He shared, with David Shanno and Irvin Lustig, the 1991 Beale/Orchard-Hays Prize for Excellence in Computational Mathematical Programming, awarded by the Mathematical Programming Society.
