
Power flow study

From Wikipedia, the free encyclopedia

In power engineering, the power flow study (also known as load-flow study) is an important tool involving numerical analysis applied to a power system. Unlike traditional circuit analysis, a power flow study usually uses simplified notation such as a one-line diagram and per-unit system, and focuses on various forms of AC power (i.e., reactive, real, and apparent) rather than voltage and current. It analyzes the power system in normal steady-state operation.

There exist a number of software implementations of power flow studies. In addition to a power flow study, sometimes called the base case, many software implementations perform other types of analysis, such as short-circuit fault analysis and economic analysis. In particular, some programs use linear programming to find the optimal power flow, the conditions which give the lowest cost per kilowatt-hour delivered.

Power flow or load-flow studies are important for planning future expansion of power systems as well as in determining the best operation of existing systems. The principal information obtained from the power flow study is the magnitude and phase angle of the voltage at each bus, and the real and reactive power flowing in each line.

Commercial power systems are usually too large to allow for hand solution of the power flow. Special-purpose network analyzers were built between 1929 and the early 1960s to provide laboratory models of power systems; large-scale digital computers replaced the analog methods.

Contents

1 Power flow problem formulation
2 Newton–Raphson solution method
3 Power flow methods
4 References

Power flow problem formulation


The goal of a power flow study is to obtain complete voltage angle and magnitude information for each bus in a power system for specified load and generator real power and voltage conditions.[1] Once this information is known, real and reactive power flow on each branch as well as generator reactive power output can be analytically determined. Due to the nonlinear nature of this problem, numerical methods are employed to obtain a solution that is within an acceptable tolerance.

The solution to the power flow problem begins with identifying the known and unknown variables in the system. The known and unknown variables are dependent on the type of bus. A bus without any generators connected to it is called a Load Bus. With one exception, a bus with at least one generator connected to it is called a Generator Bus. The exception is one arbitrarily selected bus that has a generator. This bus is referred to as the Slack Bus.

In the power flow problem, it is assumed that the real power PD and reactive power QD at each Load Bus are known. For this reason, Load Buses are also known as PQ Buses. For Generator Buses, it is assumed that the real power generated PG and the voltage magnitude |V| are known. For the Slack Bus, it is assumed that the voltage magnitude |V| and voltage phase are known. Therefore, for each Load Bus both the voltage magnitude and angle are unknown and must be solved for; for each Generator Bus, the voltage angle must be solved for; there are no variables that must be solved for at the Slack Bus. In a system with N buses and R generators, there are then 2(N − 1) − (R − 1) unknowns. For example, a system with N = 4 buses and R = 2 generators has 2(4 − 1) − (2 − 1) = 5 unknowns: two each for the two Load Buses and one (the angle) for the Generator Bus that is not the Slack Bus.

In order to solve for the 2(N − 1) − (R − 1) unknowns, there must be 2(N − 1) − (R − 1) equations that do not introduce any new unknown variables. The possible equations to use are power balance equations, which can be written for real and reactive power for each bus. The real power balance equation is:

$$0 = -P_i + \sum_{k=1}^{N} |V_i||V_k|\left(G_{ik}\cos\theta_{ik} + B_{ik}\sin\theta_{ik}\right)$$

where Pi is the net real power injected at bus i, Gik is the real part of the element in the bus admittance matrix YBUS corresponding to the ith row and kth column, Bik is the imaginary part of the element in YBUS corresponding to the ith row and kth column, and θik is the difference in voltage angle between the ith and kth buses (θik = θi − θk). The reactive power balance equation is:

$$0 = -Q_i + \sum_{k=1}^{N} |V_i||V_k|\left(G_{ik}\sin\theta_{ik} - B_{ik}\cos\theta_{ik}\right)$$

where Qi is the net reactive power injected at bus i.

Equations included are the real and reactive power balance equations for each Load Bus and the real power balance equation for each Generator Bus. Only the real power balance equation is written for a Generator Bus because the net reactive power injected is not assumed to be known, and therefore including the reactive power balance equation would result in an additional unknown variable. For similar reasons, there are no equations written for the Slack Bus.
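To make the balance equations concrete, the minimal Python sketch below evaluates Pi and Qi for a hypothetical three-bus network; the admittance matrix, voltage magnitudes, and angles are invented for illustration and are not taken from any particular system.

    import numpy as np

    # Hypothetical 3-bus network: each pair of buses is joined by a line with
    # series admittance 5 - j15 p.u. (values invented for illustration).
    Ybus = np.array([
        [10 - 30j, -5 + 15j, -5 + 15j],
        [-5 + 15j, 10 - 30j, -5 + 15j],
        [-5 + 15j, -5 + 15j, 10 - 30j],
    ])
    G, B = Ybus.real, Ybus.imag

    V     = np.array([1.00, 0.98, 1.01])    # assumed voltage magnitudes (p.u.)
    theta = np.array([0.00, -0.02, 0.01])   # assumed voltage angles (rad)

    N = len(V)
    P = np.zeros(N)
    Q = np.zeros(N)
    for i in range(N):
        for k in range(N):
            t_ik = theta[i] - theta[k]
            # Terms of the real and reactive power balance equations above.
            P[i] += V[i] * V[k] * (G[i, k] * np.cos(t_ik) + B[i, k] * np.sin(t_ik))
            Q[i] += V[i] * V[k] * (G[i, k] * np.sin(t_ik) - B[i, k] * np.cos(t_ik))

    print("Net injected real power P_i (p.u.):    ", np.round(P, 4))
    print("Net injected reactive power Q_i (p.u.):", np.round(Q, 4))

In the power flow problem these computed injections are the quantities that must match the specified PD, QD, and PG values at each bus.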

Newton–Raphson solution method


There are several different methods of solving the resulting nonlinear system of equations. The most popular is known as the Newton–Raphson method. This method begins with initial guesses of all unknown variables (voltage magnitude and angles at Load Buses and voltage angles at Generator Buses). Next, a Taylor series is written, with the higher-order terms ignored, for each of the power balance equations included in the system of equations. The result is a linear system of equations that can be expressed as:

$$\begin{bmatrix}\Delta\theta \\ \Delta|V|\end{bmatrix} = -J^{-1}\begin{bmatrix}\Delta P \\ \Delta Q\end{bmatrix}$$

where ΔP and ΔQ are called the mismatch equations:

$$\Delta P_i = -P_i + \sum_{k=1}^{N} |V_i||V_k|\left(G_{ik}\cos\theta_{ik} + B_{ik}\sin\theta_{ik}\right)$$

$$\Delta Q_i = -Q_i + \sum_{k=1}^{N} |V_i||V_k|\left(G_{ik}\sin\theta_{ik} - B_{ik}\cos\theta_{ik}\right)$$

and J is a matrix of partial derivatives known as a Jacobian:

$$J = \begin{bmatrix}\dfrac{\partial \Delta P}{\partial\theta} & \dfrac{\partial \Delta P}{\partial|V|}\\[6pt]\dfrac{\partial \Delta Q}{\partial\theta} & \dfrac{\partial \Delta Q}{\partial|V|}\end{bmatrix}$$

The linearized system of equations is solved to determine the next guess (m + 1) of voltage magnitude and angles based on:

$$\theta^{m+1} = \theta^{m} + \Delta\theta, \qquad |V|^{m+1} = |V|^{m} + \Delta|V|$$

The process continues until a stopping condition is met. A common stopping condition is to terminate if the norm of the mismatch equations is below a specified tolerance.

A rough outline of the solution of the power flow problem is:

1. Make an initial guess of all unknown voltage magnitudes and angles. It is common to use a "flat start" in which all voltage angles are set to zero and all voltage magnitudes are set to 1.0 p.u.
2. Solve the power balance equations using the most recent voltage angle and magnitude values.
3. Linearize the system around the most recent voltage angle and magnitude values.
4. Solve for the change in voltage angle and magnitude.
5. Update the voltage magnitude and angles.
6. Check the stopping conditions; if met, terminate, else go to step 2.
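The loop above can be illustrated with a minimal Python sketch for a hypothetical two-bus system: bus 1 is the Slack Bus (|V| = 1.0, angle 0) and bus 2 is a Load Bus. The line impedance and load values are invented for illustration, and a finite-difference Jacobian is used in place of the analytic partial derivatives to keep the sketch short; it is not a full implementation of the method.

    import numpy as np

    # Hypothetical line between bus 1 (slack) and bus 2 (PQ): z = 0.02 + j0.06 p.u.
    y_line = 1.0 / complex(0.02, 0.06)
    Ybus = np.array([[ y_line, -y_line],
                     [-y_line,  y_line]])
    G, B = Ybus.real, Ybus.imag

    P2_spec, Q2_spec = -0.8, -0.4   # specified injections at the load bus (p.u.)
    V1, theta1 = 1.0, 0.0           # slack-bus voltage magnitude and angle

    def mismatch(x):
        """Mismatches [dP2, dQ2] for the unknowns x = [theta2, |V2|]."""
        theta2, V2 = x
        t21 = theta2 - theta1
        P2 = V2 * V1 * (G[1, 0] * np.cos(t21) + B[1, 0] * np.sin(t21)) + V2**2 * G[1, 1]
        Q2 = V2 * V1 * (G[1, 0] * np.sin(t21) - B[1, 0] * np.cos(t21)) - V2**2 * B[1, 1]
        return np.array([P2_spec - P2, Q2_spec - Q2])

    def numerical_jacobian(f, x, h=1e-6):
        """Forward-difference approximation of the Jacobian of f at x."""
        f0 = f(x)
        J = np.zeros((len(f0), len(x)))
        for k in range(len(x)):
            xk = x.copy()
            xk[k] += h
            J[:, k] = (f(xk) - f0) / h
        return J

    x = np.array([0.0, 1.0])        # flat start: theta2 = 0, |V2| = 1.0 p.u.
    for it in range(20):
        dF = mismatch(x)
        if np.linalg.norm(dF) < 1e-8:       # stop when the mismatch norm is small
            break
        J = numerical_jacobian(mismatch, x)
        x = x - np.linalg.solve(J, dF)      # linearize, solve, update

    print(f"theta2 = {x[0]:.5f} rad, |V2| = {x[1]:.5f} p.u. ({it} iterations)")

A production power flow program would instead assemble the analytic Jacobian from the mismatch equations and handle many buses, but the linearize-solve-update structure is the same.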

Power flow methods


- Newton–Raphson method
- Fast-decoupled load-flow method
- Gauss–Seidel method

References

1. J. Grainger and W. Stevenson, Power System Analysis, McGraw-Hill, New York, 1994, ISBN 0-07-061293-5.

Gauss–Seidel method
From Wikipedia, the free encyclopedia

In numerical linear algebra, the Gauss–Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a linear system of equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel, and is similar to the Jacobi method. Though it can be applied to any matrix with non-zero elements on the diagonals, convergence is only guaranteed if the matrix is either diagonally dominant, or symmetric and positive definite.

Contents

1 Description
  1.1 Discussion
2 Convergence
3 Algorithm
4 Examples
  4.1 An example for the matrix version
  4.2 Another example for the matrix version
  4.3 An example for the equation version
5 See also
6 External links

Description

Given a square system of n linear equations with unknown x:

$$A\mathbf{x} = \mathbf{b}$$

where:

$$A=\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} b_{1} \\ b_{2} \\ \vdots \\ b_{n} \end{bmatrix}$$

Then A can be decomposed into a lower triangular component L*, and a strictly upper triangular component U:

$$A = L_* + U$$

The system of linear equations may be rewritten as:

$$L_* \mathbf{x} = \mathbf{b} - U\mathbf{x}$$

The Gauss–Seidel method is an iterative technique that solves the left-hand side of this expression for x, using the previous value of x on the right-hand side. Analytically, this may be written as:

$$\mathbf{x}^{(k+1)} = L_*^{-1}\left(\mathbf{b} - U\mathbf{x}^{(k)}\right)$$

However, by taking advantage of the triangular form of L*, the elements of x(k+1) can be computed sequentially using forward substitution:

$$x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j<i} a_{ij}\,x_j^{(k+1)} - \sum_{j>i} a_{ij}\,x_j^{(k)}\right), \qquad i = 1, 2, \ldots, n.$$

Note that the sum inside this computation of xi(k+1) requires each element in x(k) except xi(k) itself. The procedure is generally continued until the changes made by an iteration are below some tolerance.
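As a small illustration of the matrix form above, the following Python sketch applies x(k+1) = L*^(-1)(b − U x(k)) to an invented, strictly diagonally dominant 3×3 system; the matrix and right-hand side are assumptions chosen only so the iteration converges quickly.

    import numpy as np

    A = np.array([[ 4.0, -1.0,  0.0],
                  [-1.0,  4.0, -1.0],
                  [ 0.0, -1.0,  4.0]])
    b = np.array([2.0, 4.0, 10.0])

    L_star = np.tril(A)       # lower triangular part of A, including the diagonal
    U      = np.triu(A, k=1)  # strictly upper triangular part of A

    x = np.zeros(3)           # initial guess x(0)
    for _ in range(25):
        # Solve L* x(k+1) = b - U x(k); np.linalg.solve is used for brevity,
        # although forward substitution would exploit the triangular structure.
        x = np.linalg.solve(L_star, b - U @ x)

    print(x)                  # agrees with np.linalg.solve(A, b) to many digits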

Discussion

The element-wise formula for the Gauss–Seidel method is extremely similar to that of the Jacobi method. The computation of xi(k+1) uses only the elements of x(k+1) that have already been computed, and only the elements of x(k) that have yet to be advanced to iteration k+1. This means that, unlike the Jacobi method, only one storage vector is required, as elements can be overwritten as they are computed, which can be advantageous for very large problems. However, unlike the Jacobi method, the computations for each element cannot be done in parallel. Furthermore, the values at each iteration are dependent on the order of the original equations.

Convergence

The convergence properties of the Gauss–Seidel method are dependent on the matrix A. Namely, the procedure is known to converge if either:

- A is symmetric positive-definite, or
- A is strictly or irreducibly diagonally dominant.

The Gauss–Seidel method sometimes converges even if these conditions are not satisfied.
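Both sufficient conditions are easy to test numerically. The short Python sketch below checks strict diagonal dominance and symmetric positive definiteness for an invented matrix; the matrix and tolerance are assumptions for illustration only.

    import numpy as np

    def is_strictly_diagonally_dominant(A):
        """Row-wise strict dominance: |a_ii| > sum over j != i of |a_ij|."""
        diag = np.abs(np.diag(A))
        off_diag_sums = np.sum(np.abs(A), axis=1) - diag
        return bool(np.all(diag > off_diag_sums))

    def is_symmetric_positive_definite(A, tol=1e-12):
        """Symmetric with all eigenvalues strictly positive."""
        if not np.allclose(A, A.T, atol=tol):
            return False
        return bool(np.all(np.linalg.eigvalsh(A) > tol))

    A = np.array([[ 3.0, -1.0,  1.0],
                  [-1.0,  4.0,  0.5],
                  [ 1.0,  0.5,  5.0]])
    print(is_strictly_diagonally_dominant(A))   # True: 3 > 2, 4 > 1.5, 5 > 1.5
    print(is_symmetric_positive_definite(A))    # True for this particular matrix

Passing either test guarantees convergence; failing both does not rule it out, as noted above.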

Algorithm

Inputs: A, b
Output: φ

Choose an initial guess φ(0) to the solution
repeat until convergence
    for i from 1 until n do
        σ ← 0
        for j from 1 until i − 1 do
            σ ← σ + a_ij φ_j
        end (j-loop)
        for j from i + 1 until n do
            σ ← σ + a_ij φ_j
        end (j-loop)
        φ_i ← (b_i − σ) / a_ii
    end (i-loop)
    check if convergence is reached
end (repeat)

Gauss–Seidel is the same as SOR (successive over-relaxation) with ω = 1.
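The pseudocode translates directly into a short program. The sketch below is a minimal Python rendering of the element-wise update, applied to the strictly diagonally dominant 4×4 system that appears in the equation-version example later in the article; the tolerance and iteration limit are arbitrary choices.

    import numpy as np

    def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=1000):
        """Solve A x = b with the element-wise Gauss-Seidel update."""
        n = len(b)
        x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                # j < i uses already-updated entries, j > i uses the old ones;
                # x is overwritten in place, so one storage vector suffices.
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (b[i] - s) / A[i, i]
            if np.linalg.norm(x - x_old, ord=np.inf) < tol:
                break
        return x

    A = np.array([[10.0, -1.0,  2.0,  0.0],
                  [-1.0, 11.0, -1.0,  3.0],
                  [ 2.0, -1.0, 10.0, -1.0],
                  [ 0.0,  3.0, -1.0,  8.0]])
    b = np.array([6.0, 25.0, -11.0, 15.0])
    print(gauss_seidel(A, b))   # approaches (1, 2, -1, 1)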

Examples

An example for the matrix version

A linear system, shown as A x = b, is given by a matrix A and a right-hand side vector b. We want to use the iteration

$$\mathbf{x}^{(k+1)} = T\mathbf{x}^{(k)} + \mathbf{c}$$

where T = −L_*^{-1} U and c = L_*^{-1} b. To do so, we must decompose A into the sum of a lower triangular component L* and a strictly upper triangular component U. From the inverse of L* we can then find T and c, and we can use them to obtain the vectors x(k) iteratively.

First of all, we have to choose x(0): we can only guess. The better the guess, the quicker the algorithm will perform. Starting from the chosen x(0), we can then calculate the successive iterates x(1), x(2), .... As expected, the algorithm converges to the exact solution. In fact, the matrix A in this example is diagonally dominant (but not positive definite).

Another example for the matrix version

Another linear system, also shown as A x = b, is given by a different matrix A and right-hand side b. We again want to use the iteration

$$\mathbf{x}^{(k+1)} = T\mathbf{x}^{(k)} + \mathbf{c}$$

where T = −L_*^{-1} U and c = L_*^{-1} b. As before, we must decompose A into the sum of a lower triangular component L* and a strictly upper triangular component U, compute the inverse of L*, and from it find T and c, which we can use to obtain the vectors x(k) iteratively, starting from an initial guess x(0).

If we test for convergence, we will find that the algorithm diverges. In fact, this matrix A is neither diagonally dominant nor positive definite. Convergence to the exact solution is therefore not guaranteed and, in this case, does not occur.

An example for the equation version

Suppose we are given k equations in k unknowns, together with a starting point x(0). From the first equation, solve for x1 in terms of the remaining unknowns; for each subsequent equation, substitute the values of the unknowns computed so far.

To make it clear, let's consider an example:

$$\begin{aligned} 10x_1 - x_2 + 2x_3 &= 6 \\ -x_1 + 11x_2 - x_3 + 3x_4 &= 25 \\ 2x_1 - x_2 + 10x_3 - x_4 &= -11 \\ 3x_2 - x_3 + 8x_4 &= 15 \end{aligned}$$

Solving for x1, x2, x3 and x4 gives:

$$\begin{aligned} x_1 &= (6 + x_2 - 2x_3)/10 \\ x_2 &= (25 + x_1 + x_3 - 3x_4)/11 \\ x_3 &= (-11 - 2x_1 + x_2 + x_4)/10 \\ x_4 &= (15 - 3x_2 + x_3)/8 \end{aligned}$$

Suppose we choose (0, 0, 0, 0) as the initial approximation. Then the first approximate solution is given by

$$x_1 = 0.6, \quad x_2 = 2.32727, \quad x_3 = -0.987273, \quad x_4 = 0.878864.$$

Using the approximations obtained, the iterative procedure is repeated until the desired accuracy has been reached. The following are the approximated solutions after four iterations.

x1        x2        x3         x4
0.6       2.32727   −0.987273  0.878864
1.03018   2.03694   −1.01446   0.984341
1.00659   2.00356   −1.00253   0.998351
1.00086   2.0003    −1.00031   0.99985

The exact solution of the system is (1, 2, −1, 1).

See also

- Jacobi method
- Successive over-relaxation
- Iterative method § Linear systems
- Gaussian belief propagation


This article incorporates text from the article Gauss-Seidel_method on CFD-Wiki that is under the GFDL license.

External links

- Gauss Seidel from www.math-linux.com
- Module for Gauss Seidel Iteration
- Gauss Seidel from Holistic Numerical Methods Institute
- Gauss Seidel Iteration from www.geocities.com
- The Gauss-Seidel Method
- Bickson
- Matlab code
- C code example

