
A ROBUST AND EFFICIENT MULTIGRID METHOD

P. Wesseling Delft University of Technology Department of Mathematics and Computer Science P.O. Box 356 2600 AJ Delft, The Netherlands

1. Introduction.

Both in theory and practice, multigrid methods solve elliptic boundary value problems in O(N) operations, with N the number of points in the computational grid. This compares favorably with other methods. Only for separable equations on rectangular regions are there methods of similar efficiency; see Schumann (1980) for a recent survey of work in this area. Other methods have a computational complexity of O(N^a) with a > 1. Of these methods, those of the preconditioned Lanczos type deserve special mention, because they have a = 1.25 (as has been proved by Gustafsson (1978) for the Poisson equation, but a = 1.25 seems to hold more generally), which is low, whereas they have broad applicability. The original version (ICCG, Meijerink and van der Vorst (1977)) is restricted to self-adjoint equations, but extension to the general case is possible and has been given by, among others, Wesseling and Sonneveld (1980), who also describe numerical experiments comparing a multigrid and a preconditioned Lanczos method, and by van der Vorst (1981). Such comparisons are also described by Kettler and Meijerink (1981) and Kettler (1982), who furthermore obtain spectacular results by combining a multigrid and a conjugate gradient method. For large-scale calculations in engineering and physics, multigrid methods potentially far surpass other known methods as far as efficiency is concerned, with the possible exception of preconditioned Lanczos methods. However, routine application of multigrid methods to large-scale problems is at present still hampered by the fact that the promise of computational efficiency is not always fulfilled in the hands of a non-expert, and that multigrid methods are more complicated than classical iterative methods. There are many ways to implement the basic ideas underlying multigrid methods, and the way in which this is done may make the efficiency problem-dependent. The aim of this paper is to present the details of a multigrid method, called MGD1 for brevity.
The method is constructed such that it is perceived by the user just as any other solver of linear systems. That is, it operates only on the matrix that is given and does not refer to the underlying differential equation and boundary conditions. This means that powerful ideas concerning adaptive multigrid methods put forward by Brandt (1977, 1979) are deliberately not used. Of course, the


basic multigrid methodology is employed, as typified in various ways by the work of Fedorenko (1962), Bakhvalov (1966), Astrachancev (1971), Brandt (1973), Frederickson (1975), Wachspress (1975), Hackbusch (1978a), Wesseling (1977), and become widely known and appreciated through Brandt (1977). It depends on the user (is he familiar with multigrid methods or not) and on the problem whether an adaptive or a non-adaptive approach, such as presented here, is to be preferred. The method MGD1 is fast for a large class of elliptic boundary value problems, as will be made plausible and demonstrated experimentally. Operation counts and rates of convergence are given, and a comparison of computational efficiency with other methods is made. The main characteristics of the method are the use of incomplete LU-decomposition for smoothing, 7-point prolongation and restriction operators, coarse-grid Galerkin approximation, and the use of a fixed multigrid strategy that will be called the sawtooth cycle. It does not need to be adapted to the problem, and is expected to be useful to the non-specialist. A FORTRAN program will be described in a forthcoming report.

2. A multigrid method.

A multigrid method will be presented for the solution of an elliptic boundary value problem on a rectangle, discretized by finite differences. A computational grid Omega^l and a corresponding set of grid-functions U^l are defined as follows:

Omega^l == {(x_i, y_j): x_i = x_0 + i h_x, y_j = y_0 + j h_y, i,j = 0(1)2^l},
U^l == {u^l: Omega^l -> R}.   (2.1)

The formulation (2.1) allows elimination of physical boundaries where Dirichlet boundary conditions are given. For example, if all boundary conditions are of Dirichlet type and the region is the unit square one may choose: h_x = h_y = h = (2^l + 2)^{-1}, x_0 = y_0 = h. It is not really necessary to have the number of x- and y-gridlines equal. By coordinate stretching one can generate a non-equidistant mesh in the physical plane. The linear algebraic system generated by the difference scheme is denoted by:

A^l u^l = f^l.   (2.2)

The multigrid method makes use of a hierarchy of computational grids Omega^k and corresponding sets of grid-functions U^k, k = l(-1)1, defined by (2.1) with l replaced by k. On the coarser grids (i.e. grids with larger step-size, hence smaller k) equation (2.2) is approximated by:

A^k u^k = f^k,   k = l-1(-1)1.   (2.3)


Furthermore, let there be given a restriction operator r^k and a prolongation operator p^k:

r^k: U^k -> U^{k-1},   p^k: U^{k-1} -> U^k.   (2.4)

A^k, f^k, r^k, p^k will be specified later. For the smoothing process use is made of incomplete LU- (ILU-) decomposition. Temporarily suppressing the superscript k, we assume that we have a lower and an upper triangular matrix L and U respectively, such that

LU = A + C.   (2.5)

L, U and C will be specified later. Consider the following iterative process for solving (2.2) or (2.3):

x := x + (LU)^{-1}(f - Ax),  or  x := (LU)^{-1}(f + Cx).   (2.6)

The multigrid method presented in the following quasi-Algol program can be regarded as a method to accelerate the iterative process (2.6):
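The two forms of (2.6) are algebraically equivalent whenever LU = A + C: substituting C = LU - A gives (LU)^{-1}(f + Cx) = (LU)^{-1}(f - Ax + LUx) = x + (LU)^{-1}(f - Ax). A minimal numerical check of this identity, with arbitrary random matrices standing in for an actual ILU-decomposition (the factors here are illustrative stand-ins, not the decomposition of section 3):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# Arbitrary nonsingular triangular factors and an arbitrary A (stand-ins,
# not an actual ILU-decomposition); C is then forced by C = LU - A.
L = np.tril(rng.standard_normal((n, n))) + 5.0 * np.eye(n)
U = np.triu(rng.standard_normal((n, n))) + 5.0 * np.eye(n)
A = rng.standard_normal((n, n))
C = L @ U - A                                   # error matrix of (2.5)
f = rng.standard_normal(n)
x = rng.standard_normal(n)

form1 = x + np.linalg.solve(L @ U, f - A @ x)   # x := x + (LU)^{-1}(f - Ax)
form2 = np.linalg.solve(L @ U, f + C @ x)       # x := (LU)^{-1}(f + Cx)
print(np.max(np.abs(form1 - form2)))            # agrees to machine-precision level
```

The second form is the one the program below uses, because C is much sparser than A.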

multigrid method MGD1:
begin f^{l-1} := r^l C^l (u^l - ub^l);
  for k := l-1(-1)2 do f^{k-1} := r^k f^k;
  u^1 := (L^1 U^1)^{-1} f^1;
  for k := 2(1)l-1 do u^k := (L^k U^k)^{-1} (C^k p^k u^{k-1} + f^k);
  ub^l := u^l + p^l u^{l-1};
  u^l := (L^l U^l)^{-1} (C^l ub^l + f^l)
end of one iteration with MGD1;

(Here ub^l denotes the iterand before the final smoothing step.)
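For readers who prefer executable pseudocode, the control structure above can be sketched in Python. The sketch below is a simplified analogue, not the MGD1 program itself: it works in one dimension rather than two, uses a Gauss-Seidel sweep in place of the ILU smoother, and builds the coarse operators by the Galerkin rule of section 5. What it shares with MGD1 is the sawtooth strategy: restrict the residual down to the coarsest grid, solve there, then perform exactly one smoothing step after each coarse-grid correction.

```python
import numpy as np

def prolongation_1d(n_coarse):
    """Linear interpolation matrix p from n_coarse to 2*n_coarse + 1 interior points."""
    P = np.zeros((2 * n_coarse + 1, n_coarse))
    for j in range(n_coarse):
        P[2 * j, j] = 0.5       # left fine neighbour of coarse point j
        P[2 * j + 1, j] = 1.0   # fine point coinciding with coarse point j
        P[2 * j + 2, j] = 0.5   # right fine neighbour
    return P

def gauss_seidel(A, u, f):
    """One forward Gauss-Seidel sweep (a stand-in for the ILU smoother of MGD1)."""
    for i in range(len(u)):
        u[i] = (f[i] - A[i, :i] @ u[:i] - A[i, i + 1:] @ u[i + 1:]) / A[i, i]
    return u

def sawtooth(As, Ps, u, f):
    """One sawtooth cycle: restrict the residual down to the coarsest grid,
    solve there, then one smoothing step after each coarse-grid correction."""
    l = max(As)                                       # finest level
    fs = {l: f, l - 1: 0.5 * Ps[l].T @ (f - As[l] @ u)}
    for k in range(l - 1, 1, -1):
        fs[k - 1] = 0.5 * Ps[k].T @ fs[k]             # f^{k-1} := r^k f^k
    v = np.linalg.solve(As[1], fs[1])                 # exact solve on coarsest grid
    for k in range(2, l):
        v = gauss_seidel(As[k], Ps[k] @ v, fs[k])     # correct, then smooth once
    u = u + Ps[l] @ v                                 # correction on the finest grid
    return gauss_seidel(As[l], u, f)                  # final smoothing step

# Model problem: -u'' = f on (0,1), zero Dirichlet data, n = 2^l - 1 points.
l = 5
n = 2 ** l - 1
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2
As, Ps = {l: A}, {}
for k in range(l, 1, -1):                             # Galerkin hierarchy (section 5)
    Ps[k] = prolongation_1d(2 ** (k - 1) - 1)
    As[k - 1] = 0.5 * Ps[k].T @ As[k] @ Ps[k]         # A^{k-1} = r A^k p, with r = p*/2

f = np.ones(n)
u = np.zeros(n)
for _ in range(10):
    u = sawtooth(As, Ps, u, f)
print(np.linalg.norm(f - A @ u) / np.linalg.norm(f))  # residual drops by orders of magnitude
```

Note that the cycle is not recursive: each coarse grid is visited once per iteration, exactly as in the quasi-Algol program.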

Using (2.5), one may verify that C^l(u^l - ub^l) = f^l - A^l u^l is the residue associated with the current iterand u^l. As we shall see, C^l has only two non-zero diagonals, hence C^l(u^l - ub^l) is a fast way to compute the residue f^l - A^l u^l. When starting, u^l and ub^l are not available. A cheap way to get started is to use the initial estimate u^l = 0, and to replace C^l(u^l - ub^l) by f^l - A^l u^l = f^l. In this case the computation effectively starts on the coarsest grid, which is advantageous, because no effort is wasted in correcting a perhaps unfortunate first guess. If one has a good initial estimate available, one may generate u^l and ub^l as follows:

ub^l := initial estimate;  u^l := (L^l U^l)^{-1}(C^l ub^l + f^l);

In this case the computation starts with a smoothing step on the finest grid, which costs little more than a straightforward residue calculation.


The method is not recursive, and is easily implemented in FORTRAN. Each coarse grid is visited only once, and one smoothing step is performed after each coarse grid correction. This multigrid strategy may be depicted graphically as follows (for 4 grids):

k
4            .             .
3          .             .
2        .             .
1      .             .

Each dot represents a smoothing operation. This diagram suggests "sawtooth cycle" as an appropriate name for this strategy. This is probably the simplest multigrid strategy that one can think of; cf. Brandt (1977), Hackbusch (1981), Hemker (1981) for a description of many possible multigrid strategies. A measure of the computational cost of one execution of multigrid method MGD1 may be obtained by counting the arithmetic operations that are visible in the mathematical formulae. The following table gives the operation count per grid-point of Omega^k for various parts of the algorithm.

C^k: 3     r^k: 2     (L^k U^k)^{-1}: 13     p^k: 1.5

The results for (L^k U^k)^{-1}, r^k and p^k follow from subsequent sections. With n_k = (2^k+1)^2 the number of grid-points of Omega^k, we obtain the following total count:

6 n_l + 2 sum_{k=2}^{l-1} n_k + 13 n_1 + 19.5 sum_{k=2}^{l-1} n_k + 2.5 n_l + 17 n_l ~ 30.4 n_l,

or about 30 operations per grid-point of Omega^l. The decisions made in the design of MGD1 are based on comparative experiments described by Wesseling (1980) and Mol (1981).

3. Incomplete LU-decomposition.

ILU-decomposition was used by Meijerink and van der Vorst (1977) as an effective preconditioning for conjugate gradient methods, and was introduced by Wesseling and Sonneveld (1980) as a smoothing process for multigrid methods. For experiments with and analysis of various ILU smoothing processes, see Wesseling and Sonneveld (1980), Hemker (1980a), Mol (1981), Kettler and Meijerink (1981), and the extensive treatment by Kettler (1982). The general second order elliptic differential operator can be approximated by


central or one-sided finite differences using the 7-point difference molecule depicted here:

      f  g
   c  d  e
      a  b

If no mixed derivative is present, atoms b and f are superfluous, and we have the familiar 5-point molecule. Let the points of the computational grid Omega^k be ordered as follows: (0,0), (1,0), (2,0), ..., (0,1), (1,1), (2,1), ..., (2^k, 2^k).

Then the finite difference matrix A^k has 7 non-zero diagonals, labeled left-to-right as a, b, ..., g, each of which corresponds with the atom with the same label. By non-zero we mean: possibly non-zero. The ILU-decomposition to be employed here can be described as follows. L is prescribed to have non-zero diagonals at the same locations as a, b, c, d, denoted alpha, beta, gamma and the main diagonal, respectively; the main diagonal of L is specified to be unity. U has non-zero diagonals at the same locations as d, e, f, g, denoted delta, eps, zeta, eta, respectively. The rest of L and U is zero. L and U can be conveniently computed by Crout-like formulae, as follows, on an m*n grid. Subscript k is the row-number.

alpha_k = a_k / delta_{k-m},
beta_k  = (b_k - alpha_k eps_{k-m}) / delta_{k-m+1},
gamma_k = (c_k - alpha_k zeta_{k-m}) / delta_{k-1},
delta_k = d_k - gamma_k eps_{k-1} - beta_k zeta_{k-m+1} - alpha_k eta_{k-m},   (3.1)
eps_k   = e_k - beta_k eta_{k-m+1},
zeta_k  = f_k - gamma_k eta_{k-1},
eta_k   = g_k.

Quantities that are not defined are to be replaced by 0. The error matrix C = LU - A has only two non-zero diagonals mu and chi, located next to and inside b and f, respectively, and given by:

mu_k = beta_k eps_{k-m+1},   chi_k = gamma_k zeta_{k-1}.   (3.2)

The solution of LUu = q is obtained by back-substitution:

u_i := q_i - gamma_i u_{i-1} - beta_i u_{i-m+1} - alpha_i u_{i-m},  i = 1(1)mn;
u_i := (u_i - eps_i u_{i+1} - zeta_i u_{i+m-1} - eta_i u_{i+m}) / delta_i,  i = mn(-1)1.   (3.3)

It is easily verified that the construction of L and U, the construction of C, and the solution of LUu = q take 17, 2 and 13 arithmetic operations per grid-point, respectively. The computation of Cu + f takes 4 operations, because C has only 2 non-zero diagonals, hence a complete smoothing step u := (LU)^{-1}(Cu + f) takes 17 operations. L and U can be stored in the space for A, because A is not needed. If A has only 5 non-zero diagonals, this requires an extra storage of 2 reals per grid-point. Storage of C also requires 2 reals per grid-point, but C can also be computed instead of being stored, in which case the cost of a smoothing step increases from 17 to 19. The
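The recursion (3.1) and the claim about C are easy to check numerically. The sketch below (in Python rather than FORTRAN, and using dense matrices purely for clarity) builds the factors for the 5-point Laplacian on a small m x n grid, following the convention that undefined quantities are zero, and verifies that the error matrix C = LU - A indeed has only the two non-zero diagonals mu and chi, at offsets -(m-2) and +(m-2):

```python
import numpy as np

def diagonal(A, off):
    """The diagonal of A at offset `off`, as a length-N array indexed by row."""
    N = A.shape[0]
    d = np.zeros(N)
    for i in range(N):
        if 0 <= i + off < N:
            d[i] = A[i, i + off]
    return d

def ilu_7point(A, m):
    """Crout-like ILU factors per (3.1); diagonals a,b,c,d,e,f,g sit at
    offsets -m, -(m-1), -1, 0, +1, +(m-1), +m. Undefined quantities are 0."""
    N = A.shape[0]
    a, b, c, d, e, f, g = (diagonal(A, o) for o in (-m, 1 - m, -1, 0, 1, m - 1, m))
    al, be, ga = np.zeros(N), np.zeros(N), np.zeros(N)
    de, ep, ze, et = np.zeros(N), np.zeros(N), np.zeros(N), np.zeros(N)
    val = lambda arr, i: arr[i] if i >= 0 else 0.0
    for k in range(N):
        al[k] = a[k] / de[k - m] if k >= m else 0.0
        be[k] = (b[k] - al[k] * val(ep, k - m)) / de[k - m + 1] if k >= m - 1 else 0.0
        ga[k] = (c[k] - al[k] * val(ze, k - m)) / de[k - 1] if k >= 1 else 0.0
        de[k] = d[k] - ga[k] * val(ep, k - 1) - be[k] * val(ze, k - m + 1) - al[k] * val(et, k - m)
        ep[k] = e[k] - be[k] * val(et, k - m + 1)
        ze[k] = f[k] - ga[k] * val(et, k - 1)
        et[k] = g[k]
    L, U = np.eye(N), np.zeros((N, N))
    for k in range(N):
        for off, v in ((-m, al[k]), (1 - m, be[k]), (-1, ga[k])):
            if k + off >= 0:
                L[k, k + off] = v
        for off, v in ((0, de[k]), (1, ep[k]), (m - 1, ze[k]), (m, et[k])):
            if k + off < N:
                U[k, k + off] = v
    return L, U

# 5-point Laplacian on an m x n grid, row-wise ordering as in section 3.
m, n = 6, 5
N = m * n
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 4.0
    if i % m != 0:       A[i, i - 1] = -1.0
    if (i + 1) % m != 0: A[i, i + 1] = -1.0
    if i >= m:           A[i, i - m] = -1.0
    if i + m < N:        A[i, i + m] = -1.0

L, U = ilu_7point(A, m)
C = L @ U - A
offsets = sorted({j - i for i in range(N) for j in range(N) if abs(C[i, j]) > 1e-12})
print(offsets)   # [-4, 4]: the two diagonals at offsets -(m-2) and m-2
```

Since A here is a symmetric M-matrix, existence of the decomposition is guaranteed by the Meijerink-van der Vorst result quoted below.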


memory requirement for the quantities defined on the coarse grids is about (1/4 + 1/16 + ...) ~ 1/3 of the memory requirement on the finest grid. We will not dwell upon the existence and stability of the ILU-decomposition just described. Meijerink and van der Vorst (1977) have shown existence if A is a symmetric M-matrix, i.e. a_ij = a_ji, a_ij <= 0 for i /= j, and A^{-1} > 0. They also note that one has existence under much more general circumstances. We have found experimentally that the ILU-decomposition exists and provides an efficient smoothing process if, in the non-self-adjoint case, a sufficient amount of artificial viscosity is introduced on the finest grid. This is also necessary for all other smoothing processes that we know of. In smoothing (and two-level) analysis it must be assumed, in order to make Fourier methods applicable, that the values of the elements of L and U do not vary along diagonals; if (3.1) is used this is usually not the case near boundaries, and in certain strongly anisotropic diffusion problems the influence of the boundaries on the ILU-decomposition extends inwards over many meshes. In such cases smoothing and two-level analysis are not realistic for the present method MGD1.

4. Prolongation and restriction.

Let the value of the grid-function u^k in the point (s 2^{-k}, t 2^{-k}) be denoted by u^k_{st}. Prolongation and restriction are defined by:

(p^k u^{k-1})_{2s,2t}     = u^{k-1}_{st},
(p^k u^{k-1})_{2s,2t+1}   = (1/2)(u^{k-1}_{st} + u^{k-1}_{s,t+1}),
(p^k u^{k-1})_{2s+1,2t}   = (1/2)(u^{k-1}_{st} + u^{k-1}_{s+1,t}),        (4.1)
(p^k u^{k-1})_{2s+1,2t+1} = (1/2)(u^{k-1}_{s+1,t} + u^{k-1}_{s,t+1}),

(r^k u^k)_{st} = u^k_{2s,2t} + (1/2)(u^k_{2s+1,2t} + u^k_{2s,2t+1} + u^k_{2s-1,2t} + u^k_{2s,2t-1} + u^k_{2s+1,2t-1} + u^k_{2s-1,2t+1}).   (4.2)

The following diagram may clarify the structure of p^k and r^k (weights at the 7 points of the molecule):

1/2  1/2   .
1/2   1   1/2
 .   1/2  1/2

This p^k provides linear interpolation, while having a sparser matrix representation than all other linear interpolation operators, and r^k is its adjoint, in the sense that

(p^k u^{k-1}, v^k)_k = (u^{k-1}, r^k v^k)_{k-1},  for all v^k in U^k,   (4.3)

with (u^k, v^k)_k == sum_{i,j} u^k_{ij} v^k_{ij}. Because r^k is a weighted average of 7 points, we call this 7-point prolongation and restriction.
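Relation (4.3) is easy to verify numerically. The sketch below implements (4.1) and (4.2) in Python (with the diagonal interpolation term oriented to match the diagonal terms of the restriction, as the adjointness relation requires, and with off-grid values treated as zero) and checks the adjointness for arbitrary grid functions:

```python
import numpy as np

def prolong(u):
    """The 7-point prolongation p of (4.1): coarse (nc+1)^2 -> fine (2nc+1)^2."""
    nc = u.shape[0] - 1
    v = np.zeros((2 * nc + 1, 2 * nc + 1))
    v[::2, ::2] = u                                   # coarse points carry over
    v[::2, 1::2] = 0.5 * (u[:, :-1] + u[:, 1:])       # horizontal averages
    v[1::2, ::2] = 0.5 * (u[:-1, :] + u[1:, :])       # vertical averages
    v[1::2, 1::2] = 0.5 * (u[1:, :-1] + u[:-1, 1:])   # diagonal averages
    return v

def restrict(v):
    """The 7-point restriction r of (4.2); off-grid values count as zero."""
    nc = (v.shape[0] - 1) // 2
    w = np.zeros((v.shape[0] + 2, v.shape[0] + 2))    # zero padding at the edges
    w[1:-1, 1:-1] = v
    u = np.zeros((nc + 1, nc + 1))
    for s in range(nc + 1):
        for t in range(nc + 1):
            i, j = 2 * s + 1, 2 * t + 1
            u[s, t] = w[i, j] + 0.5 * (w[i + 1, j] + w[i, j + 1] + w[i - 1, j]
                                       + w[i, j - 1] + w[i + 1, j - 1] + w[i - 1, j + 1])
    return u

rng = np.random.default_rng(0)
uc = rng.standard_normal((5, 5))        # a coarse grid function (nc = 4)
vf = rng.standard_normal((9, 9))        # a fine grid function
lhs = np.sum(prolong(uc) * vf)          # (p u, v)_k
rhs = np.sum(uc * restrict(vf))         # (u, r v)_{k-1}
print(abs(lhs - rhs))                   # zero up to rounding: r is the adjoint of p
```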

5. Galerkin coarse grid approximation.

The coarse grid operators A^k are defined as follows:

A^{k-1} = r^k A^k p^k,   k = l(-1)2.   (5.1)

We call this a Galerkin approximation because (5.1) implies:

(A^k p^k u^{k-1}, p^k v^{k-1})_k = (A^{k-1} u^{k-1}, v^{k-1})_{k-1},  for all u^{k-1}, v^{k-1} in U^{k-1},   (5.2)

if (4.3) holds; hence, we have a case of projection in a lower-dimensional subspace.

A more obvious way to generate A^{k-1} is to use a finite difference method. Under (5.1), if r^k and p^k are as in section 4, standard central equidistant finite difference approximations of the operators d^2/dx^2, d^2/dy^2, d^2/dxdy using the 7-point difference molecule of section 3 are invariant; hence Galerkin and finite difference approximation are identical. When lower derivatives occur, or the coefficients are variable, or the mesh non-equidistant (which is the case on coarser grids if on the finest grid Dirichlet boundaries are eliminated), the two approximations differ. Their mutual relationship closely resembles the relationship between finite difference and finite element approximations. The Galerkin method automatically generates accurate approximations and takes care of special circumstances, such as changing mesh-size or varying coefficients, but the numerical analyst can always achieve the same accuracy with an ably designed finite difference approximation. The transformation of upwind differences by (5.1) is interesting; the difference molecules on the finest and five coarser grids are given below (the two-dimensional layout of the molecules is not legible in this copy; each molecule is listed as its seven coefficients, with its scale factor):

{0, -1, 0, 1, 0, 0, 0};
{-1, -5, 1, 4, -1, 1, 1} (x 2^{-2});
{-5, -15, 5, 8, -5, 7, 5} (x 2^{-3});
{-21, -51, 21, 16, -21, 35, 21} (x 2^{-4});
{-85, -187, 85, 32, -85, 155, 85} (x 2^{-5});
{-341, -715, 341, 64, -341, 651, 341} (x 2^{-6}).

For the derivation of these molecules, the formulae given by Mol (1981) have been used. Apparently, upwind differencing is gradually replaced by central differencing, plus a higher order truncation error containing a mixed third derivative. Diagonal


dominance is lost, but in practice we have never encountered numerical "wiggles" or instability of the ILU-decomposition; note that as the grid gets coarser, the ILU-decomposition becomes more exact. Later, successful experiments with the upwind-discretized convection-diffusion equation will be reported.

One way of programming (5.1) is as follows. It is based on a data structure used earlier by Frederickson (1975). In this section, Greek subscripts are 2-tuples identifying points of the computational grid, e.g. alpha = (alpha_1, alpha_2) indicates the grid-point with indices (alpha_1, alpha_2) (cf. the ordering introduced in section 3). To the atoms of the difference molecule 2-tuples are assigned as follows: a = (0,-1), b = (1,-1), c = (-1,0), d = (0,0), e = (1,0), f = (-1,1), g = (0,1); these are also identified by Greek subscripts. By A^k_{alpha beta} we denote the element of the matrix A^k in row number 1 + alpha_1 + m alpha_2 and column 1 + alpha_1 + m alpha_2 + beta_1 + m beta_2, with m the number of grid-points of Omega^k in the x-direction. For example, beta = (1,-1) corresponds with the b-diagonal of section 3. If alpha is outside Omega^k or beta is outside the molecule, then A^k_{alpha beta} is defined to be zero. With these conventions, matrix-vector multiplication can be formulated as follows:

(A^k u^k)_alpha = sum_beta A^k_{alpha beta} u^k_{alpha+beta},   (5.3)

with range beta in Z x Z. Restriction and prolongation can be represented as follows:

(r^k u^k)_alpha = (1/2) sum_beta w_beta u^k_{2 alpha + beta},   (5.4)

(p^k u^{k-1})_alpha = (1/2) sum_beta w_{alpha - 2 beta} u^{k-1}_beta,   (5.5)

with the weight factors w_beta defined by (cf. (4.1) and (4.2)): if beta is inside the molecule, then w_beta = 1, except w_{0,0} = 2; outside the molecule, w_beta = 0. Eq. (5.5) is not a convenient way to compute p^k u^{k-1}, but it can be used to derive a useful formula for r^k A^k p^k. From (5.3)-(5.5) it follows that for any u^{k-1}:

(r^k A^k p^k u^{k-1})_alpha = (1/2) sum_beta w_beta sum_gamma A^k_{2 alpha + beta, gamma} (1/2) sum_delta w_{2 alpha + beta + gamma - 2 delta} u^{k-1}_delta.   (5.6)

Since the range of beta, gamma and delta may be taken to be all of Z x Z, change of variables is easy. Let delta = alpha + delta', gamma = 2 delta' + gamma' - beta; then (5.6) takes on a form from which we may conclude (omitting primes):

(r^k A^k p^k)_{alpha delta} = (1/4) sum_{beta, gamma} w_beta w_gamma A^k_{2 alpha + beta, 2 delta + gamma - beta}.   (5.7)

It is found that if A^k has a general 7 (or fewer)-point structure, then A^{k-1} has a 7-point structure. An important point is that 2 delta + gamma - beta is only 95 (61) times inside the molecule of A^k if A^k is a 7- (5-)point molecule, for all 7^3 = 343 possible combinations of beta, gamma and delta. This is exploited by putting the alpha-loop inside the delta-, beta- and gamma-loops, and arranging the computation as follows:

A^{k-1} := 0;
for delta in molecule do
  for beta in molecule do
    for gamma in molecule while 2 delta + gamma - beta in molecule do
      begin w := w_beta w_gamma;
        for alpha in Omega^{k-1} while 2 alpha + beta in Omega^k do
          A^{k-1}_{alpha delta} := A^{k-1}_{alpha delta} + w A^k_{2 alpha + beta, 2 delta + gamma - beta}
      end;
A^{k-1} := A^{k-1}/4;

The cost of the inner loop is 2 operations, hence the total cost is 2*95 (2*61) for a 7- (5-)point operator A^k. The division by 4 adds one operation, so that our final conclusion is that the construction of A^{k-1} takes 191 (123) operations per grid-point of Omega^{k-1} for a 7- (5-)point operator A^k. The total work for the construction of A^k, k = l-1(-1)1, in operations per grid-point of Omega^l is about 191/3 = 64 (7-point A), or 123/4 + 191/12 = 47 (5-point A). This has to be done only once, before the multigrid iterations start. It depends on the complexity of A^l whether computation of A^k, k < l, is cheaper with the finite difference method or with the above Galerkin method. But more important is the fact that the Galerkin method enables us to work only with the given matrix A^l and not to refer to the underlying problem (equation and boundary conditions), and that good coarse grid approximations are always obtained automatically.
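The invariance of central differences under (5.1) can be checked directly. The following sketch (an illustration in Python, not the FORTRAN implementation) applies r A p to a coarse-grid delta function, using the 7-point transfers of section 4 and the unscaled 5-point Laplacian stencil; the probe recovers one column of the Galerkin coarse grid operator, and the 5-point stencil is reproduced exactly:

```python
import numpy as np

def prolong(u):
    """The 7-point prolongation p of (4.1)."""
    nc = u.shape[0] - 1
    v = np.zeros((2 * nc + 1, 2 * nc + 1))
    v[::2, ::2] = u
    v[::2, 1::2] = 0.5 * (u[:, :-1] + u[:, 1:])
    v[1::2, ::2] = 0.5 * (u[:-1, :] + u[1:, :])
    v[1::2, 1::2] = 0.5 * (u[1:, :-1] + u[:-1, 1:])
    return v

def restrict(v):
    """The 7-point restriction r of (4.2); off-grid values count as zero."""
    nc = (v.shape[0] - 1) // 2
    w = np.zeros((v.shape[0] + 2, v.shape[0] + 2))
    w[1:-1, 1:-1] = v
    u = np.zeros((nc + 1, nc + 1))
    for s in range(nc + 1):
        for t in range(nc + 1):
            i, j = 2 * s + 1, 2 * t + 1
            u[s, t] = w[i, j] + 0.5 * (w[i + 1, j] + w[i, j + 1] + w[i - 1, j]
                                       + w[i, j - 1] + w[i + 1, j - 1] + w[i - 1, j + 1])
    return u

def laplace_5point(v):
    """Unscaled 5-point stencil (4 at the centre, -1 at the neighbours);
    values outside the grid are treated as zero (Dirichlet elimination)."""
    w = 4.0 * v
    w[1:, :] -= v[:-1, :]; w[:-1, :] -= v[1:, :]
    w[:, 1:] -= v[:, :-1]; w[:, :-1] -= v[:, 1:]
    return w

# Probe one column of A^{k-1} = r A^k p with a coarse-grid delta function.
nc = 8
delta = np.zeros((nc + 1, nc + 1))
delta[4, 4] = 1.0
col = restrict(laplace_5point(prolong(delta)))
print(col[3:6, 3:6])   # centre 4, four neighbours -1, diagonal entries 0
```

For an operator with lower-order terms or variable coefficients the probe would instead produce a genuinely different (generally 7-point) coarse molecule, which is the point of the Galerkin construction.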

6. Numerical experiments.

Standardized test-problems are useful for demonstrating and comparing the applicability and performance of multigrid methods. If the coefficients are constant, smoothing analysis (cf. Brandt (1977)) can help to understand and construct efficient smoothing processes. Two-level analysis (cf. Brandt and Dinar (1979), Foerster et al. (1981), Ries et al. (1981)) realistically predicts the rate of convergence of certain (not all, for example not for the sawtooth cycle described previously) multigrid methods; for a certain method applied to the Poisson equation Braess (1981) has given a rigorous prediction. Such analyses use Fourier methods; see Hemker (1980b) for an introduction to the Fourier analysis of multigrid methods. Where Fourier analysis is not applicable, one can, given sufficient computer time,


compute the spectral norm and spectral radius (i.e. the asymptotic rate of convergence) numerically, or just observe the rate of convergence and use heuristic arguments in order to verify the soundness and efficiency of a multigrid method. In practice, quite often the observed rate of convergence of a good multigrid method is considerably better than the asymptotic rate, which is not reached because already after a few iterations discretization or even machine accuracy is obtained. The test problems should be standardized, so that results reported by different authors can be easily compared and reproduced. The fact that constant coefficient test problems have a very special type of spectrum (cf. Curtis (1981)) makes them somewhat exceptional; therefore test problems with variable coefficients should also be included. Unless one wishes to design a method especially for a specific problem, the special properties of a given test problem, such as for example a coefficient being constant, should not be exploited, and not be taken into account in operation counts. We have looked for suitable test problems that have already been treated by other authors, and will report results for the following problems:

(i)    phi_xx + phi_yy = 4,                                        (6.1)
(ii)   phi_xx + 0.01 phi_yy = 2.02,                                (6.2)
(iii)  0.01 phi_xx + phi_yy = 2.02,                                (6.3)
(iv)   phi_xx + 1.7 phi_xy + phi_yy = 4,                           (6.4)
(v)    u phi_x + v phi_y = 0.001(phi_xx + phi_yy) - 1,
       (u,v) = (1,0) (a), (0,1) (b), (1,1) (c), (1,-1) (d),        (6.5)
(vi)   (a phi_x)_x + (a phi_y)_y = 0,  a = |sin kx sin ky|.        (6.6)

The region is Omega = (0,1) x (0,1). The computational grid is equidistant. For problems (i) - (iv) the boundary condition is phi on the boundary = x^2 + y^2, exact solution: phi = x^2 + y^2. For (v) and (vi), phi = 0 on the boundary; exact solution not known for (v), zero for (vi). Lack of time prevented us from including cases with Neumann boundary conditions, but this should certainly be included, because the rate of convergence of some multigrid methods (but not MGD1, except for (iii)) is affected by the type of boundary condition. Standard 5-point central differencing is used on the finest grid except for (v), where upwind differencing according to Il'in (1969) is employed. There is no particular reason for using Il'in discretization in the present context; any other form of upwind discretization would lead to roughly the same results (this is an example where Galerkin coarse grid discretization is cheaper than finite differences, because of the cost of the Il'in coefficients). The initial guess of the


solution is phi = 0, except for (vi), where the initial phi is uniformly randomly distributed. The boundary conditions are not eliminated from the equations.

Besides being important for many applications in its own right (but fast solvers of Fourier analysis/cyclic reduction type (see e.g. Schumann (1980)) exist already for some time, and are competitive with multigrid methods), problem (i) typifies self-adjoint elliptic equations with smoothly varying coefficients. Problems (ii) and (iii) represent anisotropic diffusion problems or problems with strong coordinate stretching in one direction. Although mathematically almost identical, the rate of convergence of some methods (including MGD1) may differ significantly between (ii) and (iii). In (iv) we have a mixed derivative, which does not occur often in mathematical physics, but which we include for completeness. Non-orthogonal coordinate transformations give rise to mixed derivatives. The convection-diffusion problem (v) represents a singular perturbation problem of a type that is ubiquitous in fluid dynamics. Finally, (vi) represents problems with continuously slowly or rapidly varying coefficients.

Because some or even all of the specific difficulties represented by these test problems may occur simultaneously in a given application, a method should perform well for more than one problem. The method MGD1 described here performs efficiently for all six test problems, although for (iii) it is somewhat less efficient than for the other five. For fluid mechanical applications (i) and (v) should be mastered, and for reservoir engineering the method should be able to handle all problems, and in addition the more difficult test problems of Stone and Kershaw (see Kettler and Meijerink (1981), cf. Kettler (1982)), in which the coefficients are discontinuous, and the region non-rectangular. The inclusion of a Navier-Stokes test problem seems desirable; see Wesseling and Sonneveld (1980) for some results.

In the following table we list publications that give results for the test-problems above. In some cases, the right-hand-side and boundary conditions may be different.

Table 6.1 Publications on test-problems (i) - (vi). [The individual entries are illegible in this copy; the publications listed are Brandt (1977), Hackbusch (1978b), Nicolaides (1979), W,S,M, and F,R,B,S.]

Publications that consider related methods are grouped together. W,S,M stands for Wesseling and Sonneveld (1980), Wesseling (1980), Mol (1981); F,R,B,S for Foerster et al. (1981), Ries et al. (1981), Borgers (1981), Stuben et al. (1982). The bulk of the results for MGD1 quoted here are taken from W,S,M. In these publications small inconsequential differences occur, due to the fact that M eliminates the boundary conditions, whereas W does not. Where the symbols vary along a row in table 6.1, different algorithms were used for different test-problems; otherwise the same algorithm was used. Where no entry occurs, this means that we have no results at our disposal; the method may or may not be applicable. Some publications treat other test-problems, not discussed here, as well. The following table gives results for method MGD1.

         (i)          (ii)         (iii)         (iv)
M, l     8,6          10,6         4,4           7,6
rho, t   0.033, 20    0.15, 36     0.0016, 11    0.025, 19

         (va)         (vb)         (vc)          (vd)
M, l     3,4          2,4          1,4           4,4
rho, t   0.0030, 12   7e-5, 7      3e-9, 4       0.040, 21

Table 6.2 Method MGD1 applied to test-problems (i) - (v).

In table 6.2, M is the number of iterations that were carried out; l determines the number of grid-points (2^l+1)*(2^l+1) of the finest grid; rho is the average reduction factor, defined by: rho^M = quotient of Euclidean norms of residues Ax - f before and after M iterations; t = -30/log10(rho) is the number of operations per grid-point of the finest grid for a 0.1 reduction of the residual. For (v) the boundary conditions were eliminated; this is of no consequence. Of course, rho depends on M and on the initial guess; hence rho is afflicted with a certain arbitrariness, which the spectral norm and radius lack. For (i) it has been determined numerically that the spectral radius rho_inf = 0.090 (t = 29).

Clearly, for (ii) the results are worse than for the other cases. For this case the smoothing factor of ILU goes to 1 as eps -> 0, and with Neumann boundary conditions along x=0, x=1 MGD1 does not work (in that case the differential problem (6.2) is badly posed). Kettler (1982) gives an ILU-decomposition which does not suffer from this defect. With Dirichlet boundary conditions, however, MGD1 seems to be dependable for (ii), as is suggested by the following results of more extensive experiments with (ii); rho_inf is the spectral radius.

eps \ l    3       4       5       6
0.5        .038    .091    .10     .10
1e-1       .11     .22     .26     .27
1e-2       .042    .19     .41     .55
1e-4       .001    .003    .017    .068

Table 6.3 Estimated rho_inf for phi_xx + eps phi_yy = 2 + 2 eps, phi = x^2 + y^2 on the boundary.


Comparison with the work of Hackbusch (1978b) is relatively straightforward, because his method is similar to MGD1, the main differences being the use of two checkerboard (CH) or zebra-Gauss-Seidel (Z) relaxations for smoothing, and the use of 9-point prolongation and restriction. The main difference in computational work will be due to the difference in smoothing strategy. Assuming variable coefficients, two applications of CH take 18 operations per grid-point. Solution of a tri-diagonal system takes 8 operations, or 5 if the LU-decompositions are stored at a cost of 2 reals per grid-point. Including the cost of the right-hand-sides we arrive at a cost of two applications of Z of 24, or 18 with extra storage, except for test-problem (iv), where the cost is 32 or 26. CH is not applicable to (ii), (iii), (iv). Recalling that the cost of ILU is 17 and the total cost of MGD1 is 30, we estimate the total cost of the methods used by Hackbusch to be 32 with CH, 42 with Z or 32 with extra storage, and 56 or 46 for test-problem (iv). From Hackbusch (1978b) we then deduce the following table.

          (i), CH     (i), Z         (ii), Z        (iv), Z
M, l      6,6         8,6            8,6            8,6
rho, t    .048, 24    .038, 30(23)   .063, 35(27)   .199, 60(46)

Table 6.4 Computational cost of methods of Hackbusch (1978b). CH: checkerboard-Gauss-Seidel; Z: zebra-Gauss-Seidel. Between brackets: with extra storage.

Z is not applicable to (iii), because the lines are chosen in the x-direction. Also including y-lines would perhaps change the efficiency for (i) and (iv) little, but almost double the cost for (ii) and (iii). Because MGD1 and the methods of Hackbusch have much in common, this is mainly a practical comparison of smoothing methods.

The results of Brandt (1977) will not be discussed, because this early work has been extended by F,R,B,S, who give results of two-level analysis for a variety of combinations of restrictions, prolongations and smoothing processes. In the so-called W-cycle (with double the two-level cost) two-level analysis usually gives a good estimate for rho_inf. For (i) the best method (using CH smoothing and total-reduction concepts) in Ries et al. (1981) results in: rho = 0.074, cost = 23.5, t = 21. In the cost estimate the special values of the Poisson coefficients are exploited. The best method in Stuben et al. (1982) (using alternating direction Z smoothing) has for (i), (ii) and (iii):

rho_inf = 0.023, cost = 51.6, t = 31.5 for (i);  rho_inf = 0.119, cost = 51.6, t = 55.8 for (ii) and (iii),

with the cost estimate valid for variable coefficients. For (v) a detailed analysis is included by Borgers (1981); for the best method (using CH smoothing) for (v) with eps = 1e-5 on a 65x65 grid the following results emerge:

rho = 0.41, cost = 28, t = 72 for (va);  rho = 0.37, cost = 28, t = 65 for (vc),

with rho the average reduction factor over the last 14 of 20 iterations. When comparing with table 6.2 one has to keep in mind that table 6.2 gives only the initial rate of convergence, observed during the first few iterations. For (va,b,c) this is irrelevant, because here MGD1 is almost exact. For comparison we give the following numerical estimate for (vd) with eps = 1e-5 on a 65x65 grid: rho_inf = 0.29, t = 56. This concludes our comparison with F,R,B,S.

Nicolaides (1979) reports experiments with two multigrid-finite-element methods. One of these, using "linear elements", results for (i) in the same system of equations on the finest grid that we are considering here. Prolongation, restriction and coarse grid approximation are much the same as in MGD1, but smoothing consists of a few Gauss-Seidel relaxations before and after coarse grid correction. CPU-time measurements are given in units of a Gauss-Seidel relaxation, for which we take a cost of 9 (assuming variable coefficients). On a 64x64 grid t = 35.1 is reported, with a smooth initial guess, for (i) with right-hand-side zero (cf. table 6.2). For (vi) Nicolaides (1979) reports experiments with bilinear elements only, corresponding to a 9-point discretization. Here the cost of one relaxation is 17 (for variable coefficients). The following results are obtained.

k       2     4     8     16     32
N       76    82    99    107    104
MGD1    25    25    25    26     26

Table 6.5 Values of t for test-problem (vi).

Three multigrid iterations were carried out on a 65x65 grid; k is the coefficient in (6.6), t as in table 6.2, N stands for Nicolaides. The initial guess is uniformly randomly distributed. Because in this case there are significant differences between Galerkin and finite difference coarse-grid approximation, we have also tried MGD1 with finite differences; this is denoted as MGD1*. On a 33x33 grid we have estimated the spectral radius rho_inf, and obtain the following result.

k        8      16     32
MGD1     .31    .18    .13
MGD1*    .30    .30    >1

Table 6.6 Values of rho_inf for test-problem (vi).


As is to be expected, with coarse-grid finite difference approximation the method deteriorates as the rate of variation of the coefficients in the differential equation increases. This can probably be remedied by taking a suitable average of the coefficients when constructing coarse-grid finite difference approximations. The Galerkin approximation used in MGD1 does this automatically.

7. Final remarks.

A multigrid method (MGDI) has been presented, that is perceived by the user as any other linear systems solver, and requires no insight in the properties of multigrid methods. The user has to specify the matrix and the right-hand-side only. This is made possible by using Galerkin coarse-grid approximations and a fixed multigrid strategy, the so-called sawtooth cycle. The matrix should represent a 5- or a 7-point discretization of an elliptic equation on a rectangle in the usual way.

In cases with rapidly varying coefficients, the Galerkin method has the additional advantage of providing better coarse-grid approximations than straightforward finite differences.
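The fixed strategy of the sawtooth cycle (no smoothing on the way down, smoothing only after each coarse-grid correction) can be sketched as follows. The dense NumPy operators and the user-supplied smoother are illustrative assumptions, not MGDI's actual data structures or smoothing process.

```python
import numpy as np

def sawtooth_cycle(levels, b, smooth, nu=1):
    """One sawtooth cycle for A u = b on the finest grid.

    levels[l] = (A, R, P): operator, restriction and prolongation of
    level l, with level 0 the coarsest (R, P unused there).  The cycle
    descends to the coarsest grid without smoothing, solves exactly
    there, and applies nu smoothing sweeps on each finer level after
    the coarse-grid correction."""
    # Restrict the right-hand side down through all levels.
    rhs = [None] * len(levels)
    rhs[-1] = b
    for l in range(len(levels) - 1, 0, -1):
        _, R, _ = levels[l]
        rhs[l - 1] = R @ rhs[l]
    # Direct solve on the coarsest grid.
    u = np.linalg.solve(levels[0][0], rhs[0])
    # Go back up: prolongate, then smooth (no pre-smoothing).
    for l in range(1, len(levels)):
        A, _, P = levels[l]
        u = P @ u
        for _ in range(nu):
            u = smooth(A, rhs[l], u)
    return u
```

Used as a defect correction, `u = u + sawtooth_cycle(levels, b - A @ u, smooth)` is repeated until the residual is small; `smooth` could be a single Gauss-Seidel sweep.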

MGDI works well for a large variety of problems, including non-self-adjoint and singularly perturbed equations, strongly anisotropic diffusion problems, and equations with a mixed derivative, except for a certain combination of anisotropy direction and Neumann boundary conditions. This restriction is removed, and the method is generalized to more general domains and discontinuous coefficients, by Kettler and Meijerink (1981), cf. Kettler (1982).

Comparison with available results for other methods shows that the robustness of MGDI and the abandoning of any form of adaptivity is not paid for in terms of efficiency. In fact, at the moment MGDI seems generally to be somewhat faster than other, more specialized methods, except when these exploit special values of the coefficients.

In the near future even more efficient methods may evolve, because it seems unlikely that the full potential of the various methods is already fully exhausted. The combination of conjugate gradient and multigrid methods (Kettler and Meijerink (1981)) is very promising, cf. Kettler (1982). The structure of multigrid methods is very rich. Various elements of the method can be chosen in many ways: smoothing process, prolongation, restriction, coarse-grid approximation, multigrid cycle. One can speak of a multigrid "philosophy", or perhaps better of a multigrid methodology. No single "best" or "standard" method is likely to emerge, just as in other areas of numerical mathematics.


References

G.P. Astrachancev, An iterative method of solving elliptic net problems. USSR Comp. Math. Math. Phys. 11, 2, 171-182, 1971.

N.S. Bakhvalov, On the convergence of a relaxation method with natural constraints on the elliptic operator. USSR Comp. Math. Math. Phys. 6, no. 5, 101-135, 1966.

C. Börgers, Private communication: Mehrgitterverfahren für eine Mehrstellendiskretisierung der Poisson-Gleichung und für eine zweidimensionale singulär gestörte Aufgabe. Diplomarbeit (prof. Trottenberg), 1981.

D. Braess, The contraction number of a multigrid method for solving the Poisson equation. Num. Math. 37, 387-404, 1981.

A. Brandt, Multi-level adaptive technique (MLAT) for fast numerical solution to boundary value problems. In: Proc. Third Int. Conf. Num. Meth. Fluid Dyn., Paris, 1972, Lect. Notes in Phys. 18, 82-89, Springer-Verlag, Berlin etc., 1973.

A. Brandt, Multi-level adaptive solutions to boundary-value problems. Math. Comp. 31, 333-390, 1977.

A. Brandt, Multi-level adaptive solutions to singular perturbation problems. In: Numerical analysis of singular perturbation problems (P.W. Hemker, J.J. Miller, eds.), 53-142, Academic Press, New York, 1979.

A. Brandt and N. Dinar, Multi-grid solutions to elliptic flow problems. In: Numerical methods for partial differential equations (S.V. Parter, ed.), 53-149, Academic Press, New York etc., 1979.

A.R. Curtis, On a property of some test equations for finite difference or finite element methods. IMA J. Num. Anal. 1, 369-375, 1981.

R.P. Fedorenko, A relaxation method for solving elliptic difference equations. USSR Comp. Math. Math. Phys. 1, 1092-1096, 1962.

H. Foerster, K. Stüben, U. Trottenberg, Non-standard multigrid techniques using checkered relaxation and intermediate grids. In: Elliptic problem solvers (M. Schultz, ed.), 285-300, Academic Press, New York etc., 1981.

P.O. Frederickson, Fast approximate inversion of large sparse linear systems. Mathematics report 7-75, Lakehead University, Thunder Bay, Canada, 1975.

I. Gustafsson, A class of first order factorization methods. BIT 18, 142-156, 1978.

W. Hackbusch, A fast iterative method for solving Poisson's equation in a general region. In: Numerical treatment of differential equations, Oberwolfach 1976 (R. Bulirsch, R.D. Grigorieff, J. Schröder, eds.), Lecture Notes in Math. 631, Springer-Verlag, Berlin etc., 1978a.

W. Hackbusch, On the multigrid method applied to difference equations. Computing 20, 291-306, 1978b.

W. Hackbusch, On the convergence of multi-grid iterations. Beiträge zur Numerischen Mathematik 9, 213-239, 1981.

P.W. Hemker, The incomplete LU-decomposition as a relaxation method in multi-grid algorithms. In: Boundary and interior layers - computational and asymptotic methods, Proceedings, Dublin 1980 (J.J.H. Miller, ed.), Boole Press, Dublin, 1980a.

P.W. Hemker, Fourier analysis of gridfunctions, prolongations and restrictions. Report NW 93/80, Mathematical Centre, 413 Kruislaan, Amsterdam, 1980b.

P.W. Hemker, Introduction to multigrid methods. Nieuw Archief v. Wiskunde 29, 71-101, 1981.

A.M. Il'in, Differencing scheme for a differential equation with a small parameter affecting the highest derivative. Math. Notes Acad. Sc. USSR 6, 596-602, 1969.

R. Kettler, in the present volume (1982).

R. Kettler and J.A. Meijerink, A multigrid method and a combined multigrid-conjugate gradient method for elliptic problems with strongly discontinuous coefficients in general domains. KSEPL, Volmerlaan 6, Rijswijk, The Netherlands, Publication 604, 1981.

J.A. Meijerink and H.A. van der Vorst, An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix. Math. Comp. 31, 148-162, 1977.

W.J.A. Mol, On the choice of suitable operators and parameters in multigrid methods. Report NW 107/81, Mathematical Centre, Amsterdam, 1981.

R.A. Nicolaides, On some theoretical and practical aspects of multigrid methods. Math. Comp. 33, 933-952, 1979.

M. Ries, U. Trottenberg, G. Winter, A note on MGR methods. Universität Bonn, preprint no. 461, 1981.

U. Schumann, Fast elliptic solvers and their application in fluid dynamics. In: Computational fluid dynamics (W. Kollmann, ed.), Hemisphere, Washington etc., 1980.

K. Stüben, C.A. Thole, U. Trottenberg, private communication, 1982.

H.A. van der Vorst, Iterative solution methods for certain sparse linear systems with a non-symmetric matrix arising from PDE-problems. J. Comp. Phys. 44, 1-20, 1981.

E.L. Wachspress, A rational finite element basis, chapter 10. Academic Press, New York, 1975.

P. Wesseling, Numerical solution of the stationary Navier-Stokes equations by means of a multiple grid method and Newton iteration. Report NA-18, Delft University of Technology, 1977.

P. Wesseling, Theoretical and practical aspects of a multigrid method. Report NA-37, Delft University of Technology, 1980.

P. Wesseling and P. Sonneveld, Numerical experiments with a multiple grid and a preconditioned Lanczos type method. In: Approximation methods for Navier-Stokes problems, Proceedings, Paderborn 1979 (R. Rautmann, ed.), Lecture Notes in Math. 771, 543-562, Springer-Verlag, Berlin etc., 1980.
