\[
\sum_{i=1}^{N} w(\|x - x_i\|)\,\bigl(p(x_i) - u_i\bigr)^2,
\]
where $p(x) := b^\top(x)\,a(x)$ with unknown coefficients $a(x) \in \mathbb{R}^m$.
This minimization problem results in
\[
a(x) = \bigl(B\,W(x)\,B^\top\bigr)^{-1} B\,W(x)\,u, \qquad (1)
\]
and we can define the IMLS kernel functions
\[
\varphi(x) := W(x)\,B^\top \bigl(B\,W(x)\,B^\top\bigr)^{-1} b(x), \qquad (2)
\]
where
\[
B :=
\begin{pmatrix}
b_1(x_1) & \cdots & b_1(x_N) \\
\vdots & \ddots & \vdots \\
b_m(x_1) & \cdots & b_m(x_N)
\end{pmatrix}
\]
and $W(x) := \mathrm{diag}\bigl(w_1(x), \ldots, w_N(x)\bigr)$ with $w_i(x) := w(\|x - x_i\|)$.
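To make the construction concrete, here is a minimal numpy sketch of (1)–(2) in 1d; the node set, the Gaussian weight, and the linear basis are illustrative assumptions, not the choices of this paper:

```python
import numpy as np

def mls_kernels(x, nodes, weight, basis):
    """Kernel vector phi(x) of Eq. (2): W(x) B^T (B W(x) B^T)^{-1} b(x)."""
    B = np.array([basis(xi) for xi in nodes]).T          # (m, N) basis matrix
    W = np.diag([weight(abs(x - xi)) for xi in nodes])   # (N, N) weight matrix W(x)
    M = B @ W @ B.T                                      # (m, m) moment matrix
    return W @ B.T @ np.linalg.solve(M, basis(x))        # (N,) kernel values

# illustrative setup: 5 nodes in [0, 1], linear basis, Gaussian weight
nodes = np.linspace(0.0, 1.0, 5)
basis = lambda x: np.array([1.0, x])
weight = lambda d: np.exp(-(d / 0.3) ** 2)

phi = mls_kernels(0.37, nodes, weight, basis)
print(phi.sum())                    # ~1: partition of unity, Eq. (8)
print(phi @ np.sin(np.pi * nodes))  # MLS approximation u(x) of Eq. (3)
```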
MLS functions are sometimes erroneously referred to as interpolating, whereas they are only approximating. In fact, the choice of the weights is decisive: in order to obtain interpolation [1], the condition
\[
\lim_{x \to x_i} w(\|x - x_i\|) = \infty
\]
must be satisfied.
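Numerically, the effect of this condition can be observed by rerunning the sketch above with a hypothetical singular weight $w(d) = d^{-\alpha}$ (the concrete weight function (6) of the paper is not reproduced here): as the evaluation point approaches a node, the kernel vector tends to the corresponding unit vector.

```python
alpha = 2
weight_sing = lambda d: 1.0 / max(d, 1e-14) ** alpha  # singular at d = 0

for eps in (1e-2, 1e-4, 1e-6):
    phi = mls_kernels(nodes[2] + eps, nodes, weight_sing, basis)
    print(eps, np.round(phi, 6))  # tends to the unit vector e_3: interpolation
```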
The moving least squares (MLS) approximation function can be written in the form
\[
u(x) = \sum_{i=1}^{N} u_i\,\varphi_i(x) =: u^\top \varphi(x). \qquad (3)
\]
Lancaster and Šalkauskas [1] [...] The kernel functions of consistency order $n$ reproduce all monomials up to that degree:
\[
\sum_{i=1}^{N} x_i^q\,\varphi_i(x) = x^q \quad \text{for } 0 \le q \le n. \qquad (7)
\]
2.2. Partition of unity

For the special case $n = 0$ (the 0th consistency order) we obtain
\[
\sum_{i=1}^{N} \varphi_i(x) = 1, \qquad (8)
\]
which means that the kernel functions computed at any point of the domain build a partition of unity.
The IMLS interpolation is called Shepard interpolation if the basis $b(x) := 1$ is employed. The Shepard interpolant, however, is not suited [6] for computing derivatives of any order.
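As a minimal illustration under the same assumed setup, with $b(x) := 1$ Eq. (2) collapses to the familiar normalized weights:

```python
def shepard_kernels(x, nodes, weight):
    # b(x) := 1 reduces Eq. (2) to phi_i(x) = w_i(x) / sum_s w_s(x)
    w = np.array([weight(abs(x - xi)) for xi in nodes])
    return w / w.sum()

print(shepard_kernels(0.37, nodes, weight).sum())  # exactly 1, Eq. (8)
```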
Fig. 1 presents the MLS and IMLS kernel functions in 1d, respectively, with the parameter $\alpha$ of the weight function (6) equal to 2.
One can clearly see the difference between the approximating and the interpolating case. The IMLS kernel functions enjoy the Kronecker delta property
\[
\varphi_i(x_j) = \delta_{ij}, \qquad (9)
\]
which implies $u(x_i) = u_i$. This is not the case for MLS kernel functions. In both cases, however, the kernel functions build a so-called partition of unity, represented by the black lines in Fig. 1, i.e.
\[
\sum_{i=1}^{N} \varphi_i(x) = 1. \qquad (10)
\]
Applying the gradient operator, we obtain that the gradients of all kernel functions sum to zero and thus constitute a partition of nullity:
\[
\sum_{i=1}^{N} \nabla \varphi_i(x) = 0. \qquad (11)
\]
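Both properties are easy to verify numerically with the earlier kernel sketch, e.g. by a central finite difference (the step size is an arbitrary choice):

```python
h, x0 = 1e-6, 0.37
s_plus = mls_kernels(x0 + h, nodes, weight, basis).sum()
s_minus = mls_kernels(x0 - h, nodes, weight, basis).sum()
print(s_plus)                        # ~1: partition of unity, Eq. (10)
print((s_plus - s_minus) / (2 * h))  # ~0: partition of nullity, Eq. (11)
```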
3. Singular matrix
In the approximating case, i.e. when we use the MLS method, the inversion of the matrix in (2) does not cause many problems. Since this is a small $(m \times m)$ matrix, where $m$ depends on the order of the basis, the inversion can in many cases be done via LU decomposition with pivoting at relatively little cost. Nonetheless, if the matrix is ill-conditioned, the LU decomposition may lead to wrong results [3]. One should therefore use a QR or singular value decomposition (SVD); see [7] or [8] for details.
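In code, this amounts to falling back from an LU solve to an SVD-based pseudo-inverse; a sketch, in which the condition-number threshold rcond is an assumption:

```python
def solve_moment_system(M, rhs, rcond=1e-12):
    """Solve M a = rhs for the small (m x m) moment matrix."""
    if np.linalg.cond(M) < 1.0 / rcond:
        return np.linalg.solve(M, rhs)           # LU with partial pivoting
    return np.linalg.pinv(M, rcond=rcond) @ rhs  # SVD-based fallback
```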
We run into much more trouble if we want to use IMLS. The singularity at the point $x = x_i$ in (6), and therefore also in (2), occurs due to the specific choice of the weight function. It can be overcome using the strategy originally presented in [3], where the author derives the formulae for the first derivative. We extend this approach and also obtain the formulae for the second derivative.
In order to cure the singularity problem, we introduce a regularization by a small positive parameter $\varepsilon$ within the regularized weight function matrix $W(x_\varepsilon)$, where $x_\varepsilon := x + \varepsilon$ denotes the evaluation point shifted away from the node by $\varepsilon$. Then, (2) reads
\[
\varphi(x_\varepsilon) = W(x_\varepsilon)\,B^\top \bigl(B\,W(x_\varepsilon)\,B^\top\bigr)^{-1} b(x_\varepsilon). \qquad (12)
\]
In order to study the limit $\varepsilon \to 0$, we define the matrix
\[
M(x_\varepsilon) := B\,W(x_\varepsilon)\,B^\top
\]
and split it into two matrices, taking out the singularity, as follows:
\[
M(x_i) = \widetilde{M}(x_i) + b(x_i)\,w(\varepsilon)\,b^\top(x_i), \qquad (13)
\]
whereby the elements of $\widetilde{M}$ are defined as
\[
\widetilde{M}_{kl}(x_i) = \sum_{\substack{s=1 \\ s \ne i}}^{N} w(\|x_i - x_s\|)\,b_k(x_s)\,b_l(x_s). \qquad (14)
\]
Note that $\widetilde{M}$ is now regular and (in theory) a perfectly invertible (though still ill-conditioned) matrix, since the singular weighting factor $w(\varepsilon)$, the regularized value of $w_i(x_i)$, has been taken out. However, in order to get an admissible result, one has to use a singular value decomposition (SVD) algorithm [9].
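In code, assembling $\widetilde{M}(x_i)$ of (14) simply skips the node $s = i$; a sketch reusing the placeholder names from above:

```python
def moment_matrix_regular(i, nodes, weight, basis):
    """M-tilde(x_i) of Eq. (14): moment matrix without the singular term."""
    m = len(basis(nodes[i]))
    Mt = np.zeros((m, m))
    for s, xs in enumerate(nodes):
        if s == i:                 # the singular factor w(0) is taken out
            continue
        bs = basis(xs)
        Mt += weight(abs(nodes[i] - xs)) * np.outer(bs, bs)
    return Mt
```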
Using the Sherman–Morrison formula [3], we get the inverse of the original matrix $M$ in terms of the inverse of the modified matrix $\widetilde{M}$:
\[
M^{-1}_{kl}(x_i) = \widetilde{M}^{-1}_{kl} - \frac{w(\varepsilon)}{1 + w(\varepsilon)\sigma} \sum_{s,t=1}^{m} \widetilde{M}^{-1}_{ks}\,\widetilde{M}^{-1}_{lt}\,b_s(x_i)\,b_t(x_i), \qquad (15)
\]
where $\sigma = \sum_{k,l=1}^{m} \widetilde{M}^{-1}_{kl}\,b_k(x_i)\,b_l(x_i)$.
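In matrix notation, (15) is the standard Sherman–Morrison update for the rank-one term in (13); a sketch with a direct inverse as cross-check (the regularized weight value w_eps is a placeholder):

```python
def sherman_morrison_inverse(Mt_inv, b_i, w_eps):
    """M^{-1} from M-tilde^{-1} for M = M-tilde + w_eps b_i b_i^T, Eq. (15)."""
    v = Mt_inv @ b_i              # M-tilde^{-1} b(x_i)
    sigma = b_i @ v               # sigma of Eq. (15)
    return Mt_inv - (w_eps / (1.0 + w_eps * sigma)) * np.outer(v, v)

i, w_eps = 2, 1.0e8
b_i = basis(nodes[i])
Mt = moment_matrix_regular(i, nodes, weight, basis)
M = Mt + w_eps * np.outer(b_i, b_i)
print(np.allclose(sherman_morrison_inverse(np.linalg.inv(Mt), b_i, w_eps),
                  np.linalg.inv(M)))
```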
Substituting $w(\varepsilon)$ by $w(\varepsilon) = \widetilde{w}(\varepsilon)/\varepsilon^{\alpha}$, where $\widetilde{w}(\varepsilon)$ is the non-singular weight function, leads to
\[
\frac{w(\varepsilon)\sigma}{1 + w(\varepsilon)\sigma} = \frac{\widetilde{w}(\varepsilon)\sigma}{\varepsilon^{\alpha} + \widetilde{w}(\varepsilon)\sigma} = 1 - O(\varepsilon^{\alpha}), \qquad (16)
\]
[Fig. 1. Kernel functions for the approximating (a) and the interpolating (b) case in 1d.]
\[
\frac{w(\varepsilon)}{1 + w(\varepsilon)\sigma} = \frac{\widetilde{w}(\varepsilon)}{\varepsilon^{\alpha} + \widetilde{w}(\varepsilon)\sigma} = \sigma^{-1} - O(\varepsilon^{\alpha}), \qquad (17)
\]
\[
\frac{1}{1 + w(\varepsilon)\sigma} = \frac{\varepsilon^{\alpha}}{\varepsilon^{\alpha} + \widetilde{w}(\varepsilon)\sigma} = 0 + O(\varepsilon^{\alpha}). \qquad (18)
\]
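These limits are easy to observe numerically, e.g. with the placeholder values $\sigma = 2$, $\widetilde{w}(\varepsilon) \equiv 1$, $\alpha = 2$:

```python
sigma, alpha = 2.0, 2.0
for eps in (1e-1, 1e-3, 1e-6):
    w = 1.0 / eps ** alpha              # w(eps) = w-tilde(eps) / eps^alpha
    print(w * sigma / (1 + w * sigma),  # -> 1,        Eq. (16)
          w / (1 + w * sigma),          # -> 1/sigma,  Eq. (17)
          1.0 / (1 + w * sigma))        # -> 0,        Eq. (18)
```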
Applying the Taylor expansion to the non-singular terms in (2), we obtain
\[
\varphi_j(x_{i,\varepsilon}) = w_j(x_i) \sum_{k=1}^{m} \sum_{l=1}^{m} M^{-1}_{kl}(x_i)\,b_k(x_j) \Bigl[ b_l(x_i) + \nabla b_l(x_i)\,\varepsilon + \frac{\varepsilon^2}{2}\,\nabla^2 b_l(x_i) + O(\varepsilon^3) \Bigr], \quad j \ne i, \qquad (19)
\]
and
\[
\varphi_i(x_{i,\varepsilon}) = w_i(\varepsilon) \sum_{k=1}^{m} \sum_{l=1}^{m} M^{-1}_{kl}(x_i)\,b_k(x_i) \Bigl[ b_l(x_i) + \nabla b_l(x_i)\,\varepsilon + \frac{\varepsilon^2}{2}\,\nabla^2 b_l(x_i) + O(\varepsilon^3) \Bigr]. \qquad (20)
\]
Using (15)–(18), the kernel functions (19) and (20) read as
\[
\varphi_i(x_{i,\varepsilon}) = 1 + \sigma^{-1} \sum_{k,l=1}^{m} \widetilde{M}^{-1}_{kl}\,b_k(x_i)\,\nabla b_l(x_i)\,\varepsilon + \frac{\varepsilon^2}{2}\,\sigma^{-1} \sum_{k,l=1}^{m} \widetilde{M}^{-1}_{kl}\,b_k(x_i)\,\nabla^2 b_l(x_i) + O(\varepsilon^3), \qquad (21)
\]
and
\[
\varphi_j(x_{i,\varepsilon}) = w(\|x_i - x_j\|) \Bigl[ \sum_{k,l=1}^{m} Y_{kl}(x_i)\,b_k(x_j)\,\nabla b_l(x_i)\,\varepsilon + \frac{\varepsilon^2}{2} \sum_{k,l=1}^{m} Y_{kl}(x_i)\,b_k(x_j)\,\nabla^2 b_l(x_i) + O(\varepsilon^3) \Bigr], \qquad (22)
\]
where
\[
Y_{kl}(x_i) = \widetilde{M}^{-1}_{kl} - \sigma^{-1} \sum_{s,t=1}^{m} \widetilde{M}^{-1}_{ks}\,\widetilde{M}^{-1}_{lt}\,b_s(x_i)\,b_t(x_i). \qquad (23)
\]
Hence, for $\varepsilon \to 0$ we obtain from (21) and (22) the Kronecker delta property for the kernel functions and also the formulae for their derivatives:
\[
\nabla \varphi_i(x_i) = \sigma^{-1} \sum_{k,l=1}^{m} \widetilde{M}^{-1}_{kl}\,b_k(x_i)\,\nabla b_l(x_i), \qquad (24)
\]
\[
\nabla \varphi_j(x_i) = w(\|x_i - x_j\|) \sum_{k,l=1}^{m} Y_{kl}(x_i)\,b_k(x_j)\,\nabla b_l(x_i), \qquad (25)
\]
\[
\nabla^2 \varphi_i(x_i) = \sigma^{-1} \sum_{k,l=1}^{m} \widetilde{M}^{-1}_{kl}\,b_k(x_i)\,\nabla^2 b_l(x_i), \qquad (26)
\]
\[
\nabla^2 \varphi_j(x_i) = w(\|x_i - x_j\|) \sum_{k,l=1}^{m} Y_{kl}(x_i)\,b_k(x_j)\,\nabla^2 b_l(x_i), \qquad (27)
\]
or in matrix form
\[
D^k[\varphi_i(x_i)] = \sigma^{-1}\,b^\top(x_i)\,\widetilde{M}^{-1}(x_i)\,D^k[b(x_i)], \qquad (28)
\]
\[
D^k[\varphi_j(x_i)] = w(\|x_i - x_j\|)\,b^\top(x_j)\,H(x_i)\,D^k[b(x_i)], \qquad (29)
\]
with
\[
k = 1, 2, \quad D^k := \frac{\partial^k}{\partial x^k}, \quad \sigma = b^\top(x_i)\,\widetilde{M}^{-1}(x_i)\,b(x_i),
\]
\[
H(x_i) = \widetilde{M}^{-1}(x_i) - \sigma^{-1}\bigl(\widetilde{M}^{-1}(x_i)\,b(x_i)\bigr) \otimes \bigl(\widetilde{M}^{-1}(x_i)\,b(x_i)\bigr).
\]
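A sketch of (28)–(29) built from the helpers above; the analytic basis derivatives are supplied by a hypothetical dbasis(x, k), here for the assumed linear 1d basis, and the singular weight from the earlier sketch is used:

```python
def kernel_derivative(i, j, k, nodes, weight, basis, dbasis):
    """D^k[phi_j(x_i)], k = 1 or 2, via Eqs. (28)-(29)."""
    xi = nodes[i]
    b_i = basis(xi)
    Mt_inv = np.linalg.inv(moment_matrix_regular(i, nodes, weight, basis))
    v = Mt_inv @ b_i
    sigma = b_i @ v                          # sigma of Eq. (28)
    if j == i:                               # Eq. (28)
        return (b_i @ Mt_inv @ dbasis(xi, k)) / sigma
    H = Mt_inv - np.outer(v, v) / sigma      # H(x_i) of Eq. (29)
    return weight(abs(xi - nodes[j])) * basis(nodes[j]) @ H @ dbasis(xi, k)

# derivatives of the linear basis b(x) = (1, x)
dbasis = lambda x, k: np.array([0.0, 1.0]) if k == 1 else np.array([0.0, 0.0])
print(kernel_derivative(2, 1, 1, nodes, weight_sing, basis, dbasis))
```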
4. Numerical convergence study
To check the efficiency of the method, a model Dirichlet boundary value problem with a known analytical solution has been solved in strong form using the presented meshfree collocation approach. For a detailed description of the meshfree collocation approach based on IMLS, as well as the implementation of boundary conditions, we refer to [7].
We solved the boundary value problem
\[
\Delta u = 0 \quad \text{on } 0 < x_1, x_2 < 1,
\]
with boundary values
\[
u(x_1, 0) = \sin(\pi x_1), \quad u(0, x_2) = \sin(\pi x_2), \quad u(x_1, 1) = u(1, x_2) = 0.
\]
The exact solution to this problem is given by
\[
u(x_1, x_2) = \frac{\sinh(\pi(1 - x_1))}{\sinh \pi}\,\sin(\pi x_2) + \frac{\sinh(\pi(1 - x_2))}{\sinh \pi}\,\sin(\pi x_1).
\]
We used a quadratic basis and $N$ uniformly distributed points in the domain with meshsize $h$. We have computed the relative errors
\[
\mathrm{Err} := \frac{\|u_{\mathrm{appx}} - u_{\mathrm{ex}}\|_\infty}{\|u_{\mathrm{ex}}\|_\infty},
\]
where $u_{\mathrm{appx}}$ is the numerical solution and $u_{\mathrm{ex}}$ the exact solution, and the experimental order of convergence
\[
\mathrm{EOC} := \frac{\log\bigl[\mathrm{Err}_{h_1}/\mathrm{Err}_{h_2}\bigr]}{\log\bigl[h_1/h_2\bigr]}.
\]
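For completeness, the error measures and the exact solution in code (a sketch; the numerical solution u_appx is assumed to come from the IMLS collocation solver):

```python
def u_exact(x1, x2):
    """Exact solution of the model Dirichlet problem."""
    return (np.sinh(np.pi * (1 - x1)) / np.sinh(np.pi) * np.sin(np.pi * x2)
            + np.sinh(np.pi * (1 - x2)) / np.sinh(np.pi) * np.sin(np.pi * x1))

def relative_error(u_appx, u_ex):
    return np.max(np.abs(u_appx - u_ex)) / np.max(np.abs(u_ex))  # Err

def eoc(err_h1, err_h2, h1, h2):
    return np.log(err_h1 / err_h2) / np.log(h1 / h2)  # experimental order
```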
Fig. 2 shows the relative error Err of the approximated solution vs. the meshsize $h$ for different values of the parameter $\alpha$ of the weight function (6). The results are presented in Table 1. We obtained third-order convergence.
5. Conclusions
In this paper we presented a technique for the enforcement of boundary conditions in meshfree methods based on interpolating moving least squares. The usage of singular weights guarantees, on the one hand, the interpolating properties of the kernel functions used for the solution of PDEs; on the other hand, it leads to the problem of inverting a singular matrix. The inversion is carried out using the regularization technique, and a stable inverse is obtained in the limit of a vanishing regularization parameter.

We have solved a model boundary value problem with Dirichlet boundary conditions and obtained the expected third order of convergence. The implementation of the presented method is straightforward and computationally cheaper than weighted-residual methods, since no integration is required.
References

[1] Lancaster P, Šalkauskas K. Surfaces generated by moving least squares methods. Math Comput 1981;37:141–58.