978-1-4799-0446-4/13/$31.00 ©2013 IEEE
2013 IEEE International Symposium on Information Theory

$\|x-\lambda\|$. We define the modulo lattice operation by $x \bmod \Lambda \triangleq x - Q_\Lambda(x)$. The Voronoi cell of $\Lambda$, defined by $\mathcal{V}(\Lambda) = \{x : Q_\Lambda(x) = 0\}$, specifies the nearest-neighbor decoding region.
The theta series of $\Lambda$ (see, e.g., [8]) is defined as
$$\Theta_\Lambda(q) = \sum_{\lambda \in \Lambda} q^{\|\lambda\|^2} \qquad (1)$$
where $q = e^{j\pi z}$ ($\Im(z) > 0$). Letting $z$ be purely imaginary, and assuming $\tau = \Im(z) > 0$, we can alternatively express the theta series as
$$\Theta_\Lambda(\tau) = \sum_{\lambda \in \Lambda} e^{-\pi\tau\|\lambda\|^2}. \qquad (2)$$
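As a quick numerical illustration (our own sketch, not part of the paper; the helper names and the truncation radius `K` are assumptions), the series (2) can be evaluated for the integer lattice $\mathbb{Z}^n$, whose theta series factorizes as the $n$-th power of that of $\mathbb{Z}$:

```python
import math

def theta_Z(tau, K=50):
    """Truncated theta series of the integer lattice Z:
    Theta_Z(tau) = sum over k in Z of exp(-pi * tau * k^2)."""
    return 1.0 + 2.0 * sum(math.exp(-math.pi * tau * k * k) for k in range(1, K + 1))

def theta_Zn(tau, n, K=50):
    """Z^n is a direct product of n copies of Z, so its theta series
    is the n-th power of Theta_Z."""
    return theta_Z(tau, K) ** n
```

For moderate $\tau$ the terms decay very fast, so a small truncation radius already gives machine-precision values.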
Let us also introduce the notion of lattices which are good
for the Gaussian channel without power constraint:
Definition 1 (AWGN-good). Given $\sigma^2 > 0$ and an $n$-dimensional lattice $\Lambda$, let $W_\sigma^n$ be an i.i.d. Gaussian random vector of variance $\sigma^2$ per dimension such that $P\{W_\sigma^n \notin \mathcal{V}(\Lambda)\} = \varepsilon$. Consider the corresponding volume-to-noise ratio (VNR)
$$\gamma_\Lambda(\sigma) = \frac{V(\Lambda)^{2/n}}{\sigma^2}.$$
A sequence of lattices $\Lambda^{(n)}$ is AWGN-good if $\lim_{n\to\infty} \gamma_{\Lambda^{(n)}}(\sigma) = 2\pi e$ and if, for a fixed VNR greater than $2\pi e$, the probability $P\{W_\sigma^n \notin \mathcal{V}(\Lambda^{(n)})\}$ vanishes in $n$.
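The VNR of Definition 1 is straightforward to compute; the following minimal sketch (hypothetical helper names, our own illustration) evaluates it and the $2\pi e$ benchmark of AWGN-goodness:

```python
import math

def vnr(volume, n, sigma):
    """Volume-to-noise ratio gamma_Lambda(sigma) = V(Lambda)^(2/n) / sigma^2."""
    return volume ** (2.0 / n) / sigma ** 2

def awgn_threshold():
    """AWGN-goodness benchmark: for an AWGN-good sequence, the error
    probability vanishes whenever the VNR exceeds 2*pi*e."""
    return 2.0 * math.pi * math.e
```

For example, $\mathbb{Z}^n$ has $V(\Lambda) = 1$, so `vnr(1.0, n, sigma)` is simply $1/\sigma^2$.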
Erez and Zamir [2] showed that lattice coding and decoding can achieve the capacity of the Gaussian channel. More precisely, one can prove the existence of a sequence of nested lattices $\Lambda_s^{(n)} \subseteq \Lambda_f^{(n)}$ such that
- the shaping lattice $\Lambda_s^{(n)}$ is quantization-good,
- the fine lattice $\Lambda_f^{(n)}$ is AWGN-good.
When a random dither at the transmitter and an MMSE filter at the receiver are used, the Voronoi signal constellation $\Lambda_f^{(n)} \cap \mathcal{V}(\Lambda_s^{(n)})$ approaches the capacity of the mod-$\Lambda_s^{(n)}$ Gaussian channel, and consequently the capacity of the Gaussian channel, when $n$ is large (see [2]).
B. Lattice Gaussian Distribution
For $\sigma > 0$ and $c \in \mathbb{R}^n$, we define the Gaussian distribution of variance $\sigma^2$ centered at $c \in \mathbb{R}^n$ as
$$f_{\sigma,c}(x) = \frac{1}{(\sqrt{2\pi}\sigma)^n} e^{-\frac{\|x-c\|^2}{2\sigma^2}},$$
for all $x \in \mathbb{R}^n$. For convenience, we write $f_\sigma(x) = f_{\sigma,0}(x)$.
We also consider the $\Lambda$-periodic function
$$f_{\sigma,\Lambda}(x) = \sum_{\lambda \in \Lambda} f_{\sigma,\lambda}(x) = \frac{1}{(\sqrt{2\pi}\sigma)^n} \sum_{\lambda \in \Lambda} e^{-\frac{\|x-\lambda\|^2}{2\sigma^2}}, \qquad (3)$$
for all $x \in \mathbb{R}^n$. Observe that $f_{\sigma,\Lambda}$ restricted to the quotient $\mathbb{R}^n/\Lambda$ is a probability density.
We define the discrete Gaussian distribution over $\Lambda$ centered at $c \in \mathbb{R}^n$ as the following discrete distribution taking values in $\lambda \in \Lambda$:
$$D_{\Lambda,\sigma,c}(\lambda) = \frac{f_{\sigma,c}(\lambda)}{f_{\sigma,c}(\Lambda)}, \quad \forall \lambda \in \Lambda,$$
where $f_{\sigma,c}(\Lambda) \triangleq \sum_{\lambda \in \Lambda} f_{\sigma,c}(\lambda)$. Again for convenience, we write $D_{\Lambda,\sigma} = D_{\Lambda,\sigma,0}$.
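For the one-dimensional lattice $\Lambda = \mathbb{Z}$ the discrete Gaussian can be tabulated and sampled directly; the sketch below (our own illustration; the function names and the truncation radius `K` are assumptions, and the constant prefactor of $f_{\sigma,c}$ cancels in the ratio) normalizes a truncated window of weights:

```python
import math
import random

def discrete_gaussian_pmf(sigma, c=0.0, K=40):
    """Discrete Gaussian D_{Z,sigma,c} over the integers, truncated to
    {round(c)-K, ..., round(c)+K}; the tail mass outside this window
    is negligible for moderate K/sigma."""
    center = round(c)
    support = range(center - K, center + K + 1)
    w = {lam: math.exp(-((lam - c) ** 2) / (2 * sigma ** 2)) for lam in support}
    Z = sum(w.values())  # plays the role of f_{sigma,c}(Z) up to a constant
    return {lam: wl / Z for lam, wl in w.items()}

def sample_discrete_gaussian(sigma, c=0.0, K=40, rng=random):
    """Draw one sample from D_{Z,sigma,c} by inversion sampling."""
    pmf = discrete_gaussian_pmf(sigma, c, K)
    u = rng.random()
    acc = 0.0
    for lam, p in sorted(pmf.items()):
        acc += p
        if u <= acc:
            return lam
    return max(pmf)  # numerical safeguard against rounding of the CDF
```

For $c = 0$ the distribution is symmetric about the origin, as the definition requires.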
It will be useful to define the discrete Gaussian distribution over a coset of $\Lambda$, i.e., the shifted lattice $\Lambda - c$:
$$D_{\Lambda-c,\sigma}(\lambda - c) = \frac{f_\sigma(\lambda - c)}{f_{\sigma,c}(\Lambda)}, \quad \forall \lambda \in \Lambda.$$
Note the relation $D_{\Lambda-c,\sigma}(\lambda - c) = D_{\Lambda,\sigma,c}(\lambda)$; namely, they are shifted versions of each other.
C. Flatness Factor
The flatness factor of a lattice $\Lambda$ quantifies the maximum variation of $f_{\sigma,\Lambda}(x)$ for $x \in \mathbb{R}^n$.

Definition 2 (Flatness factor [5]). For a lattice $\Lambda$ and for a parameter $\sigma$, the flatness factor is defined by:
$$\epsilon_\Lambda(\sigma) \triangleq \max_{x \in \mathcal{R}(\Lambda)} \left| V(\Lambda) f_{\sigma,\Lambda}(x) - 1 \right|$$
where $\mathcal{R}(\Lambda)$ is a fundamental region of $\Lambda$.

In other words, $f_{\sigma,\Lambda}(x)$ is within $1 \pm \epsilon_\Lambda(\sigma)$ from the uniform distribution $1/V(\Lambda)$ over a fundamental region (see [5]). We have:
$$\epsilon_\Lambda(\sigma) = \left(\frac{\gamma_\Lambda(\sigma)}{2\pi}\right)^{\frac{n}{2}} \Theta_\Lambda\!\left(\frac{1}{2\pi\sigma^2}\right) - 1.$$
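The two expressions for the flatness factor can be cross-checked numerically for $\Lambda = \mathbb{Z}$ (where $V(\Lambda) = 1$ and $\gamma_\Lambda(\sigma) = 1/\sigma^2$). This is a hypothetical sketch of our own; the function names, grid size, and truncation radius are assumptions:

```python
import math

def f_sigma_lattice(x, sigma, K=60):
    """Lambda-periodic density f_{sigma,Z}(x) of (3) for Lambda = Z (truncated)."""
    return sum(math.exp(-((x - k) ** 2) / (2 * sigma ** 2))
               for k in range(-K, K + 1)) / (math.sqrt(2 * math.pi) * sigma)

def flatness_factor_direct(sigma, grid=2000):
    """epsilon_Z(sigma) via Definition 2: max over x in [0,1) of
    |V(Z) * f_{sigma,Z}(x) - 1|, with V(Z) = 1."""
    return max(abs(f_sigma_lattice(i / grid, sigma) - 1.0) for i in range(grid))

def flatness_factor_theta(sigma, K=60):
    """Same quantity via the theta-series identity
    epsilon_Z(sigma) = (gamma_Z(sigma)/(2*pi))^(1/2) * Theta_Z(1/(2*pi*sigma^2)) - 1,
    which for n = 1 reduces to Theta / (sqrt(2*pi) * sigma) - 1."""
    theta = 1.0 + 2.0 * sum(math.exp(-k * k / (2 * sigma ** 2)) for k in range(1, K + 1))
    return theta / (math.sqrt(2 * math.pi) * sigma) - 1.0
```

Both routes agree, and the flatness factor decreases rapidly as $\sigma$ grows, matching the intuition that a wide Gaussian folds into a nearly uniform density on $\mathbb{R}/\mathbb{Z}$.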
Consider the ensemble of mod-$p$ lattices (Construction A) [9]. For integer $p > 0$, let $\pi: \mathbb{Z}^n \to \mathbb{Z}_p^n : v \mapsto \bar{v}$ be the element-wise reduction modulo $p$. The mod-$p$ lattices are defined as $\Lambda_C \triangleq \{v \in \mathbb{Z}^n : \pi(v) \in C\}$, where $p$ is a prime and $C$ is a linear code over $\mathbb{Z}_p$. Quite often, scaled mod-$p$ lattices $a\Lambda_C \triangleq \{av : v \in \Lambda_C\}$ for some $a \in \mathbb{R}^+$ are used. The fundamental volume of such a lattice is $V(a\Lambda_C) = a^n p^{n-k}$, where $n$ and $k$ are the block length and dimension of the code $C$, respectively.
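Construction A is easy to make concrete. The sketch below (our own illustration with hypothetical helper names) tests membership in $\Lambda_C$ and evaluates the fundamental volume; a natural toy example is the $(3,1)$ repetition code over $\mathbb{Z}_3$:

```python
def in_mod_p_lattice(v, codewords, p):
    """Membership test for the Construction A lattice
    Lambda_C = { v in Z^n : (v mod p) in C }, with C given as a set
    of codeword tuples over Z_p."""
    return tuple(x % p for x in v) in codewords

def fundamental_volume(a, n, k, p):
    """V(a * Lambda_C) = a^n * p^(n - k) for an (n, k) linear code over Z_p."""
    return a ** n * p ** (n - k)
```

For the repetition code $C = \{(0,0,0), (1,1,1), (2,2,2)\}$ over $\mathbb{Z}_3$, the vector $(4,1,1)$ reduces to $(1,1,1)$ and hence lies in $\Lambda_C$, and $V(\Lambda_C) = 3^{3-1} = 9$.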
The following result, derived from the Minkowski-Hlawka theorem, guarantees the existence of sequences of mod-$p$ lattices whose flatness factors vanish as $n \to \infty$.

Theorem 1 ([5]). For any $\sigma > 0$ and $\delta > 0$, there exists a sequence of mod-$p$ lattices $\Lambda^{(n)}$ such that
$$\epsilon_{\Lambda^{(n)}}(\sigma) \le (1+\delta) \left(\frac{\gamma_{\Lambda^{(n)}}(\sigma)}{2\pi}\right)^{\frac{n}{2}}, \qquad (4)$$
i.e., the flatness factor goes to zero exponentially for any fixed VNR $\gamma_{\Lambda^{(n)}}(\sigma) < 2\pi$.
D. Properties of the Flatness Factor
In this subsection we collect properties of the flatness factor that will be useful in the paper. In a nutshell, the flatness factor serves two purposes. On one hand, it makes the folded distribution (3) flat; on the other hand, it smooths the discrete Gaussian distribution.
From the definition of the flatness factor, one can derive the following result:

Lemma 1. For all $c \in \mathbb{R}^n$ and $\sigma > 0$, we have:
$$f_{\sigma,c}(\Lambda) \in \left[1 - \epsilon_\Lambda(\sigma),\ 1 + \epsilon_\Lambda(\sigma)\right] \frac{1}{V(\Lambda)}.$$
The following result shows that the variance per dimension of the discrete Gaussian $D_{\Lambda,\sigma,c}$ is not too far from $\sigma^2$ when the flatness factor is small [5].

Lemma 2. Let $L$ be sampled from the discrete Gaussian distribution $D_{\Lambda,\sigma,c}$. If $\varepsilon \triangleq \epsilon_\Lambda\big(\sigma/\sqrt{\pi/(\pi-1)}\big) < 1$, then
$$\left| \mathbb{E}\left[\|L - c\|^2\right] - n\sigma^2 \right| \le \frac{2\pi\varepsilon}{1-\varepsilon}\,\sigma^2.$$
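Lemma 2 can be verified numerically for $\Lambda = \mathbb{Z}$ by computing the second moment of the discrete Gaussian exactly over a truncated support (our own sketch; the function name and truncation radius are assumptions):

```python
import math

def discrete_gaussian_second_moment(sigma, c=0.0, K=60):
    """E[(L - c)^2] for L ~ D_{Z,sigma,c}, by direct (truncated) summation
    of the normalized Gaussian weights."""
    center = round(c)
    lams = range(center - K, center + K + 1)
    w = [math.exp(-((l - c) ** 2) / (2 * sigma ** 2)) for l in lams]
    Z = sum(w)
    return sum(wi * (l - c) ** 2 for wi, l in zip(w, lams)) / Z
```

For $\sigma$ around 1 or larger, the flatness factor of $\mathbb{Z}$ is already tiny, so the second moment sits extremely close to $\sigma^2$, in line with the lemma.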
From the maximum-entropy principle [10, Chap. 11], it follows that the discrete Gaussian distribution maximizes the entropy given the average energy and given the same support over a lattice. This is still so even if we restrict the constellation to a finite region of a lattice. The following lemma further shows that if the flatness factor is small, the entropy rate of a discrete Gaussian $D_{\Lambda,\sigma,c}$ is almost equal to the differential entropy of a continuous Gaussian vector of variance $\sigma^2$ per dimension, minus $\log V(\Lambda)$, that of a uniform distribution over the fundamental region of $\Lambda$.

Lemma 3 (Entropy of discrete Gaussian [5]). Let $L \sim D_{\Lambda,\sigma,c}$. If $\varepsilon \triangleq \epsilon_\Lambda\big(\sigma/\sqrt{\pi/(\pi-1)}\big) < 1$, then the entropy of $L$ satisfies
$$\left| H(L) - \left[ n \log(\sqrt{2\pi e}\,\sigma) - \log V(\Lambda) \right] \right| \le \epsilon',$$
where $\epsilon' = -\log(1-\varepsilon) + \frac{\pi\varepsilon}{1-\varepsilon}$.
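Lemma 3 is also easy to check for $\Lambda = \mathbb{Z}$ (where $n = 1$ and $V(\Lambda) = 1$): the entropy of the discrete Gaussian converges to the differential entropy $\log(\sqrt{2\pi e}\,\sigma)$ of its continuous counterpart. A minimal sketch of our own, with hypothetical function names:

```python
import math

def discrete_gaussian_entropy(sigma, c=0.0, K=60):
    """Entropy (in nats) of L ~ D_{Z,sigma,c}, by truncated summation.
    Weights that underflow to 0.0 are skipped (they contribute nothing)."""
    center = round(c)
    w = [math.exp(-((l - c) ** 2) / (2 * sigma ** 2))
         for l in range(center - K, center + K + 1)]
    Z = sum(w)
    return -sum((wi / Z) * math.log(wi / Z) for wi in w if wi > 0)

def gaussian_entropy_proxy(sigma, volume=1.0):
    """The benchmark of Lemma 3 for n = 1:
    log(sqrt(2*pi*e)*sigma) - log V(Lambda)."""
    return math.log(math.sqrt(2 * math.pi * math.e) * sigma) - math.log(volume)
```

At $\sigma = 2$ the flatness factor is negligible and the two quantities agree to many digits, illustrating the lemma's bound $\epsilon'$.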
III. LATTICE GAUSSIAN CODEBOOK AND ERROR
PROBABILITY
A. The Proposed Scheme
Now we describe the proposed coding scheme based on the lattice Gaussian distribution for the AWGN channel with power constraint $P$. The SNR is given by $\mathsf{SNR} = P/\sigma^2$ for noise variance $\sigma^2$. Let $L$ be an AWGN-good lattice of dimension $n_L$. For the sake of generality, let the codebook be $L - c$, where $c$ is a proper shift, as is often the case for various reasons in practice [11]. The encoder maps the information bits to points in $L - c$, which obey the lattice Gaussian distribution $D_{L-c,\sigma_0}$:
$$D_{L-c,\sigma_0}(x) = \frac{1}{(\sqrt{2\pi}\sigma_0)^{n_L}} \frac{e^{-\frac{\|x\|^2}{2\sigma_0^2}}}{f_{\sigma_0,c}(L)}, \quad x \in L - c.$$
We assume the flatness factor is small, under certain conditions to be made precise in the following. In particular, this means that the transmission power of this scheme is $P \approx \sigma_0^2$.
Since the lattice points are not equally probable a priori in lattice Gaussian signaling, we will use MAP decoding. The following connection was proven in [5] for the special case $c = 0$. For completeness, we extend the proof to the general case.

Proposition 2 (Equivalence between MAP decoding and MMSE lattice decoding). Let $x \sim D_{L-c,\sigma_0}$ be the input signaling of an AWGN channel where the noise variance is $\sigma^2$ per dimension. Then MAP decoding is equivalent to Euclidean lattice decoding of $L - c$ using a scaling coefficient $\alpha = \frac{\sigma_0^2}{\sigma_0^2 + \sigma^2}$, which is asymptotically equal to the MMSE coefficient $\frac{P}{P + \sigma^2}$.
Proof: The received signal is given by $y = x + w$, where $x \in L - c$ and $w$ is the i.i.d. Gaussian noise vector of variance $\sigma^2$ per dimension. Thus the MAP decoding metric is given by
$$P(x|y) = \frac{P(x,y)}{P(y)} \propto P(y|x)P(x) \propto \exp\left(-\frac{\|y-x\|^2}{2\sigma^2} - \frac{\|x\|^2}{2\sigma_0^2}\right) \propto \exp\left(-\frac{1}{2} \cdot \frac{\sigma_0^2+\sigma^2}{\sigma_0^2\sigma^2} \left\| \frac{\sigma_0^2}{\sigma_0^2+\sigma^2}\, y - x \right\|^2\right).$$
Therefore,
$$\arg\max_{x \in L-c} P(x|y) = \arg\min_{x \in L-c} \left\| \frac{\sigma_0^2}{\sigma_0^2+\sigma^2}\, y - x \right\|^2 = \arg\min_{x \in L-c} \|\alpha y - x\|^2 \qquad (5)$$
where $\alpha = \frac{\sigma_0^2}{\sigma_0^2+\sigma^2}$ is known, thanks to Lemma 2, to be asymptotically equal to the MMSE coefficient $\frac{P}{P+\sigma^2}$.

Therefore, the MAP decoder is simply given by
$$\hat{x} = Q_{L-c}(\alpha y)$$
where $Q_{L-c}$ denotes, in a similar fashion to $Q_L$, the minimum-Euclidean-distance decoder for the shifted lattice $L - c$.
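For the cubic lattice $L = \mathbb{Z}^{n_L}$ the decoder $\hat{x} = Q_{L-c}(\alpha y)$ reduces to component-wise rounding on the shifted lattice. The following is a hypothetical sketch of our own (the function names are assumptions), not the paper's implementation:

```python
def mmse_coefficient(sigma0_sq, sigma_sq):
    """alpha = sigma0^2 / (sigma0^2 + sigma^2) of Proposition 2."""
    return sigma0_sq / (sigma0_sq + sigma_sq)

def map_decode_Zn(y, c, sigma0_sq, sigma_sq):
    """MAP decoding for a codebook Z^n - c: scale the channel output by
    alpha, then apply the nearest-neighbor quantizer of the shifted
    lattice, which for Z^n - c is round(u + c) - c component-wise."""
    alpha = mmse_coefficient(sigma0_sq, sigma_sq)
    return [round(alpha * yi + ci) - ci for yi, ci in zip(y, c)]
```

With $\sigma_0^2 \gg \sigma^2$ (so $\alpha \approx 1$) and small noise, the decoder returns the transmitted shifted lattice point.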
B. Error Probability
Let us analyze the average error probability of the MAP decoder. Suppose $x \in L - c$ is sent. The received signal after MMSE scaling can be written as
$$\alpha y = \alpha(x + w) = x + (\alpha - 1)x + \alpha w.$$
The decoding error probability associated with $x$ is given by
$$P_e(x) = 1 - \int_{x+\mathcal{V}(L)} \frac{1}{(\sqrt{2\pi}\alpha\sigma)^{n_L}} \exp\left(-\frac{\|y - \alpha x\|^2}{2\alpha^2\sigma^2}\right) dy$$
$$= 1 - \int_{\mathcal{V}(L)} \frac{1}{(\sqrt{2\pi}\alpha\sigma)^{n_L}} \exp\left(-\frac{\|y - (\alpha-1)x\|^2}{2\alpha^2\sigma^2}\right) dy$$
$$= \int_{\bar{\mathcal{V}}(L)} \frac{1}{(\sqrt{2\pi}\alpha\sigma)^{n_L}} \exp\left(-\frac{\|y - (\alpha-1)x\|^2}{2\alpha^2\sigma^2}\right) dy$$
$$\sum_{x \in L-c} \exp\left(-\frac{\|y + x\|^2}{2\frac{\sigma_0^4}{\sigma_0^2+\sigma^2}}\right) \in \left[1 - \epsilon_L\!\left(\frac{\sigma_0^2}{\sqrt{\sigma_0^2+\sigma^2}}\right),\ 1 + \epsilon_L\!\left(\frac{\sigma_0^2}{\sqrt{\sigma_0^2+\sigma^2}}\right)\right] \left(\frac{\sqrt{2\pi}\,\sigma_0^2}{\sqrt{\sigma_0^2+\sigma^2}}\right)^{n_L} \frac{1}{V(L)}. \qquad (7)$$
$$P_e \in \left[\frac{1 - \epsilon_L\!\left(\frac{\sigma_0^2}{\sqrt{\sigma_0^2+\sigma^2}}\right)}{1 + \epsilon_L(\sigma_0)},\ \frac{1 + \epsilon_L\!\left(\frac{\sigma_0^2}{\sqrt{\sigma_0^2+\sigma^2}}\right)}{1 - \epsilon_L(\sigma_0)}\right] \frac{1}{(2\pi\alpha\sigma\sigma_0)^{n_L}} \left(\frac{\sqrt{2\pi}\,\sigma_0^2}{\sqrt{\sigma_0^2+\sigma^2}}\right)^{n_L} \int_{\bar{\mathcal{V}}(L)} \exp\left(-\frac{\|y\|^2}{2\frac{\sigma_0^2\sigma^2}{\sigma_0^2+\sigma^2}}\right) dy$$
$$= \left[\frac{1 - \epsilon_L\!\left(\frac{\sigma_0^2}{\sqrt{\sigma_0^2+\sigma^2}}\right)}{1 + \epsilon_L(\sigma_0)},\ \frac{1 + \epsilon_L\!\left(\frac{\sigma_0^2}{\sqrt{\sigma_0^2+\sigma^2}}\right)}{1 - \epsilon_L(\sigma_0)}\right] \frac{1}{\left(\sqrt{2\pi}\,\frac{\sigma_0\sigma}{\sqrt{\sigma_0^2+\sigma^2}}\right)^{n_L}} \int_{\bar{\mathcal{V}}(L)} \exp\left(-\frac{\|y\|^2}{2\frac{\sigma_0^2\sigma^2}{\sigma_0^2+\sigma^2}}\right) dy$$
$$\overset{(a)}{\approx} \frac{1}{\left(\sqrt{2\pi}\,\frac{\sigma_0\sigma}{\sqrt{\sigma_0^2+\sigma^2}}\right)^{n_L}} \int_{\bar{\mathcal{V}}(L)} \exp\left(-\frac{\|y\|^2}{2\frac{\sigma_0^2\sigma^2}{\sigma_0^2+\sigma^2}}\right) dy \ \overset{(b)}{=}\ \frac{1}{(\sqrt{2\pi}\,\bar{\sigma})^{n_L}} \int_{\bar{\mathcal{V}}(L)} \exp\left(-\frac{\|y\|^2}{2\bar{\sigma}^2}\right) dy. \qquad (8)$$
where $\bar{\mathcal{V}}(L)$ denotes the complement of the Voronoi region $\mathcal{V}(L)$ in $\mathbb{R}^{n_L}$.
The average decoding error probability is given by
$$P_e = \sum_{x \in L-c} \frac{1}{(\sqrt{2\pi}\sigma_0)^{n_L}} \frac{e^{-\frac{\|x\|^2}{2\sigma_0^2}}}{f_{\sigma_0,c}(L)}\, P_e(x)$$
$$= \sum_{x \in L-c} \frac{1}{(\sqrt{2\pi}\sigma_0)^{n_L}} \frac{e^{-\frac{\|x\|^2}{2\sigma_0^2}}}{f_{\sigma_0,c}(L)} \int_{\bar{\mathcal{V}}(L)} \frac{1}{(\sqrt{2\pi}\alpha\sigma)^{n_L}} \exp\left(-\frac{\|y - (\alpha-1)x\|^2}{2\alpha^2\sigma^2}\right) dy$$
$$= \frac{1}{(2\pi\alpha\sigma\sigma_0)^{n_L} f_{\sigma_0,c}(L)} \sum_{x \in L-c} \int_{\bar{\mathcal{V}}(L)} e^{-\frac{\|x\|^2}{2\sigma_0^2}} \exp\left(-\frac{\|y - (\alpha-1)x\|^2}{2\alpha^2\sigma^2}\right) dy$$
$$= \frac{1}{(2\pi\alpha\sigma\sigma_0)^{n_L} f_{\sigma_0,c}(L)} \sum_{x \in L-c} \int_{\bar{\mathcal{V}}(L)} \exp\left(-\frac{\frac{\sigma_0^2}{\sigma^2}\|y\|^2 + \|y + x\|^2}{2\frac{\sigma_0^4}{\sigma_0^2+\sigma^2}}\right) dy$$
$$= \frac{1}{(2\pi\alpha\sigma\sigma_0)^{n_L} f_{\sigma_0,c}(L)} \int_{\bar{\mathcal{V}}(L)} \exp\left(-\frac{\|y\|^2}{2\frac{\sigma_0^2\sigma^2}{\sigma_0^2+\sigma^2}}\right) \sum_{x \in L-c} \exp\left(-\frac{\|y + x\|^2}{2\frac{\sigma_0^4}{\sigma_0^2+\sigma^2}}\right) dy. \qquad (6)$$
Now the key observation is that, by Lemma 1, the infinite sum over $L$ within the above integral is almost a constant for any $y$ and any $c$, as shown in (7) at the top of this page. Substituting (7) back into (6), and noting that $f_{\sigma_0,c}(L) \in \left[1 - \epsilon_L(\sigma_0),\ 1 + \epsilon_L(\sigma_0)\right] \frac{1}{V(L)}$, we derive the expression of $P_e$ as shown in (8) at the top of this page, where (a) holds under the conditions $\epsilon_L\!\left(\frac{\sigma_0^2}{\sqrt{\sigma_0^2+\sigma^2}}\right) \to 0$ and $\epsilon_L(\sigma_0) \to 0$, and in (b) we define $\bar{\sigma} \triangleq \frac{\sigma_0\sigma}{\sqrt{\sigma_0^2+\sigma^2}}$.
But (8) is just the error probability of standard lattice decoding for noise variance $\bar{\sigma}^2$. Since $L$ is good for AWGN, $P_e$ will vanish if
$$V(L)^{2/n_L} > 2\pi e\,\bar{\sigma}^2. \qquad (9)$$
Moreover, it means that the average error probability is almost the same as that of Poltyrev [1], including the error exponent etc. (with $\sigma^2$ replaced by $\bar{\sigma}^2$). More precisely, the error probability is bounded by
$$P_e \lesssim e^{-n_L E_P(\mu)}. \qquad (10)$$
In the previous formula, $E_P(\mu)$ denotes the Poltyrev exponent
$$E_P(\mu) = \begin{cases} \frac{1}{2}\left[(\mu - 1) - \log\mu\right], & 1 < \mu \le 2 \\ \frac{1}{2}\log\frac{e\mu}{4}, & 2 \le \mu \le 4 \\ \frac{\mu}{8}, & \mu \ge 4 \end{cases} \qquad (11)$$
where $\mu = \frac{\gamma_L(\bar{\sigma})}{2\pi e}$.
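The piecewise exponent (11) is continuous at the breakpoints $\mu = 2$ and $\mu = 4$, which is easy to verify numerically. A minimal sketch of our own (the function name is an assumption), using natural logarithms:

```python
import math

def poltyrev_exponent(mu):
    """Piecewise Poltyrev exponent E_P(mu) of (11), defined for mu > 1."""
    if mu <= 1:
        raise ValueError("the Poltyrev exponent requires mu > 1")
    if mu <= 2:
        return 0.5 * ((mu - 1) - math.log(mu))
    if mu <= 4:
        return 0.5 * math.log(math.e * mu / 4.0)
    return mu / 8.0
```

At $\mu = 2$ both the first and second branches give $\frac{1}{2}(1 - \log 2)$, and at $\mu = 4$ both the second and third give $\frac{1}{2}$, so the exponent increases continuously with the VNR margin.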
We need to satisfy the conditions $\epsilon_L\!\left(\frac{\sigma_0^2}{\sqrt{\sigma_0^2+\sigma^2}}\right) \to 0$ and $\epsilon_L(\sigma_0) \to 0$. Obviously, the first condition subsumes the second one. So, for mod-$p$ lattices, we can satisfy it by making
$$\gamma_L\!\left(\frac{\sigma_0^2}{\sqrt{\sigma_0^2+\sigma^2}}\right) = V(L)^{2/n_L}\, \frac{\sigma_0^2+\sigma^2}{\sigma_0^4} < 2\pi. \qquad (12)$$
Conditions (9) and (12) are compatible if
$$\sigma_0^2 > e\sigma^2, \qquad (13)$$
which is a very mild condition; i.e., the SNR is larger than $e$.
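Condition (13) is exactly the requirement that the window of admissible fundamental volumes between (9) and (12) is nonempty, which a short numerical check makes concrete (our own sketch, hypothetical function names):

```python
import math

def volume_window(sigma0_sq, sigma_sq):
    """Interval of admissible V(L)^(2/n_L): condition (9) gives the lower
    end 2*pi*e*sigma0^2*sigma^2/(sigma0^2+sigma^2), while condition (12)
    gives the upper end 2*pi*sigma0^4/(sigma0^2+sigma^2)."""
    s = sigma0_sq + sigma_sq
    lo = 2 * math.pi * math.e * sigma0_sq * sigma_sq / s
    hi = 2 * math.pi * sigma0_sq ** 2 / s
    return lo, hi

def conditions_compatible(sigma0_sq, sigma_sq):
    """The window is nonempty iff sigma0^2 > e * sigma^2, i.e. (13)."""
    lo, hi = volume_window(sigma0_sq, sigma_sq)
    return lo < hi
```

For instance, with $\sigma^2 = 1$ the window is nonempty at $\sigma_0^2 = 3 > e$ but empty at $\sigma_0^2 = 2 < e$.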
C. Rate
Now, to satisfy the volume constraint (9), we choose the fundamental volume $V(L)$ such that $V(L)^{2/n_L} = 2\pi e\,\bar{\sigma}^2 (1 + \epsilon'')$ for some $\epsilon'' \to 0$.
By Lemma 2, we have
$$\sigma_0^2 \ge \frac{1}{1 + \frac{2\pi\varepsilon}{n_L(1-\varepsilon)}}\, P, \qquad (14)$$
where $\varepsilon = \epsilon_L\big(\sigma_0/\sqrt{\pi/(\pi-1)}\big) < 1$.
By Lemma 3, the rate of the code is given by
$$R \ge \log(\sqrt{2\pi e}\,\sigma_0) - \frac{1}{n_L}\log V(L) - \frac{\epsilon'}{n_L}$$
$$= \log(\sqrt{2\pi e}\,\sigma_0) - \frac{1}{2}\log\left(2\pi e\,\frac{\sigma_0^2\sigma^2}{\sigma_0^2+\sigma^2}\right) - \frac{1}{2}\log(1+\epsilon'') - \frac{\epsilon'}{n_L}$$
$$= \frac{1}{2}\log\left(1 + \frac{\sigma_0^2}{\sigma^2}\right) - \frac{1}{2}\log(1+\epsilon'') - \frac{\epsilon'}{n_L}$$
$$\ge \frac{1}{2}\log\left(1 + \frac{\mathsf{SNR}}{1 + \frac{2\pi\varepsilon}{n_L(1-\varepsilon)}}\right) - \frac{1}{2}\log(1+\epsilon'') - \frac{\epsilon'}{n_L}$$
$$\ge \frac{1}{2}\log(1+\mathsf{SNR}) - \frac{1}{2}\log\left(1 + \frac{2\pi\varepsilon}{n_L(1-\varepsilon)}\right) - \frac{1}{2}\log(1+\epsilon'') - \frac{\epsilon'}{n_L}$$
$$\ge \frac{1}{2}\log(1+\mathsf{SNR}) - \frac{\pi\varepsilon}{n_L(1-\varepsilon)} - \frac{1}{2}\log(1+\epsilon'') - \frac{\epsilon'}{n_L}.$$
Thus,
$$R \ge \frac{1}{2}\log(1+\mathsf{SNR}) - \frac{\pi\varepsilon}{n_L(1-\varepsilon)} - \frac{1}{2}\log(1+\epsilon'') - \frac{\epsilon'}{n_L} \qquad (15)$$
$$\to \frac{1}{2}\log(1+\mathsf{SNR})$$
if $\varepsilon < 1$ and $n_L \to \infty$. Moreover, the probability that a codeword drawn from the lattice Gaussian distribution falls outside the sphere of radius $\sqrt{2n_L}\,\sigma_0$ is bounded as
$$P\left\{\|x\| > \sqrt{2n_L}\,\sigma_0\right\} < \frac{1 + \epsilon_L(\sigma_0)}{1 - \epsilon_L(\sigma_0)}\, 2^{-n_L}. \qquad (16)$$
Therefore, as long as $\epsilon_L(\sigma_0)$ is bounded by a constant, the outer points need not be sent, and the sent points can be drawn from a sphere of radius $\sqrt{2n_L}\,\sigma_0$. The peak power is $2n_L P$ asymptotically, which is larger than that of the Voronoi constellation by a factor of 2. Thus, in this respect, the lattice Gaussian codebook is not much different from a finite constellation.
ACKNOWLEDGMENTS
This work was supported in part by FP7 project PHYLAWS.
REFERENCES
[1] G. Poltyrev, "On coding without restrictions for the AWGN channel," IEEE Trans. Inform. Theory, vol. 40, pp. 409-417, Mar. 1994.
[2] U. Erez and R. Zamir, "Achieving 1/2 log(1+SNR) on the AWGN channel with lattice encoding and decoding," IEEE Trans. Inform. Theory, vol. 50, no. 10, pp. 2293-2314, Oct. 2004.
[3] G. Forney and L.-F. Wei, "Multidimensional constellations. Part I: Introduction, figures of merit, and generalized cross constellations," IEEE J. Sel. Areas Commun., vol. 7, no. 6, pp. 877-892, Aug. 1989.
[4] F. R. Kschischang and S. Pasupathy, "Optimal nonuniform signaling for Gaussian channels," IEEE Trans. Inform. Theory, vol. 39, pp. 913-929, May 1993.
[5] C. Ling, L. Luzzi, J.-C. Belfiore, and D. Stehlé, "Semantically secure lattice codes for the Gaussian wiretap channel," 2012. [Online]. Available: http://arxiv.org/abs/1210.6673
[6] R. Zamir, "Lattice Coding for Signals and Networks: Application and Design," tutorial at ISIT 2012, MIT, USA.
[7] G. D. Forney, "On the role of MMSE estimation in approaching the information-theoretic limits of linear Gaussian channels: Shannon meets Wiener," in Proceedings of the 41st Allerton Conference on Communication, Control and Computing, 2003, pp. 430-439.
[8] J. H. Conway and N. J. A. Sloane, Sphere Packings, Lattices, and Groups, 3rd ed. New York: Springer-Verlag, 1998.
[9] H. A. Loeliger, "Averaging bounds for lattices and linear codes," IEEE Trans. Inform. Theory, vol. 43, pp. 1767-1773, Nov. 1997.
[10] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York: Wiley, 1991.
[11] G. Forney, M. Trott, and S.-Y. Chung, "Sphere-bound-achieving coset codes and multilevel coset codes," IEEE Trans. Inform. Theory, vol. 46, no. 3, pp. 820-850, May 2000.
[12] R. Zamir and M. Feder, "On lattice quantization noise," IEEE Trans. Inform. Theory, vol. 42, no. 4, pp. 1152-1159, 1996.
[13] D. Micciancio and O. Regev, "Worst-case to average-case reductions based on Gaussian measures," in Proc. Ann. Symp. Found. Computer Science, Rome, Italy, Oct. 2004, pp. 372-381.