In binary (0, 1) form, the stored vectors

$$ A = \begin{bmatrix} 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 \end{bmatrix} $$

give the correlation matrix

$$ A^t A = \begin{bmatrix} 2 & 1 & 1 & 1 & 1 & 0 \\ 1 & 2 & 1 & 1 & 0 & 1 \\ 1 & 1 & 2 & 0 & 1 & 1 \\ 1 & 1 & 0 & 2 & 1 & 1 \\ 1 & 0 & 1 & 1 & 2 & 1 \\ 0 & 1 & 1 & 1 & 1 & 2 \end{bmatrix} $$

whereas in bipolar form (each 0 mapped to −1), with

$$ B = \begin{bmatrix} -1 & -1 & -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & -1 & -1 & 1 \\ 1 & -1 & 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & 1 & -1 & -1 \end{bmatrix} $$

the correlation matrix becomes

$$ C_b = B^t B = \begin{bmatrix} 4 & 0 & 0 & 0 & 0 & -4 \\ 0 & 4 & 0 & 0 & -4 & 0 \\ 0 & 0 & 4 & -4 & 0 & 0 \\ 0 & 0 & -4 & 4 & 0 & 0 \\ 0 & -4 & 0 & 0 & 4 & 0 \\ -4 & 0 & 0 & 0 & 0 & 4 \end{bmatrix} $$
IET Nanobiotechnol., 2009, Vol. 3, Iss. 4, pp. 81–102
doi: 10.1049/iet-nbt.2009.0002 © The Institution of Engineering and Technology 2009
www.ietdl.org
states. Also, if a total of N bits in the form of 1 or −1, excluding zeros, is known, then it can be postulated that ⌈N/2⌉ + 1 is the minimum number of correct bits required in a probe vector to retrieve an information vector from the memory [4, 5]. Therefore the minimum number of bits, N_m, required to retrieve an entity can be computed as

$$ N_m = \left\lceil \frac{N}{2} \right\rceil + 1 \tag{5} $$

where ⌈·⌉ is the ceiling function.
For N = 1 or 2, the minimum number of bits is N_m = 2.

For reliable operation, the signal-to-noise ratio, γ = SNR, must be much greater than 1. The number of memory vectors M that can be stored and retrieved is given as

$$ M = \left\lfloor \frac{N}{\gamma^{2}} \right\rfloor \tag{6} $$

where ⌊·⌋ is the floor function and γ = √(N/M) [25, 26]. For N = 2 and γ > 1, (6) gives M = 1. Therefore, with M = 1, the minimum length in bits of a probe vector is 2 or greater.
However, error-correction capability is vital for any associative memory. For correction of one error, the minimum Hamming distance among the stored vectors must be at least 3, but N_m = 2 does not provide any error-correction ability.
5 Structure and implementation of a conventional sequential BAM
Let X and Y be associated sets of M bipolar vectors of lengths N_x and N_y bits, respectively. Each of the X^m and Y^m vectors forms an associated pair, for all m = 1, 2, ..., M. The bidirectional memory matrix W of a conventional BAM is constructed by performing the outer-product operation on the two associated sets X and Y of bipolar binary vectors.

Next, for the purpose of comparison, the storage algorithm, the retrieval process, the SNR, the storage capacity, the interconnection requirements and the stability analysis of the traditional sequential BAM are carried out.
5.1 BAM's storage algorithm
The storage algorithm for the memory matrix W of a conventional BAM can be written in the expanded form as

$$ W = \begin{bmatrix} X_1^1 \\ X_2^1 \\ \vdots \\ X_{N_x}^1 \end{bmatrix} \begin{bmatrix} Y_1^1 & Y_2^1 & \cdots & Y_{N_y}^1 \end{bmatrix} + \cdots + \begin{bmatrix} X_1^M \\ X_2^M \\ \vdots \\ X_{N_x}^M \end{bmatrix} \begin{bmatrix} Y_1^M & Y_2^M & \cdots & Y_{N_y}^M \end{bmatrix} \tag{7} $$

The memory matrix W can be written in the compact form as

$$ W = \sum_{m=1}^{M} X^m Y^{m,t} \tag{8} $$

$$ W^t = \sum_{m=1}^{M} Y^m X^{m,t} \tag{9} $$

where t signifies the transpose of a vector or matrix. Elementwise,

$$ W_{ij} = \sum_{m=1}^{M} X_i^m Y_j^{m,t}, \qquad i = 1, \ldots, N_x, \quad j = 1, \ldots, N_y \tag{10} $$

where

$$ X^m = (X_1^m, \ldots, X_i^m, \ldots, X_{N_x}^m) \tag{11} $$

and

$$ Y^m = (Y_1^m, \ldots, Y_j^m, \ldots, Y_{N_y}^m) \tag{12} $$
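The storage prescription of (8)–(10) can be sketched in a few lines of code; the vectors below are illustrative random placeholders, not data from this paper.

```python
import numpy as np

# Outer-product storage of (8): W = sum_m X^m (Y^m)^t, with W_ij as in (10).
rng = np.random.default_rng(0)
M, Nx, Ny = 4, 15, 10                      # M stored pairs of lengths Nx, Ny
X = rng.choice([-1, 1], size=(M, Nx))      # bipolar vectors X^1 ... X^M (rows)
Y = rng.choice([-1, 1], size=(M, Ny))      # associated bipolar vectors Y^1 ... Y^M

W = sum(np.outer(X[m], Y[m]) for m in range(M))

assert W.shape == (Nx, Ny)                 # one weight per (i, j) interconnection
assert np.array_equal(W, X.T @ Y)          # compact matrix form of the same sum
```

The final assertion records that the sum of outer products is just the matrix product X^t Y, which is used repeatedly in the analysis below.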
5.2 Retrieval analysis
Let X̂^k be an imperfect probe vector that is closest, in terms of Hamming distance, to X^k, which forms an association pair with Y^k and is one of the stored pairs of vectors.

The retrieval process starts at an initial time t_0, when the probe vector X̂^k is applied as input to the memory matrix W, and it terminates when the current state after p iterations becomes equal to the previous state at the (p−1)th iteration, where p = 1, ..., P is the iteration index. The total output estimate, Ỹ^k_i, of the ith bit of Y^k is given as

$$ \tilde{Y}_i^k = \mathrm{Sgn}\left( \sum_{j=1}^{N_x} W_{ij} \hat{X}_j^k \right) \tag{13} $$

or

$$ \tilde{Y}_i^k = \mathrm{Sgn}\left( \sum_{j=1}^{N_x} \left( \sum_{m=1}^{M} Y_i^m X_j^m \right) \hat{X}_j^k \right) \tag{14} $$
The conventional BAM is sequential in nature and is shown
in Fig. 1.
Similarly, when Ŷ^k, an imperfect probe vector closest, in terms of Hamming distance, to Y^k (which forms an association pair with X^k and is one of the stored vectors), is applied, the estimate of the ith bit, X̃^k_i, is given as

$$ \tilde{X}_i^k = \mathrm{Sgn}\left( \sum_{j=1}^{N_y} W_{ij} \hat{Y}_j^k \right) = \mathrm{Sgn}\left( \sum_{j=1}^{N_y} \left( \sum_{m=1}^{M} X_i^m Y_j^m \right) \hat{Y}_j^k \right) \tag{15} $$
The next states X̃^k_i and Ỹ^k_i are given as

$$ \tilde{Y}_i^k(t + p\,\Delta t) = \begin{cases} -1 & \text{if } \displaystyle\sum_{j=1}^{N_x} W_{ij} \hat{X}_j^k < 0 \\ +1 & \text{otherwise} \end{cases} \tag{16} $$

Similarly

$$ \tilde{X}_i^k(t + p\,\Delta t) = \begin{cases} -1 & \text{if } \displaystyle\sum_{j=1}^{N_y} W_{ij} \hat{Y}_j^k < 0 \\ +1 & \text{otherwise} \end{cases} \tag{17} $$
The iterative process terminates after p iterations, when the estimates of the current states are equal to the previous states

$$ \tilde{X}_i^k(t_0 + p\,\Delta t) = \tilde{X}_i^k(t_0 + (p-1)\,\Delta t) \tag{18} $$

$$ \tilde{Y}_i^k(t_0 + p\,\Delta t) = \tilde{Y}_i^k(t_0 + (p-1)\,\Delta t) \tag{19} $$
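The retrieval loop of (13)–(19) can be sketched as follows. The stored pairs and the probe are illustrative random placeholders, and convergence to the intended stored pair is not guaranteed for arbitrary data.

```python
import numpy as np

# Bidirectional recall: pass the probe through W to estimate Y, pass that
# back through W^t to re-estimate X, and stop when the state repeats, as in
# the stability conditions (18)-(19).
rng = np.random.default_rng(1)
M, Nx, Ny = 3, 16, 12
X = rng.choice([-1, 1], size=(M, Nx))
Y = rng.choice([-1, 1], size=(M, Ny))
W = X.T @ Y                                # memory matrix from (8)

def sgn(v):
    # thresholding of (16)-(17): -1 for negative sums, +1 otherwise
    return np.where(v < 0, -1, 1)

x = X[0].copy()
x[:2] *= -1                                # imperfect probe, Hamming distance 2
y_prev = None
for p in range(50):                        # p = 1, ..., P iterations
    y = sgn(W.T @ x)                       # estimate of Y^k, as in (13)
    x = sgn(W @ y)                         # estimate of X^k, as in (15)
    if y_prev is not None and np.array_equal(y, y_prev):
        break                              # states repeat: retrieval terminates
    y_prev = y
```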
5.3 SNR and capacity analysis of a conventional BAM
Equation (14) can be written in the form

$$ \tilde{Y}_i^k = \mathrm{Sgn}\left( Y_i^k \sum_{j=1}^{N_x} X_j^k \hat{X}_j^k + \sum_{j=1}^{N_x} \sum_{\substack{m=1 \\ m \ne k}}^{M} Y_i^m X_j^m \hat{X}_j^k \right) \tag{20} $$
Let h_x be the Hamming distance between the probe vector X̂^k and the corresponding stored vector X^k. Therefore (20) can be written in the form

$$ \tilde{Y}_i^k = \mathrm{Sgn}\left( (N_x - 2h_x)\, Y_i^k + \sum_{j=1}^{N_x} \sum_{\substack{m=1 \\ m \ne k}}^{M} Y_i^m X_j^m \hat{X}_j^k \right) \tag{21} $$
In (21), the first term is the signal and the second term is noise [25, 26].

Next, note that the components of the stored vectors are statistically independent. According to the central limit theorem [27, 28], for large N and M the second term in (21) is a noise term consisting of a sum of N_x(M−1) independent and identically distributed (i.i.d.) random variables, each of which is +1 or −1 with equal probability 1/2; it can be approximated by a Gaussian distribution with mean zero and variance σ² given as

$$ \sigma^2 = N_x (M - 1) \tag{22} $$
The SNR is given as

$$ \mathrm{SNR} = \frac{S}{\sigma} = \frac{N_x - 2h_x}{\sqrt{N_x (M-1)}} = \sqrt{\frac{N_x}{M-1}} \left( 1 - \frac{2h_x}{N_x} \right) \tag{23} $$

and for N_x, M ≫ 1, with h_x ≈ 0,

$$ \mathrm{SNR} \approx \frac{N_x - 2h_x}{\sqrt{N_x M}} \approx \frac{N_x}{\sqrt{N_x M}} = \sqrt{\frac{N_x}{M}} \tag{24} $$
In sequential BAMs, an X input produces Y as output, and a Y input produces X as output, but not both together. Therefore the conventional BAM is sequential in nature, and the signal component of the SNR is directly proportional to the square root of the minimum of the lengths N_x and N_y of the vectors in sets X and Y. Therefore, letting N = min(N_x, N_y), this gives

$$ \mathrm{SNR} = \sqrt{\frac{N}{M}} \tag{25} $$

Figure 1 Structure of a conventional BAM
Note that doubling the length, L = N_x + N_y = 2N, increases the SNR by a factor of √2. Consequently, the performance of the sequential BAMs commonly proposed [7, 8, 10, 11, 17, 24] falls far short of Hopfield's condition, and therefore the quality of performance of traditional BAMs is degraded by a factor of about √2 compared with a Hopfield-type memory. Thereby, for improved performance, the continuity restriction is commonly proposed.
Next, the analysis of the storage capacity of M binary vectors as a function of their length, N bits (the minimum of N_x and N_y), is carried out using the same approach as given in [25]. The storage capacity of M vectors as a function of their N-bit length is given as

$$ M = \frac{N}{2 \log N} \tag{26} $$

All logs are natural logs.
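A quick numeric reading of (25) and (26) is given below; the lengths N are assumed example values, not data from the paper.

```python
import math

def snr(N, M):
    # SNR of a sequential BAM with error-free probes, from (25)
    return math.sqrt(N / M)

def capacity(N):
    # storage capacity M = N / (2 log N), from (26); logs are natural
    return N / (2 * math.log(N))

for N in (64, 256, 1024):
    M = capacity(N)
    print(f"N = {N:5d}: M ~ {M:6.1f}, SNR at that load ~ {snr(N, M):.2f}")
```

At the load of (26) the SNR equals √(2 log N), so operating at full capacity leaves an SNR that grows only slowly with the vector length.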
5.4 Stability analysis of the traditional sequential BAM
The retrieval process in a traditional sequential BAM of outer-product type only includes the coupled associativity that forms the two-way sequential search. For correct recall, every stored memory pattern pair (X^m, Y^m), m = 1, ..., M, should form a local minimum on the energy surface. Therefore it must be shown that every stored memory pattern pair is asymptotically stable. The BAM is a simple variant of the Hopfield model, and therefore its memory matrix can be represented as a symmetric memory matrix; to analyse the stability characteristics, an energy function is constructed using the quadratic form approach.
Let each vector pair (X^m, Y^m) be concatenated in series to form a set of (N_x + N_y) = L bit long compound vectors Z^m defined as

$$ Z^m = [\, X^m \;\vdots\; Y^m \,], \qquad m = 1, \ldots, M \tag{27} $$

where ⋮ is the concatenation operator.

Using the outer-product approach, the memory matrix of size L × L is constructed as

$$ [T] = \sum_{m=1}^{M} Z^m Z^{m,t} \tag{28} $$
In order to analyse the various characteristics of the BAM, its expanded version is obtained as

$$ [T] = \begin{bmatrix} X \\ Y \end{bmatrix} \begin{bmatrix} X^t & Y^t \end{bmatrix} = \begin{bmatrix} [X X^t] & [X Y^t] \\ [Y X^t] & [Y Y^t] \end{bmatrix} \tag{29a} $$

$$ [T] = \begin{bmatrix} T_{N_x, N_x} & W_{N_x, N_y} \\ W^t_{N_y, N_x} & T_{N_y, N_y} \end{bmatrix} \tag{29b} $$

where

$$ X^m = (X_1^m, \ldots, X_i^m, \ldots, X_{N_x}^m) $$
$$ Y^m = (Y_1^m, \ldots, Y_j^m, \ldots, Y_{N_y}^m) $$
$$ Z^m = [\, X^m \;\vdots\; Y^m \,] = (Z_1^m, \ldots, Z_i^m, \ldots, Z_L^m) $$
The energy function of the memory matrix [T] given by (29a) may be defined as

$$ E(Z) = E(X \cup Y) = -\frac{1}{2} \begin{bmatrix} X^t \;\vdots\; Y^t \end{bmatrix} \begin{bmatrix} [X X^t] & [X Y^t] \\ [Y X^t] & [Y Y^t] \end{bmatrix} \begin{bmatrix} X \\ Y \end{bmatrix} \tag{30} $$

where E(X ∪ Y) = E(X, Y).
In the BAM, the interconnections of the self sub-matrices A = [X X^t] and B = [Y Y^t], which constitute intrafield connections, are ignored; as a result, it loses its parallelism property, its SNR, storage capacity, performance and reliability characteristics erode, and its functionality reduces to that of a conventional sequential BAM, which is always constrained by the continuity requirement. Therefore, replacing the self-associative memory matrices [X X^t] and [Y Y^t] in (30) with null matrices [0], and letting [X Y^t] = [W] and [Y X^t] = [W^t], the energy function in (30) is given as
$$ E(X, Y) = -\frac{1}{2} \begin{bmatrix} X^t \;\vdots\; Y^t \end{bmatrix} \begin{bmatrix} 0 & W \\ W^t & 0 \end{bmatrix} \begin{bmatrix} X \\ Y \end{bmatrix} \tag{31} $$

The energy function in (31) can be written in the decomposed form as

$$ E(X, Y) = -\frac{1}{2} \left( X^t W Y + Y^t W^t X \right) \tag{32} $$
The retrieval process in a traditional sequential BAM is initiated when an imperfect probe vector X̂ is given as input, and the estimate Ỹ is produced at the output as

$$ \tilde{Y} = \mathrm{Sgn}\left( W^t \hat{X} \right) = \mathrm{Sgn}\left( \left( \sum_{m=1}^{M} Y^m X^{m,t} \right) \hat{X} \right) \tag{33} $$
The output estimate of the ith bit, Ỹ_i, is given as

$$ \tilde{Y}_i = \mathrm{Sgn}\left( \sum_{m=1}^{M} Y_i^m \left( \sum_{j=1}^{N_x} X_j^{m,t} \hat{X}_j \right) \right) \tag{34} $$

Similarly

$$ \tilde{X}_i = \mathrm{Sgn}\left( \sum_{m=1}^{M} X_i^m \left( \sum_{j=1}^{N_y} Y_j^{m,t} \hat{Y}_j \right) \right) \tag{35} $$
For these recall conditions, the energy function from (32) can be written as [6, 23]

$$ E(X, Y) = -\frac{1}{2} \left( X^t \left( \sum_{m=1}^{M} X^m Y^{m,t} \right) Y + Y^t \left( \sum_{m=1}^{M} Y^m X^{m,t} \right) X \right) \tag{36} $$

or

$$ E(X, Y) = -\frac{1}{2} \left( X^t \sum_{m=1}^{M} X^m \left( \sum_{j=1}^{N_y} Y_j^{m,t} Y_j \right) + Y^t \sum_{m=1}^{M} Y^m \left( \sum_{j=1}^{N_x} X_j^{m,t} X_j \right) \right) \tag{37} $$
Let

$$ \alpha_m = \sum_{j=1}^{N_x} X_j^{m,t} X_j \quad \text{and} \quad \beta_m = \sum_{j=1}^{N_y} Y_j^{m,t} Y_j \tag{38} $$

α_m and β_m can be perceived as weighting functions, computed as the inner product of a probe vector X̂ or Ŷ with the respective stored vector X^m or Y^m.
The conventional BAM functions as a sequential network
and its sequential operation is shown in Fig. 1.
Assume that E(X′, Y) is the energy of the next state, in which Y stays the same as in the previous state. For the recall process of the pair (X, Y), the change in energy, ΔE_x, is given as

$$ \Delta E_x = E(X') - E(X) \tag{39} $$
Using (37) and ignoring the factor 1/2,

$$ \Delta E_x = -X' \sum_{m=1}^{M} X^m \left( \sum_{j=1}^{N_y} Y_j^m Y_j \right) + X \sum_{m=1}^{M} X^m \left( \sum_{j=1}^{N_y} Y_j^m Y_j \right) = -(X' - X) \sum_{m=1}^{M} X^m \left( \sum_{j=1}^{N_y} Y_j^m Y_j \right) \tag{40} $$
It follows from (34) and (35) that

$$ \left| \sum_{m=1}^{M} X^m \left( \sum_{j=1}^{N_y} Y_j^m Y_j \right) \right| = X' \sum_{m=1}^{M} X^m \left( \sum_{j=1}^{N_y} Y_j^m Y_j \right) \tag{41} $$

where the quantity enclosed in vertical bars, | · |, designates the absolute value and is, therefore, positive.
Let

$$ |\lambda| = \left| \sum_{m=1}^{M} X^m \left( \sum_{j=1}^{N_y} Y_j^m Y_j \right) \right| \tag{42} $$

Therefore ΔE_x can be written as

$$ \Delta E_x \le -|\lambda| \, X'(X' - X) \tag{43} $$
Now, the examination of the ith bit in (43) shows that

(i) If (X′_i − X_i) > 0, then X′_i = 1 and X_i = −1; this gives ΔE_x < 0.

(ii) If (X′_i − X_i) < 0, then X′_i = −1 and X_i = 1; this implies that ΔE_x < 0.

(iii) If (X′_i − X_i) = 0, then ΔE_x = 0.

Therefore it follows from conditions (i)–(iii) that ΔE_x ≤ 0.
Similarly, for (Y′_i − Y_i), it can be shown that ΔE_y is decreasing, that is, ΔE(X, Y) ≤ 0.

The difference between the energies of the current state E(X, Y) and the next state E(X′, Y) is negative. Since E(X, Y) is bounded, |E(X, Y)| ≤ M N_y N_x for all X and Y, the sequential BAM converges to a stable point where the energy forms a local minimum.
6 Structure and implementation requirements of the IBAM
In the proposed direct storage sequential bidirectional memory model, IBAM, the vector pairs (X^m, Y^m), m = 1, ..., M, are directly stored in the form of two (M × N_x) and (M × N_y) memory matrices A and B, respectively. This direct storage arrangement is functionally equivalent to the outer-product format, but the actual outer-product operation is not performed. The unique characteristics of the direct storage model permit the addition or deletion of vectors in both sets X and Y without affecting the other stored vectors. As shown in Fig. 2, the lengths N_x and N_y of the stored vectors in sets X and Y can be increased or decreased. Therefore this memory model is modular and expandable. It has a flexible structure and is capable of implementing any-to-any directional mapping; this paper considers only the bidirectional mapping.
6.1 Storage prescription
Let X and Y be two associated sets of M bipolar vectors in which each pair (X^m, Y^m) forms an associated pair of vectors for all m = 1, 2, ..., M, and the lengths of the vectors in sets X and Y are N_x and N_y bits, respectively. These M bipolar vectors, N_x and N_y bits long, are directly stored in memory matrices A and B^t, which are constructed as follows

$$ A = [X] = [\, X^1 \; \cdots \; X^m \; \cdots \; X^M \,] \tag{44} $$

$$ B = [Y] = [\, Y^1 \; \cdots \; Y^m \; \cdots \; Y^M \,] \tag{45} $$

and

$$ X^m = (X_1^m \; X_2^m \; \cdots \; X_i^m \; \cdots \; X_{N_x}^m), \quad Y^m = (Y_1^m \; Y_2^m \; \cdots \; Y_j^m \; \cdots \; Y_{N_y}^m), \qquad i = 1, \ldots, N_x, \quad j = 1, \ldots, N_y, \quad m = 1, \ldots, M \tag{46} $$
One or both of the vectors forming an associated pair (X^m, Y^m) can be invoked by using either of the imperfect probe vectors X̂ or Ŷ, or both.
6.2 Retrieval process
Let X̂^k and Ŷ^k be the initial probe vectors closest to the stored vector pair (X^k, Y^k). The estimate of the ith bit, X̃^k_i, of the desired stored vector X^k is obtained, using (44)–(46), as

$$ \tilde{X}_i^k = \mathrm{Sgn}\left( A_i \left[ B_j^t \hat{Y}_j^k \right] \right) \tag{47} $$

or

$$ \tilde{X}_i^k = \mathrm{Sgn}\left( \sum_{m=1}^{M} X_i^m \left( \sum_{j=1}^{N_y} Y_j^m \hat{Y}_j^k \right) \right) \tag{48} $$

where t represents the matrix transpose and m = 1, ..., M.
Similarly, the estimate of the ith bit, Ỹ^k_i, of the desired vector Y^k may be obtained as

$$ \tilde{Y}_i^k = \mathrm{Sgn}\left( B_i \left[ A_j^t \hat{X}_j^k \right] \right) \tag{49} $$

or

$$ \tilde{Y}_i^k = \mathrm{Sgn}\left( \sum_{m=1}^{M} Y_i^m \left( \sum_{j=1}^{N_x} X_j^m \hat{X}_j^k \right) \right) \tag{50} $$
Figure 2 IBAM is modular and expandable

The probe vectors X̂ and Ŷ are given as input to the memory
matrices [A] and [B], which constitute the preprocessing blocks A_x and A_y, in which the weighting coefficients α_m and β_m are computed from (48) and (50) as

$$ \alpha_m = \sum_{j=1}^{N_x} X_j^m \hat{X}_j = N_x - 2h_m \tag{51} $$

$$ \beta_m = \sum_{j=1}^{N_y} Y_j^m \hat{Y}_j = N_y - 2h_m, \qquad m = 1, \ldots, M \tag{52} $$
Using (51) and (52), (48) and (50) can be written as

$$ \tilde{X}_i^k = \mathrm{Sgn}\left( \sum_{m=1}^{M} X_i^m \beta_m \right) \tag{53} $$

$$ \tilde{Y}_i^k = \mathrm{Sgn}\left( \sum_{m=1}^{M} Y_i^m \alpha_m \right) \tag{54} $$
The weighting coefficients α_m and β_m so computed become the input to the second set of memory matrices A and B, which are designated as association selector blocks B_x and B_y, to select the output vectors X̃ and Ỹ, respectively. The implementation structure of the A_x and B_x blocks is shown in Figs. 2 and 3.
Therefore the evolution equations of the IBAM can be written as

$$ \tilde{Y}_i^k = \begin{cases} +1 & \text{if } \displaystyle\sum_{m=1}^{M} Y_i^m \alpha_m > 0 \\ -1 & \text{if } \displaystyle\sum_{m=1}^{M} Y_i^m \alpha_m \le 0 \end{cases} \tag{55} $$

$$ \tilde{X}_i^k = \begin{cases} +1 & \text{if } \displaystyle\sum_{m=1}^{M} X_i^m \beta_m > 0 \\ -1 & \text{if } \displaystyle\sum_{m=1}^{M} X_i^m \beta_m \le 0 \end{cases} \tag{56} $$
This is the direct storage model of the sequential bidirectional
memory (IBAM).
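A minimal sketch of the direct storage recall of (51)–(56) follows; the data are illustrative random placeholders, and A and B play the roles of the stored matrices of (44)–(45).

```python
import numpy as np

# Direct storage IBAM recall: no outer-product matrix is ever formed.
rng = np.random.default_rng(2)
M, Nx, Ny = 4, 20, 12
A = rng.choice([-1, 1], size=(M, Nx))      # stored X^1 ... X^M, as in (44)
B = rng.choice([-1, 1], size=(M, Ny))      # stored Y^1 ... Y^M, as in (45)

x_hat = A[1].copy()
x_hat[:3] *= -1                            # probe with Hamming distance h = 3

alpha = A @ x_hat                          # alpha_m = X^m . x_hat, as in (51)
y_est = np.where(B.T @ alpha > 0, 1, -1)   # evolution equation (55)

beta = B @ y_est                           # beta_m = Y^m . y_est, as in (52)
x_est = np.where(A.T @ beta > 0, 1, -1)    # evolution equation (56)

assert alpha[1] == Nx - 2 * 3              # alpha_k = N_x - 2h, from (51)
```

Storage here is just row-stacking, which is what makes adding or deleting a pair, or lengthening the vectors, possible without disturbing the other stored vectors.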
6.3 Stability analysis

Figure 3 Implementation structure of IBAM

The functionality of the intraconnected direct storage bidirectional memory is based on a non-linear feedback network. For correct recall, every stored memory pattern pair (X^m, Y^m), m = 1, ..., M, should form a local minimum on the energy surface. It must be shown that every stored memory pattern pair is asymptotically stable. For a sequential model, the next state of the pattern pair (X, Y), after the first iteration, is defined from (48)
and (50) as

$$ \tilde{X} = \mathrm{Sgn}\left( \sum_{m=1}^{M} X^m f\left( \sum_{j=1}^{N_y} Y_j^m Y_j \right) \right) \tag{57} $$

$$ \tilde{Y} = \mathrm{Sgn}\left( \sum_{m=1}^{M} Y^m g\left( \sum_{j=1}^{N_x} X_j^m X_j \right) \right) \tag{58} $$
For these recall conditions, a possible energy function may be defined as [23]

$$ E(X, Y) = -\left( \sum_{m=1}^{M} X^m f\left( \sum_{j=1}^{N_y} Y_j^m Y_j \right) \right) X - \left( \sum_{m=1}^{M} Y^m g\left( \sum_{j=1}^{N_x} X_j^m X_j \right) \right) Y \tag{59} $$

where f(·) and g(·) are weighting functions, computed as the inner product of a probe vector X or Y with the respective mth stored vector X^m or Y^m.
Assume that E(X′, Y) is the energy of the next state, in which Y stays the same as in the previous state. For the recall process of the pair (X, Y), the change in energy, ΔE_x, is given as

$$ \Delta E_x = E(X') - E(X) \tag{60} $$
or

$$ \Delta E_x = -\sum_{m=1}^{M} X^m f\left( \sum_{j=1}^{N_y} Y_j^m Y_j \right) X' + \sum_{m=1}^{M} X^m f\left( \sum_{j=1}^{N_y} Y_j^m Y_j \right) X = -\sum_{m=1}^{M} X^m f\left( \sum_{j=1}^{N_y} Y_j^m Y_j \right) (X' - X) \tag{61} $$
It follows from (57) and (58) that

$$ \left| \sum_{m=1}^{M} X^m f\left( \sum_{j=1}^{N_y} Y_j^m Y_j \right) \right| = X' \sum_{m=1}^{M} X^m f\left( \sum_{j=1}^{N_y} Y_j^m Y_j \right) \tag{62} $$

where the quantity enclosed in vertical bars designates the absolute value and is represented by |Z|.
Let

$$ |Z| = \left| \sum_{m=1}^{M} X^m f\left( \sum_{j=1}^{N_y} Y_j^m Y_j \right) \right| \tag{63} $$

Therefore ΔE_x can be written as

$$ \Delta E_x \le -|Z| \, X_i'(X_i' - X_i) \tag{64} $$
Now, the examination of (64) shows that

(i) If (X′_i − X_i) > 0, then X′_i = 1 and X_i = −1; this gives ΔE_x < 0.

(ii) If (X′_i − X_i) < 0, then X′_i = −1 and X_i = 1; this implies that ΔE_x < 0.

(iii) If (X′_i − X_i) = 0, then ΔE_x = 0.

Therefore it follows from conditions (i)–(iii) that ΔE_x ≤ 0.
Similarly, for (Y′_i − Y_i), it can be shown that ΔE_y is decreasing, that is, ΔE(X, Y) ≤ 0.

The difference between the energies of the current state E(X, Y) and the next state E(X′, Y) is negative. Since E(X, Y) is bounded, |E(X, Y)| ≤ M N_y N_x for all X and Y, the IBAM converges to a stable point where the energy forms a local minimum.
6.4 Analysis of SNR and the storage capacity
The analysis and estimate of the capacity of the IBAM is carried out using the SNR [23, 25, 26]. The amount of noise, or the number of errors, in a binary probe vector X̂ may be measured, in terms of Hamming distance, as the number of bit positions whose values do not match the bit values in the corresponding positions of a stored vector. It follows from (50) that

$$ \tilde{Y}_i^k = \mathrm{Sgn}\left( \sum_{m=1}^{M} Y_i^m \left( \sum_{j=1}^{N_x} X_j^m \hat{X}_j \right) \right) \tag{65} $$
Assume that the probe vector X̂ is closest to X^k, one of the stored vectors. Then (50) can be written as

$$ \tilde{Y}_i^k = \mathrm{Sgn}\left( Y_i^k \left( \sum_{j=1}^{N_x} X_j^k \hat{X}_j \right) + \sum_{\substack{m=1 \\ m \ne k}}^{M} Y_i^m \left( \sum_{j=1}^{N_x} X_j^m \hat{X}_j \right) \right) \tag{66} $$
It is assumed that each bit in any of the M N-bit long vectors is the outcome of a Bernoulli trial of being −1 or 1. Equivalently, all of the M N-bit bipolar binary vectors are chosen randomly.
Using (54), (66) can be written as

$$ \tilde{Y}_i^k = \mathrm{Sgn}\left( \sum_{m=1}^{M} Y_i^m \alpha_m \right) = \mathrm{Sgn}\left( Y_i^k \alpha_k + \sum_{\substack{m=1 \\ m \ne k}}^{M} Y_i^m \alpha_m \right) \tag{67} $$
In the first term of (67), Y^k_i is the ith bit of the stored vector Y^k, and it can be either 1 or −1. Without loss of generality, assume that Y^k_i = 1. The first term in (66) and (67) is the signal and the second term is noise. These are written as

$$ S_k + Z_m = Y_i^k \alpha_k + \sum_{\substack{m=1 \\ m \ne k}}^{M} Y_i^m \alpha_m \tag{68} $$
The signal power is given as

$$ S_k = \alpha_k = Y_i^k \left( N_x - 2h_k \right) \tag{69} $$

where h_k = h is the Hamming distance between the probe vector X̂ and X^k, the desired stored vector that is closest, in terms of Hamming distance, to the probe vector X̂.
In general, the second term in (68) is perceived as noise and consists of a sum of (M−1) i.i.d. random variables. Next, since the probe vector X̂ is closest to the stored memory vector X^k, all other (M−1) stored vectors must have at least one bit greater Hamming distance, h + 1 bits, from the probe vector X̂.
Note that the (M−1) vectors are N bits long, and they form 2^N possible combinations. One such combination [27, 28] is given as

$$ Y^m = Y_1^m \, Y_2^m \cdots Y_{i-1}^m \, Y_i^m \, Y_{i+1}^m \cdots Y_N^m \tag{70} $$

Therefore the probability of selecting one of these possible memory vectors is

$$ \mathrm{Prob}\left[ Y^m \right] = \left( \frac{1}{2} \right)^N \tag{71} $$

As the N-bit long binary vectors are stored in bipolar form, any probe vector that is less than N/2 bits away [1, 18, 26], in terms of Hamming distance, from the desired stored vector can be used for retrieval.
The mean and variance of the second term are (M−1) times the mean and variance of a single variable. These random variables are defined as

$$ v_i = Y_i^m \alpha_m, \qquad m \ne k, \quad m = 1, \ldots, k-1, k+1, \ldots, M \tag{72} $$

Let

$$ \alpha_1 = \alpha_2 = \cdots = \alpha_M = \alpha \tag{73} $$

For the (M−1) vectors, it follows from (51) that

$$ \alpha = N - 2(h + 1) \tag{74} $$

Therefore

$$ v_1 = Y_i^1 \alpha, \quad v_2 = Y_i^2 \alpha, \quad \ldots, \quad v_M = Y_i^M \alpha \tag{75} $$
Since all of the random variables have the same characteristics, it is sufficient to compute the mean and variance of a single variable, v_1. Next, the probability distribution of a single random variable v_1 can be computed as

$$ \mathrm{Prob}\left[ v_1 = +\left( N - 2(h+1) \right) \right] = \left( \frac{1}{2} \right)^N \binom{N-1}{h} $$

$$ \mathrm{Prob}\left[ v_1 = -\left( N - 2(h+1) \right) \right] = \left( \frac{1}{2} \right)^N \binom{N-1}{h}, \qquad h = 0, 1, \ldots, N-1 $$

where h is the Hamming distance between X^m and X̂.

Since the random variable v_1 assumes positive and negative values with equal probability, the mean value of v_1 is zero, or

$$ E(v_1) = 0 $$
The variance of v_1 is computed as

$$ E\left( v_1^2 \right) = 2 \sum_{h=0}^{N-1} \left[ N - 2(h+1) \right]^2 \left( \frac{1}{2} \right)^N \binom{N-1}{h} $$

Let N′ = N − 1; then E(v_1²) can be written as

$$ E\left( v_1^2 \right) = 2 \sum_{h=0}^{N'} \left[ (N' - 1) - 2h \right]^2 \left( \frac{1}{2} \right)^N \binom{N'}{h} $$

$$ E\left( v_1^2 \right) = 2 \left( \frac{1}{2} \right)^N \sum_{h=0}^{N'} \left[ (N' - 1)^2 + (2h)^2 - 2 \cdot 2h (N' - 1) \right] \binom{N'}{h} \tag{76} $$
The expression in (76) is evaluated [27, 28] as

(a) $\displaystyle \sum_{h=0}^{N'} (N'-1)^2 \binom{N'}{h} = (N'-1)^2 \, 2^{N'}$

(b) $\displaystyle \sum_{h=0}^{N'} 4h^2 \binom{N'}{h} = 4 N'(N'+1) \, 2^{N'-2} = N'(N'+1) \, 2^{N'}$

(c) $\displaystyle 4(N'-1) \sum_{h=0}^{N'} h \binom{N'}{h} = 4(N'-1) \, N' \, 2^{N'-1} = 2N'(N'-1) \, 2^{N'}$
Substituting (a)–(c) into (76), the result is obtained as

$$ E\left( v_1^2 \right) = 2 \left( \frac{1}{2} \right)^N \left[ (N'-1)^2 + N'(N'+1) - 2N'(N'-1) \right] 2^{N'} $$

or

$$ E\left( v_1^2 \right) = 2 \left( \frac{1}{2} \right)^N \left( N' + 1 \right) 2^{N'} = 2 \left( \frac{1}{2} \right)^N N \, 2^{N-1} = N $$
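The collapse of (76) to E(v_1²) = N can be checked numerically; the brute-force evaluation below is purely illustrative.

```python
from math import comb

# Second moment of v1 under Prob[v1 = +/-(N - 2(h+1))] = (1/2)^N C(N-1, h):
# the binomial sums (a)-(c) reduce it to exactly N.
def second_moment(N):
    return 2 * (0.5 ** N) * sum(
        (N - 2 * (h + 1)) ** 2 * comb(N - 1, h) for h in range(N)
    )

for N in (2, 5, 16, 31):
    assert abs(second_moment(N) - N) < 1e-9
```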
Therefore the mean and variance of the second term are (M−1) times the mean and variance of a single variable:

$$ \sigma_{Z_m}^2 = (M - 1) N \tag{77} $$
Therefore the capacity of the IBAM in terms of the SNR is given as

$$ \mathrm{SNR} = \frac{\text{signal power}}{\sqrt{2 \left( \frac{1}{2} \right)^N 2^{N-1} N (M-1)}} = \frac{N - 2h}{\sqrt{(M-1)N}} \approx \frac{N - 2h}{\sqrt{MN}} \tag{79} $$

From (79), it follows that for a Hamming distance of h = N/2 the SNR and storage capacity reduce to zero, and they are maximum when h = 0.
With a Hamming distance of h = 0, the SNR from (79) may be approximated as

$$ \mathrm{SNR} = \frac{S}{\sigma} = \frac{N - 2h}{\sqrt{MN}} \approx \sqrt{\frac{N}{M}} \tag{80} $$
Let M′ be the number of stored vectors when the probe vector has no error; then h = 0, and the SNR is given as

$$ \mathrm{SNR}^2 = \frac{N^2}{(M' - 1)N} \approx \frac{N}{M'} \tag{81} $$

Therefore, to maintain the same recall capability when h errors are present in the probe vector, the SNRs in (79) and (81) must be equal. Therefore

$$ \frac{N}{M'} = \frac{(N - 2h)^2}{MN} \quad \Longrightarrow \quad M = \left( 1 - \frac{2h}{N} \right)^2 M' \tag{82} $$

where M′ is the number of stored vectors when the probe vector has no error.
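The capacity degradation of (82) can be read off numerically; N and M′ below are assumed example values, not data from the paper.

```python
# Capacity degradation of (82): h probe errors shrink the retrievable load
# from the error-free load M' by the factor (1 - 2h/N)^2.
def degraded_capacity(M_prime, N, h):
    return (1 - 2 * h / N) ** 2 * M_prime

N, M_prime = 100, 8
for h in (0, 5, 10, 25):
    print(f"h = {h:2d}: M = {degraded_capacity(M_prime, N, h):.2f}")
```

At h = N/2 the retrievable load reaches zero, matching the SNR behaviour noted after (79).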
Equation (82) shows that the storage capacity M reduces as a square function of the number of errors h, and that the effect of h reduces as an inverse function of the length N, in bits, of the stored vectors.
Next, the storage capacity of M binary vectors as a function of their length, N bits, can be estimated using the approach given in [25]. Therefore the capacity M is given as

$$ M = \frac{N}{2 \log N} \tag{83} $$

Also, following Hopfield [1, 18], the M stored vectors are related to their N-bit length as

$$ \frac{N}{7} \le M \le \frac{N}{6} \approx 0.15N \tag{84} $$

Note that the storage capacity increases if the length N, in bits, of the stored vectors is increased.
7 Implementation and computational comparison of IBAM, BAM and HAM
The implementation, functional and computational comparison of the proposed direct storage bidirectional memory model, IBAM, is carried out, and it is shown that the proposed IBAM has far lower implementation and computational requirements and far superior performance, and that it is functionally equivalent to the sequential, intraconnected and other models of BAMs of outer-product type, and to the concatenated memory of Hopfield type, HAM. The superiority of the IBAM is demonstrated by means of examples.
7.1 Functional and structural equivalence of IBAM and IAM with BAM and HAM
It is shown that the IBAM is functionally equivalent to a broad class of traditional sequential bidirectional memories of outer-product type. It follows from (48) and (50) that

$$ \tilde{X}_i^k = \mathrm{Sgn}\left( \sum_{m=1}^{M} X_i^m \left( \sum_{j=1}^{N_y} Y_j^m \hat{Y}_j^k \right) \right) \tag{85} $$
Equation (85) can be rewritten as

$$ \tilde{X}_i^k = \mathrm{Sgn}\left( \sum_{j=1}^{N_y} \left( \sum_{m=1}^{M} X_i^m Y_j^m \right) \hat{Y}_j^k \right) \tag{86} $$

or

$$ \tilde{X}_i^k = \mathrm{Sgn}\left( \sum_{j=1}^{N_y} \left[ W_{ij}^t \right] \hat{Y}_j^k \right) \tag{87} $$

where

$$ \left[ W^t \right] = \sum_{m=1}^{M} X^m Y^{m,t} \tag{88} $$
In an auto-associative memory of Hopfield type, a set X consisting of M N-bit long bipolar binary vectors is stored. The Hopfield-type associative memory, HAM, is constructed by performing the outer-product operation on the M bipolar binary vectors with themselves. Therefore, replacing the M vectors in set Y with the M vectors in set X in (85), the recall through the memory matrix T of the HAM can be written as

$$ \tilde{X}_i^k = \mathrm{Sgn}\left( \sum_{m=1}^{M} X_i^m \left( \sum_{j=1}^{N_x} X_j^m \hat{X}_j^k \right) \right) \tag{89} $$
Following the same steps as those for the BAM, (89) can be written as

$$ \tilde{X}_i^k = \mathrm{Sgn}\left( \sum_{j=1}^{N_x} \left( \sum_{m=1}^{M} X_i^m X_j^m \right) \hat{X}_j^k \right) \tag{90} $$

Equation (90) can be written as

$$ \tilde{X}_i^k = \mathrm{Sgn}\left( \sum_{j=1}^{N_x} \left[ T_{ij} \right] \hat{X}_j^k \right) \tag{91} $$

where T is the memory matrix of Hopfield type.
Therefore the direct storage memory, the improved auto-associative memory (IAM) given in (89), is equivalent to the Hopfield-type memory, HAM, given in (91). If, in the analysis of IBAM and BAM, the Y set of vectors is replaced with the X set of vectors, then the analysis for IAM and HAM is the same as that for the IBAM and BAM.

Note that the memory matrices [W] and [W^t] given in (87) and (88) for the IBAM are the same as those given by (8) and (9); these constitute the direct storage IBAM, which is functionally equivalent to, and possesses all the attributes of, the traditional sequential bidirectional memory of outer-product type commonly reported in the literature. The SNR and storage capacity of the IBAM, as given by (80) and (83), are equal to those of the BAM given by (25) and (26).
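The equivalence argued in (85)–(88) is easy to confirm numerically: weighting the stored vectors by inner products with the probe (the IBAM form) gives bit-for-bit the same recall as multiplying by the assembled outer-product matrix (the BAM form). The data below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
M, Nx, Ny = 5, 18, 14
X = rng.choice([-1, 1], size=(M, Nx))
Y = rng.choice([-1, 1], size=(M, Ny))

y_hat = Y[2].copy()
y_hat[:2] *= -1                            # imperfect probe for Y^k

# IBAM form, as in (85): Sgn( sum_m X^m_i (Y^m . y_hat) ), no matrix built
x_ibam = np.where(X.T @ (Y @ y_hat) > 0, 1, -1)

# Outer-product form, as in (87): Sgn( sum_j W_ij y_hat_j ) with W = X^t Y
W = X.T @ Y
x_bam = np.where(W @ y_hat > 0, 1, -1)

assert np.array_equal(x_ibam, x_bam)       # identical recall by associativity
```

The assertion holds for any data, since X^t (Y y_hat) and (X^t Y) y_hat are the same product grouped differently; only the operation counts differ.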
7.2 Comparison of the convergence process
In the direct storage BAM the energy E(X, Y) is bounded by |E(X, Y)| ≤ M N_y N_x, whereas the energy in an outer-product-type BAM is bounded by |E(X, Y)| ≤ 2M N_y N_x. Clearly, the energy level in outer-product-type memories is much higher than that in direct storage-type memories, and the lower energy levels stabilise at a relatively faster rate than the higher energy levels. Consequently, the direct storage BAM considered in this paper will converge faster than the memories of outer-product type. Therefore the stability characteristics of the IBAM are superior to those of the BAM or HAM (Hopfield-type associative memory).
7.3 Implementation and computational requirements of a conventional BAM
The construction of a traditional BAM of outer-product type requires two (N_x × N_y) memory matrices, W and W^t, which consist of 2(N_x N_y), that is O(N²), interconnections with weight strengths ranging between ±M; each interconnection needs log₂ M + 1 bits of memory storage, where M is the number of stored vectors.
In each complete cycle of recall, both the matrix W and its transpose W^t are used in each iteration; therefore the construction of the matrix W for M stored bipolar vectors requires (M−1)N_x N_y additions, M N_x N_y multiplications and 2N_x N_y interconnections. In each iteration when a probe vector Ŷ^k is used, the estimate X̃^k of the corresponding stored vector X^k is obtained, as given in (92). Similarly, when the input probe vector X̂^k is used, the estimate Ỹ^k of the corresponding stored vector Y^k is obtained at the output. This completes one complete cycle.
$$ \begin{bmatrix} \tilde{X}_1^k \\ \tilde{X}_2^k \\ \vdots \\ \tilde{X}_{N_x}^k \end{bmatrix} = \mathrm{Sgn}\left( \begin{bmatrix} W_{11} & W_{12} & \cdots & W_{1 N_y} \\ W_{21} & W_{22} & \cdots & W_{2 N_y} \\ \vdots & & & \vdots \\ W_{N_x 1} & W_{N_x 2} & \cdots & W_{N_x N_y} \end{bmatrix} \begin{bmatrix} \hat{Y}_1^k \\ \hat{Y}_2^k \\ \vdots \\ \hat{Y}_{N_y}^k \end{bmatrix} \right) \tag{92} $$
This operation requires N_x N_y multiplications and N_x(N_y − 1) additions. Now assume that P iterations are required to achieve stability. Then a complete cycle requires 2P N_x N_y multiplications and 2P N_x(N_y − 1) additions. Thus the conventional BAM models require (M + 2P) N_x N_y multiplications, (M + 2P − 1) N_x N_y − 2P N_x additions and 2N_x N_y interconnections.
7.4 Implementation and computational requirements of an IBAM
The construction of the direct storage sequential BAM (IBAM) requires 2M(N_x + N_y), that is O(N), or about 30% of the interconnections, with weight strengths ranging between ±1, and each interconnection needs 2 bits of memory storage. It is computationally very efficient compared with the sequential, intraconnected and other models of BAMs of outer-product type. It has a simpler, modular and expandable structure, and is implementable in VLSI, optics, nanotechnology and software. The implementation structure of the IBAM is shown in Figs. 2 and 3.
The proposed direct storage model is an improved bidirectional memory that requires the computation of M α coefficients and M β coefficients. The α coefficients require M N_x multiplications and M(N_x − 1) additions, and the β coefficients require M N_y multiplications and M(N_y − 1) additions. So, for P iterations required to achieve stability, a complete cycle requires 2PM(N_y + N_x) multiplications and 2PM(N_x + N_y − 2) additions. The concatenated bidirectional memory of Hopfield type will require P(N_x + N_y)² multiplications.
7.5 Examples: comparison of implementation and computational performance of IBAM, IAM, BAM and HAM
Let M = 15, P = 20, N_x = 60 bits and N_y = 40 bits. The interconnection requirements, range of weight strengths and multiplication operations of these models are computed. This clearly demonstrates the superiority of performance and the simplicity of the structural implementation of the IBAM proposed in this paper, as shown in Table 1.
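The counts of Sections 7.3 and 7.4 can be evaluated directly for these parameters; the figures below follow from those formulas, not from Table 1 itself.

```python
# Operation counts for M = 15, P = 20, N_x = 60, N_y = 40
M, P, Nx, Ny = 15, 20, 60, 40

bam_mults = (M + 2 * P) * Nx * Ny                 # conventional BAM, Section 7.3
bam_adds = (M + 2 * P - 1) * Nx * Ny - 2 * P * Nx
bam_wires = 2 * Nx * Ny                           # interconnections

ibam_mults = 2 * P * M * (Ny + Nx)                # direct storage IBAM, Section 7.4
ibam_adds = 2 * P * M * (Nx + Ny - 2)
ibam_wires = 2 * M * (Nx + Ny)

print(f"BAM : {bam_mults} mults, {bam_adds} adds, {bam_wires} weighted links")
print(f"IBAM: {ibam_mults} mults, {ibam_adds} adds, {ibam_wires} weighted links")
```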
7.5.1 Example 1: comparative performance of IBAM and BAM: Consider, as an example of a direct storage bidirectional memory, one that stores the associated sets X and Y consisting of the four pairs of bipolar vectors given in (93a) and (93b). The probe vector X̂ that is closest, in terms of Hamming distance, to the stored vector X^1, which forms the pair (X^1, Y^1), is given as

$$ \hat{X} = [\, 1\; 0\; 1\; 0\; 1\; 0\; 1\; 0\; 0\; 0\; 0\; 0\; 0\; 0\; 0 \,] \tag{94} $$
The alpha coefficients are computed as

$$ \alpha_1 = 4, \quad \alpha_2 = 0, \quad \alpha_3 = 2, \quad \alpha_4 = 0 $$

and the retrieval sequence is

$$ \hat{X} \;\xrightarrow{\alpha}\; Y^1 \;\xrightarrow{\beta}\; X^1 \;\xrightarrow{\alpha}\; Y^1 $$

Clearly, the vectors X^1 and Y^1 that form an associated pair are obtained, in about three iterations, as

$$ X^1 = [\, 1\; 0\; 1\; 0\; 1\; 0\; 1\; 0\; 1\; 0\; 1\; 0\; 1\; 0\; 1 \,], \qquad Y^1 = [\, 1\; 1\; 1\; 1\; 0\; 0\; 0\; 0\; 1\; 1 \,] $$
In order to construct the sequential BAM, W, of outer-
product type, the outer-product operation is performed on
X [ ] =
X
1
X
2
X
3
X
4
_
_
_
_
_
_
=
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
_
_
_
_
_
_
(93a)
Y = [Y_1 Y_2 Y_3 Y_4]^T, the 4 × 10 array of associated bipolar vectors    (93b)
the vectors in sets X and Y. The memory matrix W thus
obtained is given in (95)
W =
4 2 2 2 0 2 0 2 4 0
2 0 0 4 2 0 2 0 2 2
2 0 0 0 2 0 2 4 2 2
2 4 0 0 2 0 2 0 2 2
0 2 2 2 4 2 0 2 0 0
2 0 0 0 2 0 2 4 2 2
0 2 2 2 0 2 4 2 0 4
2 0 4 0 2 4 2 0 2 2
4 2 2 2 0 2 0 2 4 0
0 2 2 2 0 2 4 2 0 4
0 2 2 2 0 2 0 2 0 0
2 4 0 0 2 0 2 0 2 2
2 4 0 0 2 0 2 0 2 2
0 0 2 2 0 2 0 2 0 0
0 0 2 2 0 2 4 2 0 4
(95)
Using the same probe vector given in (94), the retrieval
sequence is given as

X̂ → Y_1 → X_1 → Y_1

So in about three iterations, the correct stored pair (X_1, Y_1) is
obtained.
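The outer-product construction and this bidirectional retrieval sequence can be sketched as follows (small hypothetical pairs, not the paper's data; resolving threshold ties to +1 is a convention assumed here):

```python
def sign(v):
    # Hard bipolar threshold; ties resolved to +1 (an assumed convention)
    return [1 if x >= 0 else -1 for x in v]

def outer_bam(X, Y):
    """W = sum over m of the outer products X_m^T Y_m."""
    Nx, Ny, M = len(X[0]), len(Y[0]), len(X)
    return [[sum(X[m][i] * Y[m][j] for m in range(M)) for j in range(Ny)]
            for i in range(Nx)]

def bam_retrieve(x, W, iterations=3):
    """Bidirectional passes: y = sgn(xW), then x = sgn(yW^T), repeated."""
    Nx, Ny = len(W), len(W[0])
    for _ in range(iterations):
        y = sign([sum(x[i] * W[i][j] for i in range(Nx)) for j in range(Ny)])
        x = sign([sum(y[j] * W[i][j] for j in range(Ny)) for i in range(Nx)])
    return x, y

# Hypothetical associated pairs, for illustration only
X = [[1, -1, 1, -1], [-1, -1, 1, 1]]
Y = [[1, 1, -1], [-1, 1, 1]]
W = outer_bam(X, Y)
x_out, y_out = bam_retrieve([1, -1, 1, -1], W)
# x_out == X[0] and y_out == Y[0]: the stored pair is recalled
```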
For the case of the HAM, the two sets of vectors X and Y are first
concatenated and the outer-product operation is then
performed to obtain the memory matrix T. It is assumed that
retrieval will take about three iterations, the same as needed for the BAM.
The comparative implementation and computational
requirements are tabulated in Table 1.
7.5.2 Example 2: comparative performance of
IBAM, IAM and HAM: In order to demonstrate the
implementation and computational requirements for the
storage and retrieval of information in a completely
connected auto-associative memory, consider a set of
M = 4 bipolar vectors [X_1 X_2 X_3 X_4], each 20 bits long
(N = 20), given as [19] (see (96)).
The corresponding memory matrix T is computed as
shown in (97).
Consider a probe vector X̂_k given as

X̂_k = [0 1 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0]    (98)
The retrieval process is initiated when the probe vector
X̂_k is applied as input to the memory matrix T.
The iterative retrieval process using the X̂_k in (98) gives the
result after the first iteration as
[X_1 X_2 X_3 X_4]^T, the 4 × 20 array of stored bipolar vectors    (96)
T =
4 2 0 2 2 2 0 2 0 4 0 4 0 0 2 2 2 0 4 0
2 4 2 0 0 0 2 0 2 2 2 2 2 2 4 0 0 2 2 2
0 2 4 2 2 2 4 2 0 0 4 0 4 0 2 2 2 0 0 0
2 0 2 4 0 0 2 0 2 2 2 2 2 2 0 4 0 2 2 2
2 0 2 0 4 4 2 0 2 2 2 2 2 2 0 0 4 2 2 2
2 0 2 0 4 4 2 0 2 2 2 2 2 2 0 0 4 2 2 2
0 2 4 2 2 2 4 2 0 0 4 0 4 0 2 2 2 0 0 0
2 0 2 0 0 0 2 4 2 2 2 2 2 2 0 0 0 2 2 2
0 2 0 2 2 2 0 2 4 0 0 0 0 4 2 2 2 4 0 4
4 2 0 2 2 2 0 2 0 4 0 4 0 0 2 2 2 0 4 0
0 2 4 2 2 2 4 2 0 0 4 0 4 0 2 2 2 0 0 0
4 2 0 2 2 2 0 2 0 4 0 4 0 0 2 2 2 0 4 0
0 2 4 2 2 2 4 2 0 0 4 0 4 0 2 2 2 0 0 0
0 2 0 2 2 2 0 2 4 0 0 0 0 4 2 2 2 4 0 4
2 4 2 0 0 0 2 0 2 2 2 2 2 2 4 0 0 2 2 2
2 0 2 4 0 0 2 0 2 2 2 2 2 2 0 4 0 2 2 2
2 0 2 0 4 4 2 0 2 2 2 2 2 2 0 0 4 2 2 2
0 2 0 2 2 2 0 2 4 0 0 0 0 4 2 2 2 4 0 4
4 2 0 2 2 2 0 2 0 4 0 4 0 0 2 2 2 0 4 0
0 2 0 2 2 2 0 2 4 0 0 0 0 4 2 2 2 4 0 4
(97)
First iteration:
[2 4 6 4 8 4 6 4 2 2 10 2 10 2 0 8 8 2 2 2]
After thresholding:
[0 0 1 1 0 1 1 0 1 0 1 0 0 0 1 0 1 1 1 0]
Second iteration:
[4 4 12 4 16 12 12 12 8 4 12 4 16 12 0 8 12 8 0 12]
After thresholding:
[0 1 1 1 0 1 1 0 1 0 1 0 0 0 1 0 1 1 1 0]
The stable state is reached as
[0 1 1 1 0 1 1 0 1 0 1 0 0 0 1 0 1 1 1 0]
This is one of the stored vectors, and it is obtained in three
iterations.
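The thresholded iterative retrieval traced above can be sketched as follows, using hypothetical 8-bit patterns and the common 0/1 ↔ ±1 conversion (the paper lists its vectors in 0/1 display form; mapping display zeros to −1 inputs is a simplifying assumption here):

```python
def to_bipolar(v):
    # 0/1 display form -> bipolar working form (assumption: 0 maps to -1)
    return [2 * b - 1 for b in v]

def threshold(v):
    # Hard threshold back to 0/1 display form
    return [1 if x > 0 else 0 for x in v]

def build_T(patterns01):
    """Outer-product (Hopfield-type) memory matrix T from 0/1 patterns."""
    P = [to_bipolar(p) for p in patterns01]
    N = len(P[0])
    return [[sum(p[i] * p[j] for p in P) for j in range(N)] for i in range(N)]

def retrieve(probe01, T, max_iter=10):
    """Iterate x <- threshold(bipolar(x) . T) until a stable state is reached."""
    x = probe01
    for _ in range(max_iter):
        xb = to_bipolar(x)
        nxt = threshold([sum(xb[i] * T[i][j] for i in range(len(T)))
                         for j in range(len(T))])
        if nxt == x:
            break
        x = nxt
    return x

# Two hypothetical stored patterns; the probe carries only the first four bits
stored = [[1, 0, 1, 0, 1, 0, 1, 0], [1, 1, 0, 0, 1, 1, 0, 0]]
T = build_T(stored)
result = retrieve([1, 0, 1, 0, 0, 0, 0, 0], T)
# result == stored[0]: the full pattern is recovered
```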
Note that the memory matrix T is a fully connected matrix.
It is 20 × 20, has 400 interconnections, and the
strength of each interconnection ranges between −M and
M, where M is the number of stored vectors.
For the IAM, the set of four bipolar binary vectors is arranged in
the form of an (M × N_x), or (4 × 20), memory matrix.
Using the probe vector X̂ given in (98), which is closest,
in terms of Hamming distance, to the stored vector X_4,
the vector X_4 is correctly retrieved in two iterations.
The respective performance comparison is tabulated in
Table 1.
7.5.3 Example 3: comparison of implementation
and computational performance of IBAM, IAM,
BAM and HAM: Let M = 15, P = 20, N_x = 60 bits and
N_y = 40 bits. The interconnection requirements, range of
weight strengths and multiplication operations of these
models are computed. The results, summarised in Table 1,
clearly demonstrate the superior performance and the
simpler structural implementation of the IBAM proposed
in this paper.
7.6 Capacity analysis
The traditional BAM is of outer-product type and requires
O(N^2) interconnections. It requires 2N_xN_y interconnections,
or, with N_x = N_y = N, 2N^2 interconnections, and stores
M(N_x + N_y) or 2MN information bits. Therefore the number of
Table 1 Comparison of different BAMs

Example 1
Type of BAM          | Interconnections      | Range of weights | Bits/Intc.           | Total bits | Iterations | Multiply operations
IBAM                 | M(N_x + N_y) = 100    | ±1               | 2 bits               | 200        | 3          | PM(N_x + N_y), 30 000
BAM                  | 2N_xN_y = 300         | ±M               | log_2 M + 1 = 3 bits | 900        | 3          | 2PN_xN_y, 96 000
BAM of Hopfield type | (N_x + N_y)^2 = 625   | ±M               | log_2 M + 1 = 3 bits | 1875       | 3          | P(N_x + N_y)^2, 200 000

Example 2
IBAM                 | M(N_x + N_y) = 160    | ±1               | 2 bits               | 320        | 2          | PM(N_x + N_y), 30 000
BAM                  | 2N_xN_y, 400          | ±M               | log_2 M + 1 = 3 bits | 1200       | 3          | 2PN_xN_y, 96 000
BAM of Hopfield type | (N_x + N_y)^2, 400    | ±M               | log_2 M + 1 = 3 bits | 1200       | 3          | P(N_x + N_y)^2, 200 000

Example 3
IBAM                 | M(N_x + N_y) = 1500   | ±1               | 2 bits               | 3000       | 20         | PM(N_x + N_y), 30 000
BAM                  | 2N_xN_y = 4800        | ±M               | log_2 M + 1 = 5 bits | 24 000     | 20         | 2PN_xN_y, 96 000
BAM of Hopfield type | (N_x + N_y)^2 = 10 000| ±M               | log_2 M + 1 = 5 bits | 50 000     | 20         | P(N_x + N_y)^2, 200 000
interconnections required to store each bit is obtained as

INT_b^t = N/M = 2 log_e N    (99)
Similarly, the proposed IBAM directly stores the information
vectors and requires O(N) interconnections. It requires
M(N_x + N_y) or 2MN interconnections and stores
M(N_x + N_y) or 2MN information bits. Therefore the number
of interconnections required to store one bit is obtained as

INT_b^d = 2MN/2MN = 1 interconnection per bit
As shown in Fig. 4, the increase in storage capacity is given as

C_inc = 2 log_e N − 1    (100)

As shown in Fig. 5, the percentage increase due to the use of
the IBAM is obtained as

η_c = [(log_e N − 1)/log_e N] × 100%    (101)
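A quick numerical check of (99), assuming natural logarithms as the log_e notation indicates: the traditional BAM's interconnections per stored bit grow as 2 log_e N, while the IBAM stays at one per bit.

```python
import math

def bam_intc_per_bit(N):
    # Eq. (99): N/M interconnections per bit, with M = N/(2 log_e N)
    return 2 * math.log(N)

IBAM_INTC_PER_BIT = 1  # direct storage: one interconnection per stored bit

# For N = 400, the traditional BAM needs about 11.98 interconnections
# per stored bit, roughly twelve times the IBAM figure
ratio = bam_intc_per_bit(400) / IBAM_INTC_PER_BIT
```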
7.7 Interconnection storage requirements
The traditional BAM, with N = N_x = N_y, requires 2N^2
interconnections with weight strengths ranging between
±M. Therefore each interconnection requires log_2 M bits
for the magnitude plus one bit for the sign. The total
memory storage requirement is given as

ST_BAM = 2N^2(log_2 M + 1) bits    (102)
Similarly, the IBAM requires M(N_x + N_y) interconnections, or 2MN if
N_x = N_y = N, with weight strengths ranging between ±1.
Therefore each interconnection requires 1 bit for the
magnitude and 1 bit for the sign. The memory storage
requirement in bits is given as

ST_IBAM = 2(2MN) bits    (103)
As a result, the net saving in memory storage is given as

ST_save = 2N[N(log_2 M + 1) − 2M]    (104)

The percentage saving because of the use of the IBAM is
obtained as

η_ST = {[N(log_2 M + 1) − 2M]/[N(log_2 M + 1)]} × 100    (105)
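A short sketch of the storage bookkeeping in (102)–(105), rounding log_2 M up to whole bits as Table 1 does (the rounding, and the sample values N = 60, M = 15, are assumptions for illustration):

```python
import math

def bits_per_weight(M):
    # ceil(log2 M) magnitude bits plus one sign bit, as in Table 1
    return math.ceil(math.log2(M)) + 1

def st_bam(N, M):
    # Eq. (102): 2N^2 interconnections, each (log2 M + 1) bits wide
    return 2 * N * N * bits_per_weight(M)

def st_ibam(N, M):
    # Eq. (103): 2MN interconnections at 2 bits (1 magnitude + 1 sign) each
    return 2 * (2 * M * N)

def saving_percent(N, M):
    # Eq. (105): saving relative to the traditional BAM
    b = bits_per_weight(M)
    return 100 * (N * b - 2 * M) / (N * b)

N, M = 60, 15
# st_bam -> 36000 bits, st_ibam -> 3600 bits, saving_percent -> 90.0
```

Note that (105) is consistent with (102)–(104): (36 000 − 3600)/36 000 is exactly 0.9.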
8 Simulation and analysis of comparative performance
The number of binary vectors stored in neural associative
memories is rather small. The M stored vectors are related
to their length of N bits by (83) as

M = N/(2 log N)    (106)

Also, following Hopfield [1, 18], the relation of the M stored
vectors to their N-bit length is given as N ≈ (6–7)M.
Therefore

N/7 ≤ M ≤ N/6 ≈ 0.15N    (107)

The implementation and computational complexity of these
memory models are compared in terms of their interconnection
requirements and the number of add and multiply
operations needed for retrieval.

Figure 4 Increase in capacity
Figure 5 Percentage increase in efficiency
Let C and D denote the computational loads of the traditional
and direct storage BAMs, respectively, and let P be the number
of iterations required for retrieval.
For traditional BAMs, the number of interconnections,
INT_t, and the add and multiply operations, C_a and C_m, are
given as

INT_t = 2N_xN_y
C_a = (M − 1)N_xN_y + 2PN_x(N_y − 1) ≈ MN_xN_y + 2PN_xN_y = N_xN_y(M + 2P)
C_m = MN_xN_y + 2PN_xN_y = N_xN_y(M + 2P)
Similarly, for the direct storage improved memory IBAM, the
number of interconnections, INT_d, and the add and multiply
operations, D_a and D_m, are given as

INT_d = 2M(N_x + N_y)
D_a = 2PM(N_x + N_y − 2) ≈ 2PM(N_x + N_y)
D_m = 2PM(N_y + N_x) = 2PM(N_x + N_y)
Assuming that one multiply operation equals four add
operations, and with N_x = N_y = N and M = 0.15N, the
comparison of the interconnections and of the total computational
loads C and D, in terms of multiply operations, is
obtained as

C/D ≈ [(5/4)N^2(M + 2P)]/[(5/4)4PMN] = N/(4P) + N/(2M)    (108)

INT_t/INT_d = 2N^2/[4(0.15)N^2] = 3.33    (109)
Substituting M from (106) into (108), one obtains

C/D ≈ N/(4P) + log N    (110)

Also, substituting M from (107), one obtains

C/D ≈ N/(4P) + 10/3    (111)
Equations (110) and (111) are used to predict the simulation
results of the comparative performance of the conventional
BAM and the direct storage improved BAM, and these are
presented in Table 2, which shows the improvement
factors. For example, with P = 30 and N = 400, the
improvement factor of the IBAM is 9.32 when compared
with the traditional BAM.
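The Table 2 entries follow from (110); a minimal check, assuming natural logarithms as in the log_e N notation used earlier:

```python
import math

def improvement_factor(N, P):
    # Eq. (110): C/D ~ N/(4P) + log N, with M = N/(2 log N) from (106)
    return N / (4 * P) + math.log(N)

# Spot checks against Table 2:
# P = 30, N = 400 gives about 9.32; P = 1, N = 20 gives about 8.00
f1 = improvement_factor(400, 30)
f2 = improvement_factor(20, 1)
```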
Fig. 6 shows the improvement in the performance of the IBAM,
which results from using only about 15% of the interconnections
of the conventional BAM.
Table 2 Comparative performance of BAM and IBAM
P/N 20 30 40 60 80 100 200 400 800 1000 1500
1 8.00 10.90 13.69 19.09 24.38 29.61 55.30 105.99 206.68 256.91 382.31
5 4.00 4.90 5.69 7.09 8.38 9.61 15.30 25.99 46.68 56.91 82.31
10 3.50 4.15 4.69 5.59 6.38 7.11 10.30 15.99 26.68 31.91 44.81
15 3.33 3.90 4.36 5.09 5.72 6.27 8.63 12.66 20.02 23.57 32.31
20 3.25 3.78 4.19 4.84 5.38 5.86 7.80 10.99 16.68 19.41 26.06
30 3.16 3.65 4.02 4.59 5.05 5.44 6.96 9.32 13.35 15.24 19.81
40 3.12 3.59 3.94 4.47 4.88 5.23 6.55 8.49 11.68 13.16 16.69
50 3.10 3.55 3.89 4.39 4.78 5.11 6.30 7.99 10.68 11.91 14.81
80 3.06 3.49 3.81 4.28 4.63 4.92 5.92 7.24 9.18 10.03 12.00
100 3.05 3.48 3.79 4.24 4.58 4.86 5.80 6.99 8.68 9.41 11.06
Figure 6 Improvement factor of interconnections
9 Conclusions
This paper presents an efficient and improved model of a
direct storage bidirectional memory, the IBAM, and
emphasises the use of nanotechnology for efficient
implementation of such large-scale neural network
structures. This model directly stores the X and Y
associated sets of M bipolar binary vectors, requires about
30% of the interconnections and is computationally very
efficient compared with sequential, intraconnected and
other BAM models of outer-product type. The simulation
results show that it has log_e N times higher storage
capacity, superior performance, and faster convergence and
retrieval than traditional sequential and intraconnected
bidirectional memories.
The analysis of the SNR and the stability of the proposed
model has been carried out.
It is shown that the IBAM is functionally equivalent to, and
possesses all the attributes of, a BAM of outer-product type,
yet it is a simple and robust structure; it is realisable in VLSI,
optics and nanotechnology; and it is a modular and
expandable neural network bidirectional associative memory
model in which the addition or deletion of a pair of vectors
does not require changes to the strengths of interconnections
of the entire memory matrix.
10 References
[1] HOPFIELD J.J.: 'Neurons with graded response have collective computational properties like those of two-state neurons', Proc. Natl. Acad. Sci. USA, 1984, 81, pp. 3088–3092
[2] TANK D.W., HOPFIELD J.J.: 'Simple neural optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit', IEEE Trans. Circuits Syst., 1986, CAS-33, (5), pp. 533–541
[3] AMARI S., MAGINU K.: 'Statistical neurodynamics of associative memory', Neural Netw., 1988, 1, (1), pp. 63–73
[4] BHATTI A.A.: 'Analysis of the effectiveness of using unipolar and bipolar binaries, and Hamming distance in neural network computing', J. Opt. Eng., 1992, 31, (9), pp. 1972–1975
[5] BHATTI A.A.: 'Analysis of performance issues in neural network based associative memory models' (IJCNN, San Diego, CA, June 1990)
[6] HWANG J.-D., HSIAO F.-H.: 'Stability analysis of neural-network interconnected systems', IEEE Trans. Neural Netw., 2003, 14, (1), pp. 201–208
[7] KOSKO B.: 'Bi-directional associative memories', IEEE Trans. Syst. Man Cybern., 1988, 18, pp. 49–60
[8] SIMPSON P.K.: 'Higher-ordered and intraconnected bidirectional associative memories', IEEE Trans. Syst. Man Cybern., 1990, 20, (3), pp. 637–653
[9] DENKER J.S.: 'Highly parallel computation network employing a binary-valued T matrix and single output amplifiers'. US Patent 4737929, 12 April 1988
[10] WANG T., ZHUANG X., XING X., XIAO X.: 'A neuron-weighted learning algorithm and its hardware implementation in associative memories', IEEE Trans. Comput., 1993, 42, (5), pp. 636–640
[11] CHRIS L.: 'Computing with nanotechnology may get a boost from neural networks', Nanotechnology, 2007. DOI: 10.1088/0957-4484/18/36/36502
[12] CLARK N.A.: 'Method for parallel fabrication of nanometer scale multi-device structures'. US Patent 4802951, 7 February 1989
[13] NUGENT A.: 'Physical neural network design incorporating nanotechnology'. US Patent 6889216, 3 May 2005
[14] NUGENT A.: 'Training of a physical neural network'. USPTO Application 20060036559, 16 February 2006
[15] VOGEL V., BAIRD B. (EDS.): 'Nanotechnology'. Report of the National Nanotechnology Initiative Workshop, Arlington, VA, USA, 9–11 October 2003
[16] CASALI D., COSTANTINI G., PERFETTI R., RICCI E.: 'Associative memory design using support vector machines', IEEE Trans. Neural Netw., 2006, 17, (5), pp. 1165–1174
[17] WANG T., ZHUANG X., XING X.: 'Designing bidirectional associative memories with optimal stability', IEEE Trans. Syst. Man Cybern., 1994, 24, (5), pp. 778–780
[18] HOPFIELD J.J.: 'Neural networks and physical systems with emergent collective computational abilities', Proc. Natl. Acad. Sci. USA, 1982, 79, pp. 2554–2558
[19] FARHAT N.: 'Optoelectronics builds viable neural-net memory', Electronics, 1986, pp. 41–44
[20] HOPFIELD J.J.: 'Electronic network for collective decision based on large number of connections between signals'. US Patent 4660166, 21 April 1987
[21] MOOPENN A.W., THAKOOR A.P., LAMBE J.J.: 'Hybrid analog-digital associative neural network'. US Patent 4807168, 21 February 1989
[22] NUGENT A.: 'Pattern recognition utilizing a nanotechnology-based neural network'. US Patent 7107252, 12 September 2006
[23] COTTRELL M.: 'Stability and attractivity in associative memory networks', Biol. Cybern., 1988, 58, pp. 129–139
[24] WANG L.: 'Heteroassociations of spatio-temporal sequences with the bidirectional associative memory', IEEE Trans. Neural Netw., 2000, 11, (6), pp. 1503–1505
[25] MCELIECE R.J., POSNER E.C., RODEMICH E.R., VENKATESH S.S.: 'The capacity of the Hopfield associative memory', IEEE Trans. Inf. Theory, 1987, 33, (4), pp. 461–482
[26] WANG J.-H., KRILE T.F., WALKUP J.F.: 'Determination of Hopfield associative memory characteristics using a single parameter', Neural Netw., 1990, 3, (3), pp. 319–331
[27] PAPOULIS A., PILLAI S.U.: 'Probability, random variables and stochastic processes' (McGraw-Hill, New York, USA, 2002, 4th edn.)
[28] ROSS S.: 'A first course in probability' (Pearson Education, Inc., London, UK, 2002, 7th edn.)