Neurocomputing 58–60 (2004) 229–234
Influence of topology on the performance of a neural network
J.J. Torres∗, M.A. Muñoz, J. Marro, P.L. Garrido
Department of Electromagnetism and Matter Physics, and Institute Carlos I for Theoretical and Computational Physics, University of Granada, Granada E-18071, Spain
Abstract
We studied the computational properties of an attractor neural network (ANN) with different network topologies. Though fully connected neural networks exhibit, in general, a good performance, they are biologically unrealistic, as it is unlikely that natural evolution leads to such a large connectivity. We demonstrate that, at finite temperature, the capacity to store and retrieve binary patterns is higher for an ANN with scale-free (SF) topology than for highly diluted random Hopfield networks with the same number of synapses. We also show that, at zero temperature, the relative performance of the SF network increases with increasing values of the distribution power-law exponent. Some consequences and possible applications of our findings are discussed.
© 2004 Elsevier B.V. All rights reserved.
PACS: 87.10.+e; 05.10.−a; 05.50.+q
Keywords: Scale-free topology; Autoassociative networks; Storage capacity
1. Introduction
There is a growing interest in evolving complex networks, in particular, networks with scale-free (SF) topology [1,2,5]. SF topology occurs in many different contexts, including the WWW and the Internet, e-mail and scientific-citation networks, ecological, protein and gene interaction networks, etc. In these examples, the degree k of a vertex, i.e., the number of arcs linking it to other vertices, is power-law distributed, P(k) ∼ k^{−γ} (see Fig. 1). This implies that the network includes a relatively large number of nodes with small connectivity, defining what we call the network boundary, and
∗ Corresponding author. Tel.: +34 958 244014; fax: +34 958 242353. E-mail address: jtorres@onsager.ugr.es (J.J. Torres).
doi:10.1016/j.neucom.2004.01.048
a few nodes, the hubs, with a large connectivity, comparable to the network size N. As a consequence, SF networks exhibit the interesting small-world property, that is, the average path length between two nodes is very small compared to the network size. Evolving networks with such complex topology are also common in biology. Neuronal networks, for instance, seem to exhibit the small-world property. This was recently demonstrated in a set of in vitro experiments on growing cultured neurons [6].
Although an impressive amount of work has been done in the last few years concerning SF networks, the specific consequences of such an architecture for the performance of autoassociative neural networks have only recently been reported [3,7]. The authors in [7] show that a SF neural network is able to store and retrieve P patterns with a lower computer-memory cost than the fully connected Hopfield neural network. They also find a similar performance with a (biologically unrealistic) nearest-neighbor hypercubic Ising lattice. The authors in [3] study the zero-temperature behavior of different topologies, namely, the Barabasi–Albert (BA) SF, small-world and randomly diluted networks, and report a better performance for the randomly diluted case than for the other topologies. However, for the relatively large mean connectivity these authors use (⟨k⟩ = 50), the BA network does not have the SF property [7], so this result is of limited interest.
We here report on the influence of topology on the associative-memory task of the network as a function of temperature. In particular, we focus on two main issues, namely, the robustness of the network performance against thermal noise for varying topology, and the effect of varying the SF connectivity distribution P(k) on the network performance.
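The power-law degree statistics described above are easy to reproduce numerically. The following sketch (our own illustration, not the authors' code; all function and parameter names are our choices) grows a Barabási–Albert network by preferential attachment and inspects its degree sequence:

```python
# Sketch: minimal Barabasi-Albert growth model, illustrating the scale-free
# degree distribution P(k) ~ k^(-gamma) discussed in the text.
import random
from collections import Counter

def ba_network(n_nodes, m=3, m0=3, seed=0):
    """Grow a BA network: each new node attaches m links preferentially."""
    rng = random.Random(seed)
    # Start from a small fully connected core of m0 nodes.
    edges = [(i, j) for i in range(m0) for j in range(i + 1, m0)]
    # 'stubs' lists every edge endpoint; sampling it uniformly implements
    # preferential attachment (probability proportional to degree).
    stubs = [v for e in edges for v in e]
    for new in range(m0, n_nodes):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.append((new, t))
            stubs += [new, t]
    return edges

edges = ba_network(4000)
degree = Counter(v for e in edges for v in e)
# Heavy tail: a few hubs acquire a degree far above the minimum degree m,
# while most nodes (the "boundary") stay near it.
print(max(degree.values()), min(degree.values()))
```

Binning the resulting degree counts on a log-log scale reproduces the straight-line tail characteristic of P(k) ∼ k^{−γ}.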
2. Definition of models
Consider the BA evolving network [1] with N nodes and m(N − m₀) links. Here, m₀ is the initial number of nodes generating the network, m ⩽ m₀ is the number of links that are added during the evolution at each time step, and N is the final number of nodes in the network. This will later be generalized to consider other SF networks. In order to have a neural system with the chosen topology, we place a binary neuron, s_i = 1 or 0, at each node i, and then "store" P binary random patterns, ξ^μ ≡ {ξ_i^μ = 1 or 0}, μ = 1,…,P, with mean activity level ⟨ξ_i^μ⟩ = 1/2. This is done in practice by associating a synaptic intensity ω_ij with each link according to the Hebbian learning rule,

ω_ij = (1/N) Σ_{μ=1}^{P} (2ξ_i^μ − 1)(2ξ_j^μ − 1).   (1)

A meaningful direct comparison of this SF neural network (SFNN) and the standard Hopfield neural network cannot be made because the second case involves
(1)A meaningful direct comparison of this SF neural network (SFNN) and the standardHopeld neural network cannot be made because the second case involves
N
2
synaptic connections. A hypercubic Ising lattice has the same number of synapses than theSFNN; however, real neural systems are known to exhibit more complex neuron connectivity than the Ising network. Consequently, we compare the performance of the
Fig. 1. Left: Connectivity probability distributions P(k) for a SF network with γ = 3 (filled circles) and for a HDHN (open circles). Right: Connectivity probability distributions P(k) ∼ k^{−γ} generated for a SF network by tuning the exponent γ = 1, 2, 3, 4. Each data point corresponds to an average over 100 networks with N = 4000 neurons each.
Fig. 2. Left: Overlap m₁ versus temperature T (in arbitrary units) for a BA network (filled squares) and a HDHN (filled circles). Data correspond to an average over 100 realizations of a network with N = 1600 neurons, P = 1 and m = m₀ = 3. Right: Local overlap m(k) for a BA network as a function of the connectivity k for increasing values of the temperature (successive decreasing curves).
SFNN with that of a highly diluted Hopfield network (HDHN). The HDHN is obtained from the standard Hopfield network by randomly suppressing synapses until only m(N − m₀) of them remain, i.e., the number of synapses scales as N and not as N². To maintain the SF behavior in the BA network, the value of m must be very small compared to the network size, that is, m ≪ N [5]. The connectivity distribution of
the HDHN is illustrated in Fig. 1 (left). The main difference between this distribution and the corresponding one for a SF network is that the latter has no typical connectivity value. More specifically, the SF network distribution is a power law, while the HDHN distribution has a maximum and a Gaussian decay and, consequently, may be characterized by a (typical) mean connectivity.
A relevant magnitude to monitor in order to compare the performance of different topologies is the overlap function, defined for pattern μ as

m^μ ≡ (2/N) Σ_i (2ξ_i^μ − 1) s_i.   (2)

The performance of the two networks is compared in Fig. 2 for P = 1 and m = 3. This clearly shows that, excluding very low temperature, the retrieval of information
Fig. 3. Left: Zero-temperature performance m_final of a Molloy–Reed SF network for γ = 1.5, 2, 2.5, 3 (solid lines) compared to the performance of different HDHN with the same number of synapses (dashed lines), as a function of the number of stored patterns P. Right: Relative difference Δm_final in the performance for the cases shown in the left panel.
as characterized by m is better for the SFNN than for the HDHN. In both cases, the retrieval of information deteriorates as P is increased. However, we also observe that, at finite temperature, the performance of the SFNN increases significantly if one considers only the retrieval of information concerning neurons with a connectivity degree higher than a certain value k₀, i.e., the hubs [8]. This can be understood on simple grounds by computing the local overlap m_i^1 with one pattern for neuron i and averaging over all neurons with the same connectivity k. The resulting mean overlap for a given connectivity, m(k), is plotted in Fig. 2 (right) for different temperatures. This shows that, even at high temperature, the local overlap for hubs is close to one, whereas it is very small for the boundary (fluctuations in the lower curves are due to the small number of hubs present in the network for the relatively small network size we are using). This finding reflects the "negative" role of the boundary and the "positive" role of the hubs in the SFNN performance during each retrieval experiment when thermal fluctuations are considered. This observation is in agreement with the T = 0 behavior reported in [3].
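The degree-resolved averaging just described can be sketched in a few lines. In this illustration (ours, not the authors'), the per-neuron overlap is taken as (2ξ_i − 1)(2s_i − 1), one plausible choice consistent with Eq. (2); the paper's exact normalization of m(k) may differ:

```python
# Sketch: group neurons by degree k and average a per-neuron overlap with a
# stored pattern, giving an m(k) curve like the one in Fig. 2 (right).
from collections import defaultdict

def local_overlap_by_degree(pattern, state, degree):
    """pattern, state: dicts node -> 0/1; degree: dict node -> k."""
    sums, counts = defaultdict(float), defaultdict(int)
    for i, k in degree.items():
        # +1 when neuron i matches the pattern, -1 when it does not.
        sums[k] += (2 * pattern[i] - 1) * (2 * state[i] - 1)
        counts[k] += 1
    return {k: sums[k] / counts[k] for k in sums}

# Toy check: "hubs" (k = 10) retrieve the pattern, "boundary" nodes (k = 1)
# are half right: node 2 is wrong, node 3 is correct.
pattern = {0: 1, 1: 0, 2: 1, 3: 0}
state   = {0: 1, 1: 0, 2: 0, 3: 0}
degree  = {0: 10, 1: 10, 2: 1, 3: 1}
print(local_overlap_by_degree(pattern, state, degree))  # {10: 1.0, 1: 0.0}
```

Applied to a thermalized state of the BA network, the same grouping separates the near-perfect hub retrieval from the noisy boundary.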
Another important issue is how the exponent of the distribution influences the performance of a SF network for associative-memory tasks. In order to analyze this, we studied networks characterized by different power-law exponents. To this end, we used a Molloy–Reed (MR) SF network [4] with P(k) ∼ k^{−γ}, where γ is a tunable parameter. As illustrated in Fig. 1 (right), the number of neurons in the network with a high connectivity increases as γ is decreased.
Even more interesting is the comparison of the behavior of the MR network with that of the HDHN in the limit T = 0 (cf. Fig. 3). As thermal fluctuations are then
suppressed, the network performance is only perturbed by the interference among the stored patterns. In order to consider the limit of interest, we started with one of the stored patterns, and computed the state of each neuron according to the deterministic rule s_i(t + 1) = Θ(h_i(t)). Here, Θ(x) is the Heaviside step function, and h_i ≡ Σ_j^{(i)} ω_ij s_j is the local field associated with neuron i, with the sum running over all neurons j connected to it. At the end of the network evolution, we recorded the overlap, m_final, with the starting pattern as a function of the total number of stored patterns P. In order to visualize the difference in performance between the two types of networks, we defined
Δm_final ≡ m_final^SFNN − m_final^HDHN. Fig. 3 shows how this difference in performance varies with the number of stored patterns. The graph illustrates that the SFNN performs increasingly better than the HDHN as the number of stored patterns is increased. This effect is enhanced as γ is increased. This can be understood by considering the different decays of P(k) for large k in the SFNN and the HDHN, and the fact that, for increasing values of γ, the relative position of ⟨k⟩ moves to the left for both topologies, but, due to the power-law decay, the effect of the hubs remains for the SFNN.
Summing up, the topology of a neural network plays a key role in the processes of memorization and retrieval of patterns. In particular, neural networks with scale-free topology may exhibit a better performance than Hopfield-like networks with the same number of synapses distributed randomly over the network. Our study can be useful to understand the role of regions with different connectivity degrees in real neural systems during memorization and retrieval of information. In particular, it may improve our understanding of how fluctuations or perturbations in the typical number of synapses of some brain areas can affect the processing of information and memorization in these regions. Our study also suggests the convenience of developing new methods to store the most relevant information in the hubs, thereby increasing the effective network performance and efficiency. It would be desirable to check our findings against experimental observations on real neural systems, focusing on the topology which is built up by natural selection.
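The zero-temperature retrieval experiment of this section can be sketched end to end (our own illustration, not the authors' code): Hebbian couplings of Eq. (1) restricted to the links of a given graph, parallel deterministic Θ-dynamics, and the final overlap of Eq. (2). We take Θ(h) = 1 for h > 0 and 0 otherwise; the paper does not state the convention at h = 0.

```python
# Sketch: store P random binary patterns with the Hebbian rule on a sparse
# graph, run s_i(t+1) = Theta(h_i(t)) from a stored pattern, measure m_final.
import random

def retrieval_overlap(edges, n, P, seed=0, steps=10):
    rng = random.Random(seed)
    patterns = [[rng.randint(0, 1) for _ in range(n)] for _ in range(P)]
    neighbors = {i: [] for i in range(n)}
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    # Hebbian coupling, Eq. (1), defined only on existing links.
    def w(i, j):
        return sum((2 * xi[i] - 1) * (2 * xi[j] - 1) for xi in patterns) / n
    s = list(patterns[0])                       # start from the first pattern
    for _ in range(steps):                      # parallel deterministic updates
        h = [sum(w(i, j) * s[j] for j in neighbors[i]) for i in range(n)]
        s = [1 if h_i > 0 else 0 for h_i in h]  # Theta(h): Heaviside step
    # Overlap with the starting pattern, Eq. (2).
    return 2 / n * sum((2 * patterns[0][i] - 1) * s[i] for i in range(n))

# Fully connected toy network, single stored pattern: the pattern is a fixed
# point of the dynamics, so retrieval is (near) perfect.
n = 40
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
print(retrieval_overlap(edges, n, P=1))
```

Feeding in the edge list of a BA or Molloy–Reed graph instead, and sweeping P, reproduces the kind of m_final curves compared in Fig. 3.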
Acknowledgements
We thank Dr. Pastor-Satorras for fruitful suggestions. This work was supported by the Spanish MCyT and FEDER under the "Ramón y Cajal" contract and Project No. BFM2001-2841, as well as by the FET Open Project IST-2001-33555 COSIN.
References
[1] R. Albert, A.-L. Barabási, Statistical mechanics of complex networks, Rev. Mod. Phys. 74 (1) (2002) 47–97.
[2] S. Dorogovtsev, J. Mendes, Evolution of networks, Adv. Phys. 51 (2002) 1079–1187.
[3] P. McGraw, M. Menzinger, Topology and computational performance of attractor neural networks, Phys. Rev. E 68 (4) (2003) 047102.
[4] M. Molloy, B. Reed, A critical point for random graphs with a given degree sequence, Random Struct. Algorithms 6 (2–3) (1995) 161–179.
[5] R. Pastor-Satorras, A. Vespignani, Evolution and Structure of the Internet, Cambridge University Press, Cambridge, 2003.
[6] O. Shefi, I. Golding, R. Segev, E. Ben-Jacob, A. Ayali, Morphological characterization of in vitro neuronal networks, Phys. Rev. E 66 (2) (2002) 021905.
[7] D. Stauffer, A. Aharony, L. Costa, J. Adler, Efficient Hopfield pattern recognition on a scale-free neural network, Eur. Phys. J. B 32 (3) (2003) 395–399.
[8] J. Torres, M. Muñoz, J. Marro, P. Garrido, to be published.