Neurocomputing 58–60 (2004) 229–234

Influence of topology on the performance of a neural network

J.J. Torres*, M.A. Muñoz, J. Marro, P.L. Garrido

Department of Electromagnetism and Matter Physics, and Institute Carlos I for Theoretical and Computational Physics, University of Granada, Granada E-18071, Spain

*Corresponding author. Tel.: +34-958-24-4014; fax: +34-958-242-353. E-mail address: (J.J. Torres).

Abstract

We studied the computational properties of an attractor neural network (ANN) with different network topologies. Though fully connected neural networks exhibit, in general, good performance, they are biologically unrealistic, as it is unlikely that natural evolution leads to such a large connectivity. We demonstrate that, at finite temperature, the capacity to store and retrieve binary patterns is higher for an ANN with scale-free (SF) topology than for a highly diluted random Hopfield network with the same number of synapses. We also show that, at zero temperature, the relative performance of the SF network increases with increasing values of the distribution power-law exponent. Some consequences and possible applications of our findings are discussed.

© 2004 Elsevier B.V. All rights reserved. doi:10.1016/j.neucom.2004.01.048

PACS: 87.10.+e; 05.10.−a; 05.50.+q

Keywords: Scale-free topology; Autoassociative networks; Storage capacity

1. Introduction

There is growing interest in evolving complex networks, in particular networks with scale-free (SF) topology [1,2,5]. SF topology occurs in many different contexts, including the WWW and the Internet, e-mail and scientific-citation networks, and ecological, protein and gene interaction networks. In these examples, the degree k of a vertex, i.e., the number of arcs linking it to other vertices, is power-law distributed, P(k) ~ k^(−γ) (see Fig. 1). This implies that the network includes a relatively large number of nodes with small connectivity, defining what we call the network boundary, and a few nodes, the hubs, with a connectivity comparable to the network size N. As a consequence, SF networks exhibit the interesting small-world property: the average path length between two nodes is very small compared to the network size.

Evolving networks with such complex topology are also common in biology. Neuronal networks, for instance, seem to exhibit the small-world property, as was recently demonstrated in a set of in vitro experiments on growing cultured neurons [6]. Although an impressive amount of work on SF networks has been done in the last few years, the specific consequences of such an architecture for the performance of auto-associative neural networks have been reported only recently [3,7]. The authors of [7] show that a SF neural network is able to store and retrieve P patterns at a lower computer-memory cost than the fully connected Hopfield neural network. They also find a similar performance with a (biologically unrealistic) nearest-neighbor hypercubic Ising lattice. The authors of [3] study the zero-temperature behavior of different topologies, namely the Barabasi–Albert (BA) SF, small-world and randomly diluted networks, and report a better performance for the randomly diluted case than for the other topologies. However, for the relatively large mean connectivity these authors use (⟨k⟩ = 50), the BA network does not have the SF property [7], so this result lacks interest.

Here we report on the influence of topology on the associative-memory task of the network as a function of temperature. In particular, we focus on two main issues: the robustness of the network performance against thermal noise for varying topology, and the effect of varying the SF connectivity distribution P(k) on the network performance.

2. Definition of models

Consider the BA evolving network [1] with N nodes and m(N − m0) links. Here, m0 is the initial number of nodes generating the network, m ≤ m0 is the number of links added during the evolution at each time step, and N is the final number of nodes in the network. This will later be generalized to other SF networks. In order to have a neural system with the chosen topology, we place a binary neuron, s_i = 1 or 0, at each node i, and then "store" P binary random patterns, ξ^μ ≡ {ξ_i^μ = 1 or 0}, μ = 1, ..., P, with mean activity level ⟨ξ_i^μ⟩ = 1/2. In practice this is done by associating a synaptic intensity ω_ij with each link according to the Hebbian learning rule

    ω_ij = (1/N) Σ_{μ=1}^{P} (2ξ_i^μ − 1)(2ξ_j^μ − 1).    (1)

A meaningful direct comparison of this SF neural network (SFNN) with the standard Hopfield neural network cannot be made, because the latter involves N² synaptic connections. A hypercubic Ising lattice has the same number of synapses as the SFNN; however, real neural systems are known to exhibit more complex neuron connectivity than the Ising network. Consequently, we compare the performance of the

Fig. 1. Left: connectivity probability distributions for a SF network with γ = 3 (filled circles) and for a HDHN (open circles). Right: connectivity probability distributions P(k) ~ k^(−γ) generated for a SF network by tuning the exponent γ (γ = 1, 2, 3, 4). Each data point corresponds to an average over 100 networks with N = 4000 neurons each.

Fig. 2. Left: overlap curves for varying temperature (in arbitrary units) for a BA network (filled squares) and a HDHN (filled circles).
Data correspond to an average over 100 realizations of a network with N = 1600 neurons, P = 1 and m = m0 = 3. Right: local overlap m(k) for a BA network as a function of the connectivity k, for increasing values of the temperature (successive decreasing curves).

SFNN with that of a highly diluted Hopfield network (HDHN). The HDHN is obtained from the standard Hopfield network by randomly suppressing synapses until only m(N − m0) of them remain, i.e., the number of synapses scales as N and not as N². To maintain the SF behavior in the BA network, the value of m must be very small compared to the network size, that is, m ≪ N [5]. The connectivity distribution of the HDHN is illustrated in Fig. 1 (left). The main difference between this distribution and the corresponding one for a SF network is that the latter has no typical connectivity value. More specifically, the SF distribution is a power law, while the HDHN distribution has a maximum and a Gaussian decay and, consequently, may be characterized by a (typical) mean connectivity.

A relevant quantity to monitor when comparing the performance of different topologies is the overlap function, defined for pattern μ as

    m^μ ≡ (2/N) Σ_i (2ξ_i^μ − 1) s_i.    (2)

The performance of the two networks is compared in Fig. 2 for P = 1 and m = 3. This clearly shows that, excluding very low temperatures, the retrieval of information

Fig. 3. Left: zero-temperature performance of a Molloy–Reed SF network for γ = 1.5, 2, 2.5, 3 (solid lines) compared to the performance of different HDHNs with the same number of synapses (dashed lines), as a function of the number of stored patterns P.
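The overlap (2) is straightforward to evaluate given a stored 0/1 pattern and the current 0/1 network state; a minimal sketch (our own naming):

```python
def overlap(pattern, state):
    """Overlap (2): m = (2/N) * sum_i (2 xi_i - 1) s_i, for binary
    xi_i, s_i in {0, 1}.  Equals 1 for perfect retrieval of a
    half-active pattern and -1 for its anti-pattern."""
    n = len(pattern)
    return 2.0 * sum((2 * xi - 1) * s for xi, s in zip(pattern, state)) / n
```

For a half-active pattern such as [1, 0, 1, 0], overlap(p, p) gives 1.0 and overlap(p, [0, 1, 0, 1]) gives -1.0, while an uncorrelated state gives a value near zero.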
The right panel of Fig. 3 shows the relative difference in performance for the cases in the left panel.

as characterized by m is better for the SFNN than for the HDHN. In both cases, the retrieval of information deteriorates as P is increased. However, we also observe that, at finite temperature, the performance of the SFNN increases significantly if one considers only the retrieval of information concerning neurons with a connectivity degree higher than a certain value k0, i.e., the hubs [8]. This can be understood on simple grounds by computing the local overlap m_i^1 with one pattern for neuron i and averaging over all neurons with the same connectivity k. The resulting mean overlap for a given connectivity, m(k), is plotted in Fig. 2 (right) for different temperatures. It shows that, even at high temperature, the local overlap for hubs is close to one, whereas it is very small for the boundary (fluctuations in the lower curves are due to the small number of hubs present at the relatively small network size we use). This finding reflects the "negative" role of the boundary and the "positive" role of the hubs in the SFNN performance during each retrieval experiment when thermal fluctuations are considered. The observation agrees with the T = 0 behavior reported in [3].

Another important issue is how the exponent of the distribution influences the performance of the SF network in associative-memory tasks. To analyze this, we studied networks characterized by different power-law exponents. To this end, we used a Molloy–Reed (MR) SF network [4] with P(k) ~ k^(−γ), where γ is a tunable parameter. As illustrated in Fig. 1 (right), the number of neurons with high connectivity increases as γ is decreased.

Even more interesting is a comparison of the behavior of the MR network with that of the HDHN in the limit T = 0 (cf. Fig. 3).
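The MR construction starts from a degree sequence drawn from P(k) ~ k^(−γ) and then wires randomly paired half-edges ("stubs"). A minimal sketch of the sampling step (our own naming; the k_min and k_max cutoffs are illustrative assumptions, not values taken from the paper):

```python
import random

def powerlaw_degrees(n, gamma, k_min=2, k_max=100, seed=0):
    """Draw n node degrees with P(k) proportional to k**(-gamma) on
    [k_min, k_max]; stub pairing requires an even total degree."""
    rng = random.Random(seed)
    ks = list(range(k_min, k_max + 1))
    weights = [k ** -gamma for k in ks]
    seq = rng.choices(ks, weights=weights, k=n)
    if sum(seq) % 2:                  # make the number of stubs even
        seq[0] += 1
    return seq
```

Lowering gamma shifts weight toward large k, i.e., it produces more hubs, consistent with Fig. 1 (right).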
As thermal fluctuations are suppressed at T = 0, the network performance is perturbed only by the interference among the stored patterns. To study this limit, we started from one of the stored patterns and computed the state of each neuron according to the deterministic rule s_i(t+1) = Θ(h_i(t)). Here, Θ(x) is the Heaviside step function, and h_i ≡ Σ_j ω_ij s_j is the local field associated with neuron i, with the sum running over all neurons j connected to it. At the end of the network evolution, we recorded the overlap with the starting pattern, m_final, as a function of the total number of stored patterns P. To visualize the difference in performance between the two types of networks, we defined Δm_final ≡ m_final^SFNN − m_final^HDHN. Fig. 3 shows how this difference in performance varies with the number of stored patterns. The graph illustrates that the SFNN performs better and better, compared with the HDHN, as the number of stored patterns is increased. This effect is enlarged as γ is increased. It can be understood by considering the different decays of P(k) at large k in the SFNN and the HDHN: for increasing values of γ, the relative position of ⟨k⟩ moves to the left for both topologies, but owing to the power-law decay, the effect of the hubs persists in the SFNN.

Summing up, the topology of a neural network plays a key role in the processes of memorization and retrieval of patterns. In particular, neural networks with scale-free topology may exhibit a better performance than Hopfield-like networks with the same number of synapses distributed randomly over the network. Our study can be useful for understanding the role of regions with different connectivity degrees in real neural systems during memorization and retrieval of information.
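The zero-temperature retrieval dynamics used above, s_i(t+1) = Θ(h_i(t)) with h_i = Σ_j ω_ij s_j over the links of neuron i, can be sketched as follows. This is an illustrative synchronous-update version with our own naming (the paper does not specify the update schedule); weights are assumed given as a dict mapping links (i, j) to synaptic intensities.

```python
def retrieve_T0(w, state, sweeps=10):
    """Iterate s_i <- Theta(h_i) at T = 0, with Theta(x) = 1 if x > 0
    else 0, until a fixed point or the sweep limit is reached."""
    n = len(state)
    neigh = {}                        # weighted adjacency, both directions
    for (i, j), wij in w.items():
        neigh.setdefault(i, []).append((j, wij))
        neigh.setdefault(j, []).append((i, wij))
    for _ in range(sweeps):
        h = [sum(wij * state[j] for j, wij in neigh.get(i, []))
             for i in range(n)]
        new = [1 if hi > 0 else 0 for hi in h]
        if new == state:              # fixed point reached
            return new
        state = new
    return state
```

Starting from a stored pattern (or a noisy version of it) and recording the final overlap with that pattern, as a function of P, yields m_final.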
These results may also improve our understanding of how fluctuations or perturbations in the typical number of synapses in some brain areas can affect the processing of information and memorization in those regions. Our study also suggests the convenience of developing new methods to store the most relevant information in the hubs, increasing in this way the effective network performance and efficiency. It would be desirable to check our findings against experimental observations on real neural systems, focusing on the topology built up by natural selection.

Acknowledgements

We thank Dr. Pastor-Satorras for fruitful suggestions. This work was supported by the Spanish MCyT and FEDER through the "Ramon y Cajal" contract and Project No. BFM2001-2841, as well as by the EU FET Open Project IST-2001-33555 COSIN.

References

[1] R. Albert, A. Barabasi, Statistical mechanics of complex networks, Rev. Mod. Phys. 74 (1) (2002) 47–97.
[2] S. Dorogovtsev, J. Mendes, Evolution of networks, Adv. Phys. 51 (2002) 1079–1187.
[3] P. McGraw, M. Menzinger, Topology and computational performance of attractor neural networks, Phys. Rev. E 68 (4) (2003) 047102.
[4] M. Molloy, B. Reed, A critical point for random graphs with a given degree sequence, Random Struct. Algorithms 6 (2–3) (1995) 161–179.
[5] R. Pastor-Satorras, A. Vespignani, Evolution and Structure of the Internet, Cambridge University Press, Cambridge, 2003.
[6] O. Shefi, I. Golding, R. Segev, E. Ben-Jacob, A. Ayali, Morphological characterization of in-vitro neuronal networks, Phys. Rev. E 66 (2) (2002) 021905.
[7] D. Stauffer, A. Aharony, L. Costa, J. Adler, Efficient Hopfield pattern recognition on a scale-free neural network, Eur. Phys. J. B 32 (3) (2003) 395–399.
[8] J. Torres, M. Muñoz, J. Marro, P. Garrido, to be published.