Computer Physics Communications 177 (2007) 180–183
www.elsevier.com/locate/cpc
doi:10.1016/j.cpc.2007.02.036
Networks with heterogeneously weighted connections and partial synchronization of nodes

J. Marro a,*, Joaquín J. Torres a, Jesús M. Cortés b

a Institute "Carlos I" for Theoretical and Computational Physics, University of Granada, 18071 Granada, Spain
b Institute for Adaptive and Neural Computation, University of Edinburgh, 5 Forrest Hill, EH1 2QL, UK

Available online 22 February 2007
Abstract

A network of stochastic nodes in which the connections are heterogeneously weighted and the dynamics may be varied from single-node updating to full synchronization, as in familiar cellular automata, is studied in connection with computational strategies and states of attention in the brain.

© 2007 Elsevier B.V. All rights reserved.

Keywords: Weighted networks; Cellular automata; Hybrid updating
1. Introduction and model
In addition to a varied topological structure of communication lines, ecological, metabolic and food webs, the Internet and other social nets, spin-glass and reaction–diffusion systems, the brain, and the central nervous system exhibit two main features. On one hand, the intensities of the connections between nodes are heterogeneously weighted and may change with time [1–14]. That is, fluxes along chains show a broad distribution, agents may interchange different amounts of information or money, transport connections differ in capacity and in the number of flights and passengers, diffusion, local rearrangements and reactions vary the relations between ions, and synapses show complex patterns of intensities. On the other hand, it also happens rather generally that not all the nodes are synchronized when a given task is performed, which, more than a matter of economy, is probably a must [1,3,6]. For example, it seems that, in some cases, only a fraction of the neurons in a brain region are activated at a given time, so that the rest may act as a sort of working memory [15].

Concluding on general properties of partly-synchronized weighted networks is a difficult goal, however. A main problem is that, as is seldom recognized in the relevant literature, which is dispersed among different fields, one needs to deal with fully nonequilibrium states. That is, time evolution is towards situations that cannot settle down into an equilibrium state and, consequently, emergent properties essentially depend on the system details [2]. This paper is a brief review of a series of exact and Monte Carlo results concerning a model which is relevant to this purpose [16–19]. As an example, we consider here a situation in which the dynamics shows attractors that are destabilized by fast activity-dependent synaptic fluctuations. This induces a great sensitivity to external stimuli and, for certain parameter values, switching and itinerancy which is sometimes chaotic. The system activity thus describes heteroclinic paths among attractors in a way that closely resembles some recently reported experimental observations [20,21].

* Corresponding author. E-mail address: jmarro@ugr.es (J. Marro).

Consider sets of node activities, $\sigma \equiv \{\sigma_i = \pm 1\}$, and communication-line weights, $w \equiv \{w_{ij} \in \mathbb{R}\}$, $i,j = 1,\ldots,N$. Nodes are acted on by local fields induced by the weighted action of the $N-1$ others, i.e. $h_i(\sigma, w) = \sum_{j \neq i} w_{ij} \sigma_j$. Time evolution follows a generalized cellular-automaton strategy: at each time unit, one simultaneously updates the activity of $n$ variables, $1 \leq n \leq N$, and the probability of the network activity evolves with time, $t$, according to $P_{t+1}(\sigma) = \sum_{\sigma'} R(\sigma' \to \sigma)\, P_t(\sigma')$. The transition rate $R(\sigma \to \sigma')$ is a superposition of functions $\varphi(\sigma_i \to \sigma'_i = -\sigma_i) = \frac{1}{2}[1 - \sigma_i \tanh(\beta h_i)]$, where $\beta$ is an inverse "temperature" that controls the stochasticity of the process. See [19] for details.

This generalizes two familiar cases: sequential (Glauber) updating is for $n = 1$, so that it is obtained approximately in
the limit $\rho \equiv n/N \to 0$, while parallel (Little) updating is for $n = N$, i.e. $\rho \to 1$. One may think of situations whose understanding will benefit from studying the crossover between these two cases. For example, assuming a cell which is excited only in the presence of a neuromodulator such as dopamine, the parameter $n$ will correspond to the number of neurons that are modulated in each cycle. That is, the other $N - n$ neurons receive no input but maintain memory of the previous state, which has been claimed to be at the basis of working memories [15]. It ensues that time evolution follows the mesoscopic equation $\pi^{\mu}_{t+1}(\sigma) = \frac{\rho}{N} \sum_i \xi^{\mu}_i \tanh[\beta h_i(\sigma; \pi_t, \xi)] + (1 - \rho)\, \pi^{\mu}_t(\sigma)$, $\mu = 1,\ldots,M$. Here, $\xi \equiv \{\xi^{\mu}\}$ stands for a set of $M$ learned patterns, $\xi^{\mu} = \{\xi^{\mu}_i = \pm 1\}$, and $\pi \equiv \{\pi^{\mu}(\sigma)\}$, where $\pi^{\mu}(\sigma) = N^{-1} \sum_i \xi^{\mu}_i \sigma_i$ measures the overlap of the current state with pattern $\mu$.
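The hybrid updating scheme above can be simulated directly at the microscopic level. The following is a minimal NumPy sketch, not the authors' code: it assumes plain Hebb weights $w_{ij} = N^{-1}\sum_\mu \xi^\mu_i \xi^\mu_j$ (introduced in Section 2), and the values of $N$, $M$, $\beta$ and $\rho$ are illustrative choices, not taken from the paper's figures. At each time unit, $n = \rho N$ randomly chosen nodes are updated simultaneously, each flip being accepted with probability $\varphi = \frac{1}{2}[1 - \sigma_i \tanh(\beta h_i)]$.

```python
import numpy as np

# Sketch of the hybrid (partially synchronized) updating scheme with Hebb
# weights.  All parameter values are illustrative assumptions.
rng = np.random.default_rng(0)
N, M = 400, 3                            # nodes and stored patterns
beta, rho = 20.0, 0.5                    # inverse "temperature" and rho = n/N
n = int(rho * N)                         # nodes updated per time unit

xi = rng.choice([-1, 1], size=(M, N))    # random stored patterns xi^mu_i
w = (xi.T @ xi) / N                      # Hebb weights w_ij
np.fill_diagonal(w, 0.0)                 # no self-connections

sigma = xi[0].copy()                     # start at pattern 1
for _ in range(100):                     # 100 time units
    idx = rng.choice(N, size=n, replace=False)       # the n synchronized nodes
    h = w[idx] @ sigma                               # their local fields h_i
    flip_prob = 0.5 * (1.0 - sigma[idx] * np.tanh(beta * h))
    flips = rng.random(n) < flip_prob
    sigma[idx[flips]] *= -1                          # simultaneous update

overlap = (xi @ sigma) / N               # overlaps pi^mu after relaxation
```

With fixed Hebb weights and low stochasticity, the state remains close to the stored pattern (`overlap[0]` stays near 1) regardless of $\rho$, consistent with the result of Section 2 that $\rho$ only matters for suitably fluctuating weights.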
2. Some results
Concluding on the relevant behavior requires a detailed study of the stability of the steady state for finite $N$ and appropriate communication-line weights. The Hopfield–Ising case [22] is often implemented with fixed weights according to the Hebb prescription, namely, $w_{ij} = N^{-1} \sum_{\mu} \xi^{\mu}_i \xi^{\mu}_j$. In this case, the system shows the property of associative memory for $\rho \to 0$ and also, confirming previous partial results [23], for $\rho > 0$. That is, for high enough $\beta$ (which means below a certain stochasticity) and not exceeding some critical capacity $\alpha \equiv M/N$, the patterns $\xi^{\mu}$ are attractors of the dynamics. Consequently, an initial state resembling one of these patterns, e.g., a degraded picture, will converge towards the original one, which mimics recognition by the brain [22]. Excluding this case, our model behavior will depend, even dramatically, on the value of $\rho$. More explicitly, a main general result of our work is that the attractor stability for finite $N$ is extremely sensitive to the distribution of weights $w_{ij}$ and, for appropriate choices of these, to slight variations of the synchronization parameter $\rho$.

The communication lines depend on the specific situation of interest. Concerning different contexts, one may admit that the weights will change with the node activity, and also that a given connection may lose some efficiency after a time interval of heavy work. In fact, this has been reported to occur in the brain, where the transmission of information and many computations are strongly correlated with activity-dependent synaptic fluctuations which induce synaptic depression [9,16,24–27]. Motivated by this, we shall assume:

(1)   $w_{ij} = \bigl[1 - (1 - \Phi)\, q(\pi)\bigr]\, N^{-1} \sum_{\mu=1}^{M} \xi^{\mu}_i \xi^{\mu}_j$,

where $q(\pi) \equiv (1 + \alpha)^{-1} \sum_{\mu} \pi^{\mu}(\sigma)^2$. Therefore, Hopfield–Hebb is recovered for $\Phi = 1$, while other values of this parameter correspond to fast fluctuations with time (around a type of Hebb prescription) which induce depression of synapses by a factor $\Phi$ on average. This is also consistent with the observation of synaptic noise besides the more familiar plasticity of synapses; see, for instance, [9]. It follows that local stability requires $\rho < \rho_c$, $\rho_c = f(\Phi, \beta)$, a condition that makes no sense for the Hopfield case [19].

Fig. 1. The dependence on the synchronization parameter $\rho = n/N$ of the Lyapunov exponent, as obtained analytically from the saddle-point solution, for $\Phi = -0.2$ (solid irregular line) and for the standard Hopfield–Hebb case (dashed line). The value $\rho_c$, as defined in the main text, and the line $\lambda = 0$ are also shown for reference purposes. This is for a single (randomly generated) stored pattern, inverse "temperature" $\beta = 20$, and in the (nonrealistic) limit $N \to \infty$.

Fig. 2. Stationary parts of the evolution with time (in units of $n$ MC trials) of the overlaps for $\rho = 0.08$, 0.50, 0.65, 0.92 and 1.00, from top to bottom, respectively. Here, $N = 1600$, $\beta = 20$, $\Phi = -0.4$, $M = 3$, and $\rho_c = 0.085$.

Fig. 1 summarizes a main result, namely, that chaotic behavior may occur for $\rho > \rho_c$, and that chaos is then eventually interrupted as one varies $\rho$, even slightly. Fig. 2 illustrates the different types of stationary behavior the system may exhibit for correlated patterns. This shows typical MC runs corresponding, from top to bottom, to: (i) stability after convergence in the neighborhood of one attractor (in fact, its negative) for $\rho < \rho_c$; (ii) fully irregular behavior (with a positive Lyapunov exponent) for $\rho > \rho_c$; (iii) regular oscillation between one attractor and its negative for $\rho > \rho_c$; (iv) onset of chaos again as $\rho$ is increased; and (v) rapid periodic oscillations between one pattern and its negative when all the nodes are synchronized. Cases (ii) and (iv) are examples of instability-induced switching: the activity path visits the neighborhood of all the attractors.

Fig. 3. Mean firing rates versus time (bottom graphs) and corresponding phase-space trajectories (top) for the indicated values of $\rho$, for three stored patterns, $\xi^1$, $\xi^2$ and $\xi^3$, $N = 1600$, $\Phi = 0.5$, and $\beta = 167$, for which $\rho_c = 0.38$.

In order to make this interesting behavior more explicit, we show in Fig. 3 time series and phase-space trajectories of the mean firing rate, $m = \frac{1}{2N} \sum_i (1 + \sigma_i)$, in a system with three stored patterns. In the case $\rho = 0.15$, which is below $\rho_c$, the system activity only visits one of the patterns; the choice depends on the initial condition. However, for $\rho = 0.433 > \rho_c$ in the second graph of Fig. 3, the three attractors are visited; the probability of jumping between two specific attractors depends on their mutual correlation. The third graph illustrates how switching tends to become homogeneous (all the stored patterns are visited with equal probability, and the activity stays the same amount of time in the neighborhood of each attractor) as $\rho$ is increased, until the system is finally trapped in a simple cycle, as in the fourth graph of Fig. 3.
3. Conclusion
The attractor stability dramatically depends on both the distribution of weights $w_{ij}$ and the synchronization parameter $\rho$. The latter is relevant only for choices of the connecting weights which induce a special susceptibility of the network to external stimuli. This is implemented in our example by means of fast activity-dependent synaptic fluctuations that induce synaptic depression. Otherwise, e.g., if the weights are fixed, even heterogeneously as in a Hopfield–Hebb network, $\rho$ is irrelevant. In our case, there is a kind of dynamic association, i.e. the net either goes to one attractor or else, for $\rho > \rho_c$, visits the possible attractors. The visits may abruptly become chaotic. Besides synchronization of a minimum of nodes, this requires careful tuning of $\rho$; a complex situation, as illustrated in Fig. 1, makes it difficult to predict the result of slight changes of $\rho$. Switching phenomena, i.e. visiting the attractors, do not require chaos. However, chaotic itinerancy allows for a more efficient search of the attractor space in a way that was believed to hold in interesting cases only under a critical condition [21]. Our model illustrates a mechanism which may make chaos extremely beneficial. The expectation [28–31] that the instability inherent to chaos facilitates moving to any pattern at any time is confirmed. In particular, our model behavior reminds one of some observations concerning the odor response of the (projection) neurons in the locust antennal lobe [20]. Also interesting is the fact that the model exhibits states of attention and efficient adaptation to a changing environment and, more importantly, classification and family discrimination. Finally, we mention that studying the complex model behavior for $\rho > \rho_c$ could be relevant to controlling chaos in various situations and to determining efficient (parallel) computational strategies, e.g., using block-dynamics, block-sequential, and associated algorithms [32,33].
Acknowledgements
We thank I. Erchova, P.L. Garrido and H.J. Kappen for very useful comments, and acknowledge financial support from FEDER–MEC project FIS2005-00791, JA, and EPSRC–COLAMN project EP/CO 10841/1.
References
[1] G. Manganaro, et al., Cellular Neural Networks, Springer, Berlin, 1999.
[2] J. Marro, R. Dickman, Nonequilibrium Phase Transitions in Lattice Models, Cambridge University Press, Cambridge, 1999.
[3] M. Hänggi, G.S. Moschytz, Cellular Neural Networks, Kluwer, Boston, 2000.
[4] M.E.J. Newman, Proc. Natl. Acad. Sci. USA 98 (2001) 404.
[5] D. Garlaschelli, et al., Nature 423 (2003) 165.
[6] A. Slavova, Cellular Neural Networks: Dynamics and Modelling, Kluwer, Dordrecht, 2003.
[7] M.E.J. Newman, SIAM Rev. 45 (2003) 167.
[8] E. Almaas, et al., Nature 427 (2004) 839.
[9] L.F. Abbott, W.G. Regehr, Nature 431 (2004) 796.
[10] D. Garlaschelli, et al., Physica A 350 (2005) 491.
[11] T. Antal, P.L. Krapivsky, Phys. Rev. E 71 (2005) 026103.
[12] A. Barrat, R. Pastor-Satorras, Phys. Rev. E 71 (2005) 036127.
[13] P.L. Garrido, et al. (Eds.), Modeling Cooperative Behavior in the Social Sciences, AIP Conf. Proc., vol. 779, American Institute of Physics, NY, 2005.
[14] D. Armbruster, et al., Networks of Interacting Machines, World Sci., Singapore, 2005.
[15] A.V. Egorov, et al., Nature 420 (2002) 173.
[16] J.M. Cortés, et al., Neural Comp. 18 (2006) 614.
[17] J.M. Cortés, et al., Biosystems 87 (2006) 186.
[18] J.J. Torres, et al., Neural Comp. (2007), in press.
[19] J. Marro, et al., in press.
[20] O. Mazor, G. Laurent, Neuron 48 (2005) 661.
[21] D.R. Chialvo, Nature Phys. 2 (2006) 301.
[22] D.J. Amit, Modeling Brain Function, Cambridge University Press, Cambridge, 1989.
[23] A.V.M. Herz, C.M. Marcus, Phys. Rev. E 47 (1993) 2155.
[24] M.V. Tsodyks, et al., Neural Comp. 10 (1998) 821.
[25] A.M. Thomson, et al., Philos. Trans. R. Soc. Lond. B Biol. Sci. 357 (2002) 1781.
[26] L. Pantic, et al., Neural Comp. 14 (2002) 2903.
[27] D. Bibitchkov, et al., Network: Comp. Neural Syst. 13 (2002) 115.
[28] W.J. Freeman, Biol. Cybern. 56 (1987) 139.
[29] D. Hansel, H. Sompolinsky, Phys. Rev. Lett. 68 (1992) 718; J. Comput. Neurosci. 3 (1996) 7.
[30] G. Laurent, et al., Annu. Rev. Neurosci. 24 (2001) 263.
[31] P. Ashwin, M. Timme, Nature 436 (2005) 36.
[32] F. Martinelli, Lecture Notes in Math. 1717 (2000) 93.
[33] D. Randall, P. Tetali, J. Math. Phys. 41 (2000) 1598.
