Cooperative Content Retrieval in Nomadic Sensor Networks

Ioannis Koukoutsidis, Eva Jaho, Ioannis Stavrakakis
Dept. of Informatics and Telecommunications
National & Kapodistrian University of Athens
Ilissia, 157 84 Athens, Greece
Email: {i.koukoutsidis, ejaho, ioannis}

Abstract—A nomadic sensor network consists of: a) sensor nodes, fixed at some points, which collect information about states or variables of the environment, and b) mobile nodes, which collect and disseminate this information. Mobile nodes usually belong to different classes, and are thus interested in different subsets of sensor node information. In such networks, dissemination of information content at smaller cost can be achieved if mobile nodes are cooperative and collect and carry information not only in their own interest, but also in the interest of other mobile nodes. A specific modeling scenario is considered in this paper, where the network has the form of a graph; sensor nodes are located on the vertices of the graph and mobile nodes move along the edges according to a random waypoint model. We present a game-theoretic analysis to find conditions under which a cooperative equilibrium can be sustained.

I. INTRODUCTION

A nomadic sensor network is a recently introduced networking paradigm [1]. It consists of: a) simple, tiny sensor devices (T-nodes), fixed at some points, whose purpose is to collect information about states or variables of the environment, and b) more complex mobile devices carried by users (U-nodes), which collect and disseminate this information. Compared to traditional sensor networks, where communication to end-users is realized in a multi-hop fashion, this paradigm exploits user mobility to conserve limited sensor energy, prolonging the lifetime of the network and making it more cost-efficient.

A U-node can collect sensor data either from the source T-node or from an encountered U-node that has previously acquired the data.
Collecting data from other U-nodes may incur a smaller access cost (in time or energy), especially if the source T-node is far away. Exploiting mobility in this way to reduce content retrieval costs requires each U-node to show some kind of cooperative behavior. However, U-nodes are highly autonomous and intelligent devices; different U-nodes may belong to different classes, and thus be interested in different sensor node information. Acquiring unwanted information incurs a cost in time or energy, so a U-node would be cooperative and collect information of potential interest to other U-nodes only in anticipation of the same behavior by other U-nodes. The purpose of this paper is to identify requirements and conditions under which cooperative behavior may emerge in such a network. We present a game-theoretic analysis to find conditions under which a cooperative equilibrium exists.

(This work has been supported by the European Commission under the EU project BIONETS (IST-FET-SAC-FP6-027748) and the NoE CONTENT (IST-FP6-0384239).)

Fig. 1. Network graph

The general modeling scenario that we study is as follows. We consider a graph model of the network (see Fig. 1). T-nodes are located at the vertices of the graph and U-nodes move randomly along the edges of the graph, collecting information when they reach a T-node or upon meeting another U-node somewhere on the graph. Nodes move according to a Markovian waypoint model on the graph, with a constant speed v. Waypoints are set to be the vertices of the graph (T-nodes). No pause times at waypoints are considered. Transitions from one waypoint to the next are governed by a Markov chain, i.e., a U-node moves from waypoint T_i to waypoint T_j with probability p(T_i, T_j).
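As an illustration of this mobility model, the following sketch simulates a Markovian waypoint walk on a small graph. The graph, its leg lengths d, and the transition probabilities p are made-up stand-ins loosely in the spirit of Fig. 1; the paper itself specifies no implementation, so this is only a minimal sketch under those assumptions.

```python
import random

# Illustrative 4-vertex graph; the leg lengths d and transition
# probabilities p below are hypothetical values, not taken from the paper.
d = {("T1", "T2"): 5.0, ("T2", "T3"): 4.0, ("T3", "T4"): 6.0, ("T2", "T4"): 3.0}
d.update({(b, a): length for (a, b), length in list(d.items())})  # legs are bidirectional

p = {  # p[Ti][Tj]: probability that a U-node at waypoint Ti heads to Tj
    "T1": {"T2": 1.0},
    "T2": {"T1": 0.3, "T3": 0.4, "T4": 0.3},
    "T3": {"T2": 0.6, "T4": 0.4},
    "T4": {"T2": 0.5, "T3": 0.5},
}

def walk(start, steps, v=1.0, seed=0):
    """Markovian waypoint walk: waypoints visited and total travel time.

    The U-node moves at constant speed v with no pauses at waypoints,
    so traversing leg (Ti, Tj) takes d[(Ti, Tj)] / v time units.
    """
    rng = random.Random(seed)
    node, t, visits = start, 0.0, [start]
    for _ in range(steps):
        nxt = rng.choices(list(p[node]), weights=list(p[node].values()))[0]
        t += d[(node, nxt)] / v
        node = nxt
        visits.append(node)
    return visits, t
```

Successive waypoints depend only on the current waypoint, which is what makes the embedded chain of visited T-nodes Markovian.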
This mobility model is a special case of the so-called "space graph" model in [2].

In order to simplify the analysis, instead of studying the network as a whole, we decompose the problem by studying possibilities of cooperation on each leg (edge) of the graph. We say that two U-nodes coming from opposite directions and meeting somewhere on a leg (T_i, T_j) are cooperative on (T_i, T_j) if and only if each one copies the content of its origin T-node (e.g., in Fig. 1, U_1 would copy T_1 and U_2 would copy T_2), even if they are not interested in it. We shall find conditions under which the following strategy of each U-node results in an equilibrium. This strategy is made up of two actions: initially, a U-node is cooperative and copies unwanted content; however, if it meets a selfish U-node somewhere on a leg, it will only transmit its acquired content with a certain probability. This strategy may easily be applied, provided that upon meeting each other, U-nodes exchange messages containing the list of information objects stored in their memory.

The equilibrium conditions depend mainly on the probability of meeting other U-nodes on the leg. Under the setting we discuss in this paper, satisfying the equilibrium conditions for each leg and between each pair of U-nodes means that a cooperative equilibrium is achieved for the whole network. Note that in making this decomposition, we are implicitly assuming that U-nodes base their decision only on their interest in content that is within a distance of one leg. The analysis becomes more complicated when this "decision horizon" is more than one, and is not attempted here. (For example, in Fig. 1, if node U_1, which starts from T_1, is interested in both T_2 and T_3 and is about to follow the path {T_1, T_2, T_3}, it should consider the probability of meeting U-nodes that previously acquired data from T_2 or T_3 on all segments of this path.)

Concerning the knowledge each U-node has, the following assumptions are made.
Each U-node knows the topology of the network, the number of other U-nodes, and the distances between each pair of T-nodes. U-nodes are also aware of the information content at each T-node. Furthermore, U-nodes know that the other U-nodes also move according to the random waypoint model on the graph. On the other hand, the (instantaneous) rate at which the information content at each T-node is updated is a function of time unknown to the U-nodes. (Thus a U-node may return to an already visited T-node to get updated information.) In addition, each U-node does not know the interests of other U-nodes in sensor data and has no memory of previous encounters with them.

Several hypotheses that facilitate the analysis are also made. First, it is assumed that U-nodes are homogeneous devices: they have the same processing and communication costs and the same movement parameters. Secondly, U-nodes have infinite (in practice, this means sufficiently large) storage space; thus they do not have to consider possible replacement policies (for example, discarding their least valuable content) and the respective costs. Thirdly, each T-node generates a single type of information content, and information content does not depreciate in time or space. If it did, a U-node would have to include in its decision the (possibly subjective) value of each information object at the moment of its acquisition. Examples of non-depreciating content include statistical samples (e.g., measurements of physical quantities). On the other hand, information with limited temporal or spatial scope, or software modules that are subject to updates, is considered to depreciate in time or space. We also consider that U-nodes make decisions in their own best interest but are not malicious, i.e., they do not perform actions from which they have no material gain solely to hurt others.
Finally, we assume that the data exchanged between U-nodes, or between a U-node and a T-node, consist only of a few bits; thus they are transmitted within an infinitesimal time interval, which is not considered in the analysis.

II. ANALYTICAL MODEL

In the analysis that follows, we consider a number N of U-nodes and calculate the expected cost for a U-node to follow a certain strategy on an arbitrary leg (T_i, T_j). Because of the symmetry of our model, the cost is the same for all U-nodes. A strategy consists of a sequence of actions concerning the decisions to collect, carry, and transmit information that is not of interest to a U-node. Actions are either of a cooperative or a selfish nature. A strategy is itself called cooperative if all the actions it is composed of are cooperative.

We consider that the game is played between a U-node starting from the origin sensor T_i and the N − 1 other U-nodes it may meet before reaching the destination sensor T_j. Our working hypothesis is that a U-node is not interested in the content of the origin sensor, but only in that of the destination. A priori, it assumes the same for the other U-nodes. This produces conditions for cooperation in a worst-case scenario, since a U-node that is also interested in the content of the origin sensor has an even greater incentive to cooperate. The analogous assumption for the other U-nodes can be partially justified by the lack of any information about the identities of the encountered nodes and their interests in sensor information.

Costs can be expressed in time or energy units. Here we interpret the cost as the delay to retrieve information content. The length of a leg (T_i, T_j) is denoted by d(T_i, T_j). The communication and processing cost for a U-node to acquire content from a T-node and then transmit it to a U-node is a constant c, translated into time units. Consistently with our assumption of infinite storage, we do not consider any cost for carrying the information.
Finally, so as not to complicate the analysis, the transmission ranges of both U-nodes and T-nodes are set to zero.

Consider the process X(t) of the position of the U-node on the graph at each time instant t. We find its stationary distribution as follows. First consider the embedded Markov chain of waypoints visited sequentially by a U-node. Under the stationary distribution of this chain, the fraction of transitions to waypoint T_i is denoted by π_i. Then, given the constant velocity, the probability that the U-node is on a given segment of length x on leg (T_i, T_j), in the direction from T_i to T_j, is

  \frac{\pi_i \, p(T_i,T_j) \, x}{\sum_{T_i} \sum_{T_j \ne T_i} \pi_i \, p(T_i,T_j) \, d(T_i,T_j)} .    (1)

That is, it is the fraction of time the U-node spends on the segment of length x while moving from T_i to T_j.

Suppose that the U-node, hereafter called U_i, is at waypoint T_i at t = 0 and decides to go towards waypoint T_j. Confine the strategy of each player to take values in the set S = {C, S} (C: cooperative, S: selfish). By choosing strategy C, U_i copies, carries, and transmits the content of T_i to an encountered U-node that is interested in it, whereas by following strategy S it ignores it. In order to calculate the expected cost of following a certain strategy, U_i should estimate the probability of meeting another cooperative U-node coming from T_j towards T_i at a certain distance x from T_i. Suppose there are k other (k < N) cooperative U-nodes. To meet another U-node within a distance x, the latter must be at a distance of at most 2x at t = 0 and be headed towards T_i. Since the move processes of the U-nodes are mutually independent as well as jointly stationary, the instant t = 0 is an arbitrary instant at which U_i at T_i observes the positions of the other U-nodes.
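The stationary quantities entering (1) can be computed numerically. Below is a minimal sketch: π is obtained by power iteration on an illustrative embedded chain (which must be irreducible and aperiodic for the iteration to converge), and prob_on_segment evaluates the ratio in (1). The example chain and leg lengths are assumptions, not taken from the paper.

```python
# Sketch of the quantities entering Eq. (1). P is an illustrative embedded
# waypoint chain; d holds the leg lengths d(Ti, Tj).
def stationary(P, iters=5000):
    """Stationary visit fractions pi_i of the embedded waypoint chain."""
    nodes = list(P)
    pi = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):  # power iteration: pi <- pi * P
        pi = {n: sum(pi[m] * P[m].get(n, 0.0) for m in nodes) for n in nodes}
    return pi

def prob_on_segment(pi, P, d, i, j, x):
    """Eq. (1): stationary probability of being on a length-x segment of
    leg (Ti, Tj), traversed in the direction Ti -> Tj."""
    denom = sum(pi[a] * p_ab * d[(a, b)] for a in P for b, p_ab in P[a].items())
    return pi[i] * P[i][j] * x / denom
```

Taking x equal to the full leg length and summing over all directed legs recovers probability one, since the U-node is always on exactly one directed leg.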
Therefore it observes the other U-nodes in their stationary distribution. Consequently, if d(T_i,T_j) ≥ 2x, the probability of a meeting with at least one cooperative U-node within a distance x is

  F_k(x) = 1 - \left(1 - \frac{\pi_j \, p(T_j,T_i) \, 2x}{\sum_{T_i} \sum_{T_j \ne T_i} \pi_i \, p(T_i,T_j) \, d(T_i,T_j)}\right)^{k} .    (2)

If d(T_i,T_j) < 2x, we must also include the event that another U-node is on a leg or direction different from (T_j,T_i) at t = 0 but can meet with U_i in time less than x/v. Consider all such legs m and the starting points T_m of the U-nodes on these legs. Let {T_m, ..., T_j} denote all the paths that may be followed to reach T_j before the meeting, and for which p(T_m, ·) ··· p(·, T_j) > 0, where · denotes intermediate states. Then this meeting probability equals

  1 - \left(1 - \frac{\sum_m \pi_m \, p(T_m,\cdot) \cdots p(\cdot,T_j) \, p(T_j,T_i) \, 2x}{\sum_{T_i} \sum_{T_j \ne T_i} \pi_i \, p(T_i,T_j) \, d(T_i,T_j)}\right)^{k} ,

where the summation in the numerator is over all possible paths that can be followed. This probability again equals (2), since \sum_m \pi_m \, p(T_m,\cdot) \cdots p(\cdot,T_j) = \pi_j by the balance equations of the embedded Markov chain.

Hence the distribution function F_k(x) gives the probability that a meeting with another cooperative U-node takes place at distance ≤ x, for any x such that 0 < x < d(T_i,T_j).

Suppose that there are only two nodes in the network, U_i and U_j, and that U_i meets U_j at distance x from T_i (x < d(T_i,T_j)). If they follow strategies s_i, s_j respectively (s_i, s_j ∈ S), then the cost of U_i, denoted as a function C_i^{(T_i,T_j)}(s_i, s_j, x), where C_i^{(T_i,T_j)} : S × S × [0, d(T_i,T_j)] → R, is

  C_i^{(T_i,T_j)}(C, C, x) = c + x/v
  C_i^{(T_i,T_j)}(S, C, x) = x/v
  C_i^{(T_i,T_j)}(C, S, x) = c + d(T_i,T_j)/v
  C_i^{(T_i,T_j)}(S, S, x) = d(T_i,T_j)/v .    (3)
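Noting that the ratio inside (2) equals αx/d(T_i,T_j), where α(T_j,T_i) is the full-leg per-node meeting probability defined in (4) below, the meeting distribution and the cost table (3) translate directly into code. The following is an illustrative sketch (function names are ours; "C" and "S" label the two strategies as in the text):

```python
# Sketch of Eqs. (2)-(3). alpha is the probability of meeting one specific
# cooperative U-node anywhere on the leg (cf. Eq. (4)); the ratio in
# Eq. (2) then equals alpha * x / d for a meeting within distance x.
def meet_prob(k, x, alpha, d):
    """Eq. (2): P(meet at least one of k cooperative U-nodes within x)."""
    return 1.0 - (1.0 - alpha * x / d) ** k

def pair_cost(s_i, s_j, x, c, d, v):
    """Eq. (3): cost of U_i when it plays s_i, its peer plays s_j, and the
    meeting happens at distance x from Ti ("C" cooperative, "S" selfish)."""
    overhead = c if s_i == "C" else 0.0       # copy/carry/transmit cost
    travel = x / v if s_j == "C" else d / v   # shortcut only if the peer shares
    return overhead + travel
```

The table makes the dilemma visible: U_i's own choice only adds the overhead c, while its travel term depends entirely on the peer's choice.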
We denote by α(T_j,T_i) the probability that U_i meets a specific other U-node before reaching T_j, i.e.,

  \alpha(T_j,T_i) \triangleq \frac{2 \, \pi_j \, p(T_j,T_i) \, d(T_i,T_j)}{\sum_{T_i} \sum_{T_j \ne T_i} \pi_i \, p(T_i,T_j) \, d(T_i,T_j)} .    (4)

As can be seen, this meeting probability increases with the length of the leg. Additionally, U_i has a higher meeting probability for greater π_j p(T_j,T_i), that is, if the destination node T_j is more frequently visited, or if U-nodes at T_j have an increased probability of heading towards T_i.

We will derive the expected cost for U_i of taking action s_i when k other U-nodes are cooperative (0 ≤ k ≤ N−1). We denote this as C_i^{(T_i,T_j)}(s_i | k). For 1 ≤ k ≤ N−1, we have

  C_i^{(T_i,T_j)}(s_i | k) = \int_0^{d(T_i,T_j)} C_i^{(T_i,T_j)}(s_i, C, x) \, dF_k(x) + \left(1 - \alpha(T_j,T_i)\right)^{k} C_i^{(T_i,T_j)}(s_i, C, d(T_i,T_j)) .    (5)

We arrive at the following expressions for the different combinations of followed strategies:

  C_i^{(T_i,T_j)}(C | k) = c + \frac{d(T_i,T_j)}{v} \cdot \frac{1 - (1 - \alpha(T_j,T_i))^{k+1}}{(k+1)\,\alpha(T_j,T_i)}
  C_i^{(T_i,T_j)}(S | k) = \frac{d(T_i,T_j)}{v} \cdot \frac{1 - (1 - \alpha(T_j,T_i))^{k+1}}{(k+1)\,\alpha(T_j,T_i)}
  C_i^{(T_i,T_j)}(C | 0) = c + d(T_i,T_j)/v
  C_i^{(T_i,T_j)}(S | 0) = d(T_i,T_j)/v .    (6)

III. GAME-THEORETIC ANALYSIS

We observe from (6) that C_i^{(T_i,T_j)}(C | k) > C_i^{(T_i,T_j)}(S | k) for all i, k. Furthermore, it can be shown that C_i^{(T_i,T_j)}(C | k) and C_i^{(T_i,T_j)}(S | k) are decreasing functions of k. The game is a Bayesian analog of the N-person prisoner's dilemma (see [3]), since U_i does not know the number or identity of other U-nodes it may meet on (T_i,T_j).
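The closed forms in (6) follow by integrating the cost table against F_k: the expected distance to the first of k possible meetings, capped at the leg length d, is d(1 − (1 − α)^{k+1})/((k+1)α). A small numeric sketch of this expected-cost computation (parameter values illustrative, function name ours):

```python
# Sketch of Eq. (6): expected cost of U_i on a leg of length d when k of
# the other U-nodes are cooperative. Integrating min(meeting distance, d)
# against F_k of Eq. (2) gives E = d * (1-(1-a)**(k+1)) / ((k+1)*a).
def expected_cost(s_i, k, alpha, c, d, v):
    overhead = c if s_i == "C" else 0.0
    if k == 0:
        return overhead + d / v           # nobody to meet: walk the whole leg
    travel = (d / v) * (1.0 - (1.0 - alpha) ** (k + 1)) / ((k + 1) * alpha)
    return overhead + travel
```

As (6) implies, the cooperative and selfish expected costs differ by the constant overhead c, and both decrease as k grows.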
Viewed as a noncooperative game, it is evident from the expressions in (6) that there exists only one equilibrium, in which every node is selfish, since S strongly dominates C for every player. However, this may not be the best solution; player U_i can benefit from cooperating on (T_i,T_j) when k other U-nodes are cooperative, if

  C_i^{(T_i,T_j)}(C | k) < C_i^{(T_i,T_j)}(S | 0) .    (7)

That is, if the cost for U_i when k other U-nodes are cooperative is smaller than its cost when all U-nodes are selfish. This condition also complies with individual rationality of U_i, since C_i^{(T_i,T_j)}(S | 0) is the minimax value U_i can guarantee for itself. From (6), this inequality is satisfied if

  1 - \frac{1 - (1 - \alpha(T_j,T_i))^{k+1}}{(k+1)\,\alpha(T_j,T_i)} > \frac{cv}{d(T_i,T_j)} .    (8)

It must always hold that c < d(T_i,T_j)/v; that is, our initial assumption must be that the cost (expressed in time units) for a U-node to acquire and transmit unwanted content is smaller than the time to reach T_j starting from T_i.

Fig. 2. Approximate threshold values of the number of other cooperative nodes k above which cooperation on a directed leg is better for U_i than full selfishness, for different values of α and d/v, c. (Panel (a): c = 3; panel (b): d/v = 50.)

We find an approximate condition for the above inequality to be satisfied. The Taylor polynomial of (1 − α)^k at α = 0 is, up to a second-order approximation, 1 − kα + k(k−1)α²/2 (k > 1). Substituting this approximate expression into inequality (8), we
get the condition

  k > \frac{2cv}{d(T_i,T_j)\,\alpha(T_j,T_i)} - \frac{2\,(1 + \alpha(T_j,T_i))}{\alpha(T_j,T_i)\,(k+1)} + 2 ,

which is satisfied when

  k > \frac{2cv}{d(T_i,T_j)\,\alpha(T_j,T_i)} + 2 .    (9)

Based on (9), Fig. 2 shows approximate threshold values of the number of other cooperative nodes k above which cooperation for U_i on (T_i,T_j) is more beneficial than the case where all U-nodes are selfish. These confirm that, from the point of view of U_i, a smaller number of cooperative nodes is required on long-distance routes or when the destination node T_j is visited more often. On the other hand, a higher number is required when U-nodes are moving at high speed or if the cost c is higher.

The situation in which each U-node is cooperative in both directions of a leg (T_i,T_j) will be identified as "full cooperation" on (T_i,T_j). The inverse situation, in which each U-node is selfish, will be called "full selfishness". When full cooperation is achieved as a strategic equilibrium on a leg of the graph, we will say that a cooperative equilibrium exists on this leg. If this can be achieved on all legs, then a cooperative equilibrium exists in the whole network. We next proceed to find a strategy for each U-node, and conditions under which a cooperative equilibrium can be achieved.

First, note that even if full cooperation is beneficial for the U-nodes in the network, this is not enough to sustain an equilibrium. For an equilibrium to exist, there must either be some punishment for selfish nodes or some form of contract, in which all U-nodes would agree to be cooperative, threatening to be selfish if such a contract is not signed by everyone [4].
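The approximate threshold (9) is straightforward to evaluate. A sketch (the helper function is ours, not from the paper; it returns the least integer k strictly exceeding the right-hand side of (9)):

```python
import math

# Sketch of Eq. (9): smallest integer number k of other cooperative
# U-nodes for which cooperating on the directed leg beats full
# selfishness, under the second-order Taylor approximation.
def k_threshold(c, v, d, alpha):
    return math.floor(2.0 * c * v / (d * alpha) + 2.0) + 1  # least k with k > 2cv/(d*alpha) + 2
```

Consistent with the trends reported around Fig. 2, the threshold falls as d/v or α grows and rises with the acquisition cost c.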
Given that arriving at such a contract-based agreement is difficult in an unstructured network with autonomous nodes, we consider the following scheme, which is easily applicable. A requirement of the scheme is that U-nodes, upon meeting each other, first exchange the lists of information objects in their memory. These lists contain metadata regarding the type of information and the source T-node. Then each U-node executes this strategy: initially it is generous and collects and carries unwanted content; however, if on a certain leg of the network it meets a selfish U-node, it will only transmit this content with a probability p, called the cooperation probability (p < 1). (If a U-node does not communicate its list of objects, it can be considered selfish and the same strategy applies.)

We proceed to write a condition under which this strategy is preferable for U_i on (T_i,T_j), and thus can lead to an equilibrium. Given that there are \binom{N-1}{k} different combinations of U-nodes in which exactly k other U-nodes are cooperative, this condition is

  C_i^{(T_i,T_j)}(C \mid N-1) \;\le\; \sum_{k=0}^{N-1} p^k (1-p)^{N-k-1} \binom{N-1}{k} \, C_i^{(T_i,T_j)}(S \mid k) .    (10)

That is, the expected cost for U_i when all U-nodes are cooperative must be smaller than or equal to the expected cost when U_i is selfish and each of the k other U-nodes is cooperative with probability p, for k = 0, ..., N−1.
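Condition (10) can also be checked numerically with the closed-form expected costs, using E[min(meeting distance, d)] = d(1 − (1 − α)^{k+1})/((k+1)α) for the travel term. The following sketch (parameter values illustrative, helper names ours) returns the smallest N for which the condition holds:

```python
from math import comb

def coop_cost(k, alpha, c, d, v):
    # Expected cost when cooperating and k others cooperate (cf. Eq. (6)):
    # overhead c plus expected travel time to the first meeting, capped at d.
    return c + (d / v) * (1.0 - (1.0 - alpha) ** (k + 1)) / ((k + 1) * alpha)

def selfish_cost(k, alpha, c, d, v):
    return coop_cost(k, alpha, c, d, v) - c  # same travel term, no overhead c

def min_coop_N(p, alpha, c, d, v, n_max=500):
    """Smallest N satisfying condition (10): full cooperation is no worse
    than defecting against peers that each stay cooperative w.p. p."""
    for N in range(2, n_max + 1):
        lhs = coop_cost(N - 1, alpha, c, d, v)
        rhs = sum(comb(N - 1, k) * p**k * (1.0 - p) ** (N - 1 - k)
                  * selfish_cost(k, alpha, c, d, v) for k in range(N))
        if lhs <= rhs:
            return N
    return None
```

In line with the later discussion of Fig. 3, the returned threshold grows with the acquisition cost c for fixed α, p, and d/v.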
It is evident that (7) is a special case of this condition, with p = 0 and k = 0 (admitting 0^0 = 1).

Substituting from (6), we have that (in the following we omit the arguments of d(·) and α(·) for notational convenience)

  c + \frac{d}{v} \cdot \frac{1 - (1-\alpha)^{N}}{N\alpha} \;\le\; \frac{d}{v} \left[ (1-p)^{N-1} + \sum_{k=1}^{N-1} p^k (1-p)^{N-1-k} \binom{N-1}{k} \, \frac{1 - (1-\alpha)^{k+1}}{(k+1)\,\alpha} \right] .    (11)

The right-hand side of this inequality can be evaluated in closed form using the binomial identities

  \sum_{k=1}^{N-1} \left(\frac{p}{1-p}\right)^{k} \binom{N-1}{k} \frac{1}{k+1} = \sum_{k=1}^{N-1} \left(\frac{p}{1-p}\right)^{k} \frac{1}{N} \binom{N}{k+1} = \frac{1}{Np(1-p)^{N-1}} - \frac{1-p}{Np} - 1

and

  \sum_{k=1}^{N-1} \left(\frac{p(1-\alpha)}{1-p}\right)^{k} \binom{N-1}{k} = \left(\frac{1-\alpha p}{1-p}\right)^{N-1} - 1 .

Fig. 3. Graphical illustration of the cooperative equilibrium conditions, for different values of the cooperation probability p (0.2, 0.3, 0.5, and 0.9 in panels (a) through (d)) and model parameters: the crossover points of f_rhs with the diagonal N show approximate threshold values of N above which a U-node is cooperative on a directed leg in our model.

Therefore (11)
becomes

  c + \frac{d}{v} \cdot \frac{1 - (1-\alpha)^{N}}{N\alpha} \;\le\; \frac{d}{v} (1-p)^{N-1} \left[ 1 + \frac{1}{\alpha} \left( \frac{1}{Np(1-p)^{N-1}} - \frac{1-p}{Np} - \left(\frac{1-\alpha p}{1-p}\right)^{N-1} \right) \right] .

Applying the second-order Taylor polynomial approximation to (1 − α)^N at α = 0, we finally obtain the condition

  N \;\ge\; 1 + 2\alpha + \frac{2(1-\alpha)}{\alpha} \left[ \frac{1}{\alpha N} - \frac{(1-p)^{N-1} - 1}{\alpha N p} \left(1 - (1-p)^{N}\right) + (1-\alpha p)^{N-1} \right] + \frac{cv}{d} .    (12)

In Fig. 3 we show graphically the number of nodes in the network required to satisfy this condition, for different values of the meeting probability α, the cost c, and d/v. (We draw the rhs of (12), called f_rhs, and the diagonal N, called f_lhs.) It can be deduced from the graphs that a cooperative equilibrium on a leg is easier to achieve (i.e., a smaller number of U-nodes is needed) for smaller values of c and larger values of d/v. The behavior with respect to α is less intuitive: when the cooperation probability of the other U-nodes is small, a higher meeting probability actually leaves a U-node with less incentive to cooperate, so it appears more selfish; whereas if the cooperation probability of the other U-nodes is high, a higher meeting probability leads a U-node to actually be cooperative. Finally, as the probability of cooperation decreases, a smaller number N of nodes in the network is required for an equilibrium to be achieved. This result is to be expected, since a cooperative U-node "punishes" a selfish U-node more severely by handing over its acquired content with a smaller probability; therefore U-nodes refrain from being selfish.

Note that in the model developed in this paper we do not have to consider a "stricter" condition for a cooperative equilibrium to exist (i.e., a condition that would call for a higher N).
This is because we have assumed from the beginning that each U-node believes, a priori, that all other U-nodes are interested in the content it collects; thus we have excluded the case where a U-node would not collect unwanted information because it might think that other U-nodes would also not be interested in it (and hence might also not collect data at their respective origin points). Therefore, if such cooperative equilibrium conditions are satisfied on all directed legs simultaneously, a cooperative equilibrium exists in the whole network.

IV. RELATED WORK

Previous applications of game-theoretic methods in examining cooperation between mobile nodes have focused mainly on traditional ad-hoc networks, where cooperation consists of