Packet Caches on Routers: The Implications of Universal Redundant Traffic Elimination

Ashok Anand*, Archit Gupta*, Aditya Akella*, Srinivasan Seshan† and Scott Shenker‡
*UW-Madison, †CMU, ‡UC-Berkeley
{ashok,archit,akella}@cs.wisc.edu, srini@cs.cmu.edu, shenker@cs.berkeley.edu

ABSTRACT

Many past systems have explored how to eliminate redundant transfers from network links and improve network efficiency. Several of these systems operate at the application layer, while the more recent systems operate on individual packets. A common aspect of these systems is that they apply to localized settings, e.g. at stub network access links. In this paper, we explore the benefits of deploying packet-level redundant content elimination as a universal primitive on all Internet routers. Such a universal deployment would immediately reduce link loads everywhere. However, we argue that far more significant network-wide benefits can be derived by redesigning network routing protocols to leverage the universal deployment. We develop "redundancy-aware" intra- and inter-domain routing algorithms and show that they enable better traffic engineering, reduce link usage costs, and enhance ISPs' responsiveness to traffic variations. In particular, employing redundancy elimination approaches across redundancy-aware routes can lower intra- and inter-domain link loads by 10-50%. We also address key challenges that may hinder implementation of redundancy elimination on fast routers. Our current software router implementation can run at OC48 speeds.

Categories and Subject Descriptors: C.2.2 [Computer Communication Networks]: Routing Protocols
General Terms: Algorithms, Design, Measurement.
Keywords: Traffic Redundancy, Routing, Traffic Engineering.

1. INTRODUCTION

The basic property that some of the content on the Internet is highly popular results in some data being repeatedly transferred across the network.
A number of existing systems attempt to improve the efficiency of the network by eliminating these redundant transfers. There is wide-spread agreement that these approaches offer significant benefits in practice.

A common property of existing systems is that they typically operate in a localized fashion by eliminating the redundant transfers either on the link, or of the application, directly connected to the system. The goal of this paper is to explore some of the implications of network-wide deployment of redundancy elimination technology.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SIGCOMM'08, August 17-22, 2008, Seattle, Washington, USA. Copyright 2008 ACM 978-1-60558-175-0/08/08 ...$5.00.

A majority of the redundancy-elimination systems have employed application-layer object caching to eliminate redundant data transfers. For example, Web proxy caches are an application-layer approach to reduce the bandwidth usage of static Web content within ISPs and at large stub networks. Numerous studies [25, 11] have explored the effectiveness of such designs. In recent years, a number of systems, both commercial [3, 4, 1, 2] and non-commercial [24], have been developed which operate below the application layer and attempt to eliminate any redundant strings of bytes that appear on the network. Such systems enjoy two benefits. First, they are not tied to a single application and can eliminate all forms of redundant information.
Second, past evaluations have shown that more redundant information can be removed by focusing at the packet and byte levels than at the object level.

In this paper, we consider the benefits of deploying packet-level redundant content elimination as a primitive IP-layer service across the entire Internet. We start with the assumption that all future routers will have the ability to strip redundant content from network packets on the fly, by comparing packet contents against those recently-forwarded packets which are stored in a cache. Routers immediately downstream can reconstruct full packets from their own cache. Applying this technology at every link would provide immediate performance benefits by reducing the overall load on the network. It also enables new functionality: for example, it simplifies application layer multicast by eliminating the need to be careful about duplicate transmissions.

However, universal redundancy elimination can yield even greater benefits if existing protocols are redesigned with redundancy elimination in mind. In this paper, we describe how wide-spread deployment of redundancy elimination can be leveraged by ISPs to change the way they compute routes, giving rise to new and improved techniques for managing network resources. We analyze the benefits of selecting routes which maximize the opportunity to eliminate redundant content, versus routes which minimize hop count or other cost functions; an example is shown in Figure 1.

We consider such "redundancy-aware" modifications to both intra- and inter-domain routing. In our proposed approaches, ISPs first compute estimates of how often content is replicated across different destinations (we call these estimates redundancy profiles) and use these estimates in computing forwarding paths for their packets. We describe how ISP routers can compute redundancy profiles in parallel with forwarding packets. We also describe how ISPs can leverage centralized route control platforms (e.g.
4D [13] or RCP [8]) to compute network-wide redundancy-aware routes in a scalable and efficient fashion. In contrast with current state-of-the-art practices, our redundancy-aware approaches can allow ISPs better control over link loads, and offer them greater flexibility in meeting traffic engineering goals and in reacting to sudden traffic surges.

Figure 1: In (a), we show shortest path routing, where the network carries 18 packets in all. In (b), redundant packet elimination is employed on the shortest paths, resulting in 12 packets total, or a 33% reduction. In (c), we use redundancy-aware routes which minimize the total number of packets carried by the network, resulting in 10 packets total, or a 44% reduction.

We have evaluated the full range of benefits arising from a universal deployment of redundancy elimination, and from using our redundancy-aware route selection algorithms. Our evaluations use Rocketfuel topologies and workloads from full packet traces collected at a large US university, as well as synthetic traffic models derived from the traces. When traditional shortest path routing is used, we find that applying redundancy elimination on all network links brings down the network-wide utilization by 10-50%. In contrast, when redundancy-aware routing is employed, we find that the network-wide utilization is reduced by a further 10-25%. We also study the effect of staleness of redundancy profiles on route quality. We find that the benefits from redundancy-aware routing are significant even when traffic patterns change unexpectedly and the route computation is unable to react to the change (as might happen during flash-crowd events). Overall, we find that a wide-spread deployment of redundancy elimination can be leveraged to obtain very significant network-wide benefits.
These benefits can quickly offset the initial high cost of deployment.

We also consider some key challenges that may hinder the deployment of packet-level redundancy elimination in today's high-speed routers. Starting from the algorithm in [24], we make key enhancements to the design of packet caches and to cache lookup algorithms in order to reduce both the total amount of storage required and the number of memory accesses incurred per packet. We have implemented these improvements in Click [18]. Our simplistic implementation offers a throughput of 1 Gbps in software on a 1.8 GHz Linux box. We argue that, with better hardware support, the throughput of the software implementation can easily exceed 2.5 Gbps. Even higher throughputs can be attained in hardware.

This paper is structured as follows. In Section 2, we discuss a prior approach for packet-level redundancy elimination and outline the issues we consider in this paper. In Sections 3 and 4, we present redundancy-aware intra- and inter-domain routing, respectively. In Section 5, we present a measurement study of key properties of redundant content observed in real packet traces. In Section 6, we evaluate the benefits of universal redundancy elimination and redundancy-aware routing. In Section 7, we present our software router implementation of packet-level redundancy elimination. We discuss related work in Section 8 and conclude in Section 9.

2. BACKGROUND

In this section, we present a brief outline of a popular mechanism for packet-level redundancy elimination, and review current practices in routing and traffic engineering. We then discuss the challenges in updating routing to leverage a universal deployment of redundancy elimination. We end with a preliminary empirical study which points to the likely benefits of modifying routing in this way.

Figure 2: Packet-level redundancy detection.

2.1 Algorithm for Redundancy Elimination

We describe a fast algorithm for identifying chunks of redundant content across packets.
This algorithm has been employed in various forms in the past, e.g., for detecting duplicates in a file system [16, 19] and for detecting worms [22]. The algorithm we discuss here was first conceived by Spring et al. [24], who applied it to remove redundant content from access links. Their approach operates at access routers, as packets enter or leave a stub network.

For every packet going in a particular direction, the algorithm computes a set of fingerprints by applying a hash function to each 64-byte sub-string of the packet's payload. This choice of sub-string size offers good performance in practice [24]. Thus, for an S-byte packet (S >= 64), a total of S - 63 fingerprints are obtained. Rather than use an MD5 hash, the algorithm uses a sliding hash function called Rabin fingerprinting [20], which significantly cuts down the hash computation time per packet [24]. A subset of these fingerprints is selected at random per packet as its representative fingerprints.

Representative fingerprints of all the packets observed over some past interval of time are stored in a fingerprint store at the router. The packet payloads are stored in a packet store. Pointers are stored from each fingerprint to the corresponding packet (Figure 2).

After computing representative fingerprints for an arriving packet, each fingerprint is checked against the fingerprint store to see if the fingerprint already exists there. If a match is found, this means that the incoming packet has a 64-byte sub-string that matches with an in-cache packet. The matching packet is retrieved and the 64B match region is expanded left and right to obtain the region of overlap between the two packets. The new packet is inserted into the packet store, and the mapping in the fingerprint store is updated so that the matching fingerprint points to the new packet. The remaining representative fingerprints of the arriving packet are also inserted into the fingerprint store.
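The windowed fingerprinting and fingerprint/packet stores described above can be sketched as follows. This is a simplified illustration, not the implementation from [24]: CRC32 stands in for Rabin fingerprints, and keeping the smallest K fingerprint values stands in for the paper's random sampling of representatives.

```python
import zlib

WINDOW = 64   # sub-string size used for fingerprinting, as in Section 2.1
K_REPR = 4    # representative fingerprints kept per packet (illustrative)

def fingerprints(payload: bytes):
    """Yield (offset, fingerprint) for each 64-byte window: S - 63 in total."""
    for i in range(len(payload) - WINDOW + 1):
        # CRC32 is a stand-in; a real implementation would use a rolling
        # Rabin fingerprint so each window costs O(1) to hash.
        yield i, zlib.crc32(payload[i:i + WINDOW])

def representative(payload: bytes):
    """Keep the K_REPR smallest distinct fingerprints (a deterministic
    stand-in for the per-packet random selection in the paper)."""
    seen = {}
    for off, fp in fingerprints(payload):
        seen.setdefault(fp, off)
    return dict(sorted(seen.items())[:K_REPR])

class RedundancyCache:
    """Toy fingerprint store + packet store (cf. Figure 2)."""
    def __init__(self):
        self.fp_store = {}   # fingerprint -> (packet_id, offset)
        self.pkt_store = {}  # packet_id -> payload
        self.next_id = 0

    def insert(self, payload: bytes):
        """Cache the packet; return ids of cached packets sharing a window."""
        reps = representative(payload)
        matches = {self.fp_store[fp][0] for fp in reps if fp in self.fp_store}
        pid, self.next_id = self.next_id, self.next_id + 1
        self.pkt_store[pid] = payload
        for fp, off in reps.items():
            self.fp_store[fp] = (pid, off)  # point fingerprint at newest packet
        return matches
```

A second insertion of an identical payload reports a match against the first, since both packets produce the same representative fingerprints.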
In some situations, more than one representative fingerprint of the incoming packet can observe a match; this means that different regions of the arriving packet have matched with distinct regions of one or more cached packets.

Each match region is removed from the incoming packet and replaced with a shim which encodes the redundant content in the packet. The shim provides the fingerprint which caused the match, and the byte range for the matching in-cache packet. The shim can be used by a downstream router to reconstruct the packet from its own local cache. It is assumed that the cache on the downstream router is consistent with the upstream cache.

2.2 Intra- and Inter-domain Routing

ISPs make intra-domain routing decisions on the basis of a packet's destination IP address. Since selecting static routes per destination (e.g., always using paths with small hop counts) could impact their ability to control the load on network links, many ISPs employ traffic engineering (TE) techniques. These involve estimating the expected volume of traffic between different locations (PoPs) in a network [17] and computing routes so as to spread load evenly across network links. Although ISPs are known to overprovision their links, traffic engineering is crucial to manage resources in the face of traffic variations.

When selecting inter-domain routes, ISPs attempt both to reduce usage costs of expensive inter-domain links and to minimize the impact of inter-domain traffic on intra-domain links. Typically, ISPs statically select the most cost-effective AS as the next hop for a destination. Packets are sent to the next hop either along the early-exit route or, in some cases, along an exit point that is chosen based on mutual agreements with the neighbor.

Meeting network-wide traffic engineering objectives effectively is very challenging today (in part because of the difficulty in predicting traffic volumes accurately).
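The shim-based encode/reconstruct step of Section 2.1 can be sketched as below. The field layout, the marker delimiting a shim, and the fingerprint-keyed downstream cache are all invented for illustration; the real on-wire format in [24] differs.

```python
import struct

# Hypothetical shim: (fingerprint, offset, length) of the region in the
# matching cached packet, preceded by an assumed marker byte sequence.
SHIM = struct.Struct("!IHH")
MAGIC = b"\xff\xfeSHIM"

def encode(payload: bytes, match_start: int, fp: int, c_off: int, length: int) -> bytes:
    """Upstream router: cut out the matched region, splice in a shim."""
    return (payload[:match_start] + MAGIC + SHIM.pack(fp, c_off, length)
            + payload[match_start + length:])

def decode(wire: bytes, cache: dict) -> bytes:
    """Downstream router: re-expand every shim from its own local cache,
    assumed consistent with the upstream cache."""
    out = b""
    while MAGIC in wire:
        head, rest = wire.split(MAGIC, 1)
        fp, c_off, length = SHIM.unpack(rest[:SHIM.size])
        out += head + cache[fp][c_off:c_off + length]
        wire = rest[SHIM.size:]
    return out + wire
```

For any match region longer than the marker plus shim (14 bytes here), the encoded packet is shorter than the original, which is where the link-load savings come from.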
In particular, current intra-domain routing and TE practices cannot easily adjust to variations in traffic volumes. The variations could cause the load on intra-domain links to increase beyond acceptable limits. Similarly, the load on expensive and congested inter-domain links can increase significantly as well. In both cases, this could lead to a violation of TE objectives and of service level agreements (SLAs) with neighboring networks.

Applying redundancy elimination on network links improves the effective utilization of the links and provides ISPs greater flexibility in meeting their network-wide objectives. The flexibility is further enhanced when routing and traffic engineering are modified to leverage link-level redundancy elimination.

2.3 Toward Redundancy-Aware Routing

We assume that redundancy elimination approaches such as the one described in Section 2.1 are applied on the input and output ports of Internet routers in a hop-by-hop manner. An upstream router removes redundant content as a packet leaves it, while the router immediately downstream reconstructs the packet as soon as it arrives (assuming the packet has redundant content). All routers cache packets that they have forwarded or received over a fixed short interval of time (e.g., 10s).
Upstream and downstream packet caches should be the same size, and the routers must employ the same hash functions and random seeds to ensure consistency.

To leverage a universal deployment of redundancy elimination and improve network-wide utilization, ISPs must change the way routes are computed today, as well as how routers act on packets. In particular, ISPs must perform three key tasks: (1) ISPs must first track how packet content is replicated across different points in their network; we call this information the "Traffic Redundancy Profile". (2) Based on the network-wide profile, ISPs must then construct intra- and inter-domain forwarding paths which maximize the likelihood of duplicate data traversing the same network links and, at the same time, allow ISPs to meet their network-wide objectives; we call this "Redundancy-Aware Route Computation". And (3) router-level redundancy elimination techniques must operate on every packet at every router along network paths.

Our goal is to systematically understand how ISPs may implement these tasks, and show that ISPs can obtain significant benefits in terms of controlling the loads on their links, being able to meet their TE objectives satisfactorily, being better prepared for sudden traffic variations, and reducing usage costs and congestion on inter-domain links. Next, we discuss initial measurements which point to the potential benefits of employing redundancy-aware routes.

Preliminary Study. We conducted a preliminary measurement study where we tracked packets originating from a high volume /24 prefix owned by a large US university (the packets are headed for the commercial Internet). Traffic from the university enters its primary ISP at Chicago.
We analyzed this traffic using the algorithm in Section 2.1 and found that 45% of the packet contents were duplicated for a 150s traffic snapshot using a packet store that could hold all packets in the snapshot; that is, the ratio of the total size of the matched regions in all packets to the total size of all packets was 0.45. Further, we dissected the /24 traffic leaving the primary ISP's network from its New York, Washington DC and Boston PoPs. About 22% of the packet contents leaving New York were duplicated in the 150s snapshot. The fraction was 18% for DC and 2% for Boston. Also, the shortest paths from Chicago (close to where the university is located) to these cities were non-overlapping. Thus, simply employing redundancy elimination techniques in a hop-by-hop manner can yield savings of 2-22% (when only considering the bytes due to the /24) on the intra-domain links of the primary ISP.

Interestingly, 10% and 9% of the contents of packets going to New York also appeared in packets going to Boston and Washington DC, respectively. Thus, if packets to Boston and Washington DC are routed via New York (this does not cause significant path inflation) and redundancy elimination is then applied, the overall utilization of the network links can be brought down even further.

3. INTRA-DOMAIN ROUTING

In this section, we present our redundancy-aware intra-domain routing approach, which can help ISPs manage link loads more effectively and reduce overall network utilization. As mentioned earlier, the ISP gathers information on how content is duplicated within its network over a certain interval of time, and constructs routes which maximize the potential for redundancy elimination. We assume that all ISP routers employ redundancy elimination.

To begin with, we assume that the ISP has perfect and timely knowledge of the prevailing patterns of redundancy in its network and that it can configure redundancy-aware paths within the network in a negligible amount of time.
We also assume that packets are duplicated in full, if at all. We start with a simple setting where we consider packets originating from a single PoP in an ISP. We extend this to a network-wide setting and construct redundancy-aware intra-domain routes between all pairs of PoPs in the ISP.

Following this, we discuss practical issues in redundancy-aware intra-domain routing, such as fast approaches for estimating the redundancy patterns, accounting for partial replication of content across packets, and computing redundancy-aware routes.

3.1 A Single PoP

We use the following abstract model to develop our approach. We represent the ISP using a graph G = (V, E). We focus on traffic originating from a single ISP PoP, denoted by S (∈ V). We refer to S as the source or ingress. Nodes D_1, D_2, ..., D_m ∈ V denote the egress PoPs or destinations through which traffic originating at S exits the ISP. Other vertices in V represent ISP backbone routers.

We now model duplicated packets within the traffic originating from S. Suppose that N distinct packets {P_1, P_2, ..., P_N} originate at S over a certain time duration T. All other packets originating at S in this interval are duplicates of the N distinct packets. Each distinct packet P_i can have one or more "copies". We use the term "copy" in a loose sense: specifically, we consider the original distinct packet to be the first copy. Some copies of the distinct packet P_i may all be destined for the same destination D_j, while other copies may be headed for other destinations.

We assume that the ISP has all information regarding destinations of the different packet copies. Specifically, the ISP has a list of constants cpy_{i,j} defined so that cpy_{i,j} = 1 if a copy of distinct packet P_i is destined for egress D_j. For instance, say that distinct packet P_1 originating at S has four copies overall, two of which are destined for PoP D_1 and one each for PoPs D_3 and D_5.
Then cpy_{1,1} = cpy_{1,3} = cpy_{1,5} = 1, and cpy_{1,j} = 0 for all other destinations D_j. We call this list of cpy's the redundancy profile for the traffic originating from S in the time interval T. In practice, an ISP can compute the profiles as packets enter its network (Section 3.3.2). Next, we show how the ISP can use the redundancy profile to compute redundancy-aware routes from S to the different D_j's. We first define a few variables which we employ in explaining our approach.

We refer to the traffic going from S to a particular destination within the time interval T as a single flow. For each flow j (i.e., the traffic to destination D_j), we define variables rte_{j,e} such that rte_{j,e} = 1 if the redundancy-aware route from S to D_j goes through edge e, and rte_{j,e} = 0 otherwise. Binary values for rte_{j,e} ensure that all traffic between S and D_j is routed along one path.

We use a variable FP_{i,e} for an edge e and distinct packet P_i to denote the footprint of the copies of P_i on edge e. The footprint is the amount of resources consumed on edge e when the copies of P_i are routed toward their respective destinations using redundancy-aware routes and all routers perform redundancy elimination. For instance, if none of the copies of P_i is routed over e, then the footprint due to P_i and its copies on edge e is zero, i.e., FP_{i,e} = 0. If multiple copies of P_i are routed over the edge e, then effectively only one copy goes through e, because the remaining copies are eliminated as being redundant. In this case, the footprint of the copies of P_i on the edge e is a function of the size of the distinct packet P_i. In this paper, we pick the footprint function to equal the size of the packet P_i multiplied by the latency of the link e, or FP_{i,e} = lat_e × |P_i|.
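The footprint definition can be illustrated on a toy topology, mirroring the intuition of Figure 1: when two copies of the same distinct packet share an edge, that edge contributes lat_e × |P_i| only once. The 4-node topology and latencies below are made up for illustration.

```python
def total_footprint(routes, lat, pkt_size):
    """Total footprint of all copies of ONE distinct packet:
    sum of lat_e * |P_i| over edges carrying at least one copy
    (duplicates on a shared edge are eliminated, so each edge counts once)."""
    used = set()
    for path in routes:
        used.update(path)
    return sum(lat[e] for e in used) * pkt_size

lat = {("S", "A"): 1, ("A", "D1"): 1, ("S", "B"): 1,
       ("B", "D2"): 1, ("A", "D2"): 1}

# Two copies of a 100-byte distinct packet, one to D1 and one to D2.
disjoint = [[("S", "A"), ("A", "D1")], [("S", "B"), ("B", "D2")]]  # 4 edges used
shared   = [[("S", "A"), ("A", "D1")], [("S", "A"), ("A", "D2")]]  # edge (S,A) shared
```

Routing both copies over the shared edge (S, A) yields a footprint of 300 instead of 400, which is exactly the quantity the LP below minimizes across all packets and edges.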
The intuition behind choosing this function is that a packet consumes more network resources overall when it traverses high-latency links and/or when the packet size is large. Other functions reflecting network usage can also be considered.

The ISP's objective in computing redundancy-aware routes is to compute the rte variables such that the total footprint summed over all network edges is minimized. In other words, the goal is to compute routes from S which minimize the total network resources consumed by traffic originating at S within the interval T when all routers perform redundancy elimination.

We formulate the ISP's objective as the output of a Linear Program (LP). We first describe the constraints for the LP, followed by the optimization objective. We have the following constraints per distinct packet P_i, based on the definition of the footprint function:

  ∀j, FP_{i,e} >= lat_e × cpy_{i,j} × rte_{j,e} × |P_i|

Since the footprint FP_{i,e} cannot exceed the resources consumed when routing a single copy of P_i on e, we have FP_{i,e} <= |P_i| × lat_e.

Next, we set up flow conservation constraints for nodes in V. For backbone routers v, we have:

  ∀j, Σ_{e ∈ δ+(v)} rte_{j,e} = Σ_{e ∈ δ−(v)} rte_{j,e}

where δ+ indicates flow entering node v, and δ− indicates flow leaving node v. For source S and destinations D_j, we have:

  ∀j, Σ_{e ∈ δ−(S)} rte_{j,e} − Σ_{e ∈ δ+(S)} rte_{j,e} = 1
  ∀j, Σ_{e ∈ δ+(D_j)} rte_{j,e} − Σ_{e ∈ δ−(D_j)} rte_{j,e} = 1

Finally, we require a set of constraints to ensure that link capacities are obeyed. Suppose edge e cannot carry more than Cap_e packets within the interval T (Cap_e can be derived from e's raw capacity). Then, we require:

  ∀e, (1/lat_e) Σ_n FP_{n,e} <= Cap_e
We use a normalizing factor 1/lat_e to obtain the total size of packets carried by e. The objective of the LP is to lower the total network footprint subject to the above constraints, or:

  Minimize Σ_e Σ_i FP_{i,e}

We allow fractional values for the variables rte in the solution for the above LP. Fractional values indicate how traffic may split across different possible paths between S and a destination.

3.2 Multiple Ingresses, Traffic Engineering

We extend the above approach for computing redundancy-aware routes to a network-wide setting. The goal is to use redundancy-awareness to help ISPs meet their traffic engineering goals more effectively. Our network-wide approach tries to always obtain better network-wide guarantees than existing TE approaches, such as OSPF-based weight tuning [6]. We illustrate our approach using the "Maximum Load" objective, wherein the ISP employs traffic engineering to minimize the maximum link utilization in its network. Traffic can originate at any network PoP.

To explain our approach, we introduce a per-ingress parameter cpy_{P_{n,i},D_j} which is 1 if a copy of distinct packet P_{n,i} is destined for D_j. P_{n,i} denotes the i-th distinct packet originating from ingress S_n within an interval T. Similarly, we extend the link footprint variable to capture the contribution of packets originating from different ingresses to a particular link e; we denote this as FP_{P_{n,i},e}. In a similar fashion, we define variables rte_{S_n,j,e} which identify if the flow between S_n and D_j flows through edge e. We assume that packets originating from different ingresses have no content in common. (We omit several details for brevity.)

As with the single-ingress case, we first formulate a network-wide LP where the objective of the ISP is to lower the network footprint due to traffic originating from all PoPs, or:

  Minimize Σ_e Σ_i Σ_n FP_{P_{n,i},e}
Next, we place link capacity constraints and incorporate the "Max Load" objective as follows. Suppose that, based on the measured network traffic matrix, the ISP estimates that traditional traffic engineering approaches (e.g. OSPF-based approaches [6, 12]) can bound the link loads by a factor α < 1. ISPs today try to maintain α ≈ 0.5. Given this, we normalize each link's capacity Cap_e using Cap_e ← Cap'_e = α·Cap_e and minimize the network-wide footprint subject to the following new capacity constraints:

  ∀e, (1/lat_e) Σ_i Σ_n FP_{P_{n,i},e} <= Cap'_e

The solution to this LP is the set of rte_{S_n,j,e} variables. Due to the normalization of capacities and our objective of minimizing network footprint, the maximum link load due to this solution is at least as good as, if not much better than, that of traditional TE approaches.

3.3 Centralized Route Computation and Practical Issues

Our redundancy-aware approaches can be applied by an ISP alongside centralized routing platforms such as RCP [8] and 4D [13]. At regular N-minute intervals, border routers in the ISP can compute the redundancy profiles (i.e. the constants cpy_{P_{n,i},D_j}) for packets observed during the first T seconds of the interval, and transmit the profiles to a logically central route control platform. We discuss how to track the redundancy profiles, especially when packet contents may not be duplicated in full, towards the end of this section.

The controller formulates the network-wide Linear Program and computes the routes (i.e. the rte_{S_n,j,e} variables). The computed routes are configured on routers and used for the rest of the N minutes.

In addition to the periodic updates, border routers could also track the redundancy profiles on a minute-by-minute basis and inform the route controllers of significant changes in the traffic redundancy patterns. This allows the controller to respond better to sudden traffic surges and changes in redundancy profiles.
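The capacity normalization Cap'_e = α·Cap_e and the per-epoch controller workflow can be sketched as follows. The function names and the shape of the route-computation callback are placeholders; the LP itself is abstracted away.

```python
ALPHA = 0.5  # utilization bound alpha that traditional TE is assumed to achieve

def normalized_capacities(cap, alpha=ALPHA):
    """Cap'_e = alpha * Cap_e: the tightened per-link budget fed to the LP,
    which guarantees the solution is no worse than traditional TE."""
    return {e: alpha * c for e, c in cap.items()}

def control_epoch(profiles, cap, compute_routes):
    """One N-minute epoch: take the redundancy profiles reported by border
    routers, solve the network-wide LP with tightened capacities, and return
    the rte variables to be pushed to routers. compute_routes is a stand-in
    for the LP solver."""
    return compute_routes(profiles, normalized_capacities(cap))

cap = {("S", "A"): 10_000, ("A", "D1"): 8_000}  # packets per interval T (made up)
```

Here `compute_routes` would be invoked once per epoch and again on a significant mid-epoch change in the reported profiles.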
3.3.1 Scalability

The input to the network-wide Linear Program includes the constants for the redundancy profiles. The input size thus grows linearly in the number of distinct packets observed during the interval T. The size can be prohibitively large if millions of distinct packets appear in a short time. The amount of data to be transmitted to the central route controller will be high, resulting in excessive control overhead. Also, existing LP solvers cannot handle large input sizes.

We employ two simplifications to address the scalability issues. Based on an empirical study of real traces, we observed that content at an ingress PoP is rarely duplicated across 3 or more destination PoPs (Section 5). Thus, we only consider content duplicated across 2 destinations and ignore duplication across more than 2 destinations.

We make another simplification to improve the scalability. We "combine" the redundant content in packets going to an identical set of destinations into a larger aggregated packet; copies of the aggregated packet are considered to be destined for the same set of destinations as the individual packets. For example, suppose that distinct packets P_1, ..., P_l all have two copies, with one copy going to destination D_1 and another to D_2 (all traffic is observed in a time interval T). We create an equivalent single aggregated packet of size Σ_{i=1}^{l} |P_i| which goes to destinations D_1 and D_2.
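The aggregation step just described amounts to grouping distinct packets by their destination set and summing their sizes. A minimal sketch, with made-up packet data:

```python
from collections import defaultdict

def aggregate(packets):
    """packets: iterable of (size, frozenset_of_destinations), one entry per
    distinct packet. Packets sharing a destination set collapse into one
    aggregated packet, shrinking the number of cpy variables in the LP."""
    agg = defaultdict(int)
    for size, dests in packets:
        agg[dests] += size
    return dict(agg)

pkts = [(100, frozenset({"D1", "D2"})),
        (300, frozenset({"D1", "D2"})),   # same destination set as above
        (50,  frozenset({"D3"}))]
agg = aggregate(pkts)
```

In the l-packet example from the text, this replaces l per-packet entries with a single entry of size Σ|P_i| for the pair (D_1, D_2), so the variable count depends on the number of destination sets rather than the number of distinct packets.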
Thus, the aggregated packet captures the total overlap in the content going to D_1 and D_2. This aggregation approach reduces the total number of cpy variables without changing the quality of the solution obtained for the LP (the number of variables reduces from 2l to 2 in the above example).

With these two simplifications, the total number of variables for the entire network is now on the order of the square of the number of PoPs in the network, and the control overhead is thus much smaller. We refer to the redundancy profiles captured using aggregated packets in the above fashion as the aggregated redundancy profiles.

Next, we describe an approximate approach for computing the aggregated redundancy profiles at the ingress routers of an ISP network as packets stream into a network. We also address issues arising from content being partially replicated across network packets.

3.3.2 Computing Redundancy Profiles

We discuss an extension to the algorithm in Section 2.1 to compute the aggregated profiles in practice. The approach we describe is run constantly on each ingress router. Suppose an incoming packet P at an ingress router has a match with a single packet P_cache stored at the router, and that P and P_cache are headed for different destinations D_1 and D_2. We count the size of the matching region |P ∩ P_cache| towards the total amount of content common to destinations D_1 and D_2. If P and P_cache are both headed to the same destination, say D_1, then we count |P| + |P_cache| − |P ∩ P_cache| towards content exchanged between the ingress and D_1; in this manner, we approximately track the total amount of unique content exchanged between the source and D_1.
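The per-match tallying rule above can be sketched directly. The counter names and the toy byte counts are illustrative; `overlap` stands for |P ∩ P_cache| as produced by the match-expansion step of Section 2.1.

```python
from collections import defaultdict

shared = defaultdict(int)  # {D_a, D_b} -> bytes common to both destinations
unique = defaultdict(int)  # D -> unique bytes exchanged with destination D

def tally(p_len, pc_len, overlap, dest_p, dest_pc):
    """Update the aggregated profile for one match between incoming packet P
    (length p_len, destination dest_p) and cached packet P_cache."""
    if dest_p != dest_pc:
        # Match across destinations: credit |P ∩ P_cache| to the pair.
        shared[frozenset((dest_p, dest_pc))] += overlap
    else:
        # Match within one destination: credit |P| + |P_cache| - |P ∩ P_cache|
        # toward the unique content exchanged with that destination.
        unique[dest_p] += p_len + pc_len - overlap

tally(1000, 800, 200, "D1", "D2")  # cross-destination match
tally(1000, 800, 300, "D1", "D1")  # same-destination match
```

When a packet matches several cached packets, each match region would be tallied separately with its own call; unmatched packets would have their full sizes added to `unique` for their destination at the end of the interval T.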
If the incoming packet P has a match with more than one cached packet, say P_{1,cache} and P_{2,cache}, we count each match region separately towards the redundancy profiles; that is, we run the aforementioned tallying approach first for P and P_{1,cache}, and then for P and P_{2,cache}. We also track packets in the ingress router's packet store which observe no matches during the interval T. We group such packets by their destination and compute the total size of the packets in each group. This total is then added to the total volume of unique content exchanged between the ingress and the corresponding destination.

At the end of interval T, the ingress router gathers aggregated counts for: (1) the size of content shared between pairs of egresses, and (2) the volume of unique content exchanged with different egresses. This forms the aggregated redundancy profile for the ingress PoP, and is transmitted to the route controller. Note that we only focus on content that is duplicated across 2 destinations, if at all.

This approach clearly approximates the true redundancy profile as described in Section 3.1. However, our trace-based evaluation (Section 6) shows that the inaccuracies in our approach do not significantly affect the quality of the routes we compute.

3.3.3 MPLS Networks

As mentioned before, we permit fractional solutions to the network-wide Linear Program. The fractional solution can be implemented in MPLS-based networks by establishing the appropriate "traffic trunks", or label switched paths (LSPs), between ISP PoPs [9]. Care must be taken to construct LSPs and allot packets to them in a redundancy-aware manner. This is crucial in order to extract the maximum amount of redundant content from network traffic.
Otherwise, packets may be allotted to LSPs in such a manner that redundant packets destined for different egresses are routed along LSPs which have very few network links in common. While a thorough investigation of how to establish LSPs and allocate packets is beyond the scope of this work, we have developed a preliminary redundancy-aware heuristic which seems to offer satisfactory performance in practice. The details can be found in [5].

4. INTER-DOMAIN ROUTING

In this section, we present redundancy-aware inter-domain routing which can help ISPs minimize the overall impact of inter-domain traffic on internal and peering links. We consider as "inter-domain" traffic the set of all packets traversing the ISP whose destinations are routable only through peers of the ISP. We consider two approaches: local and cooperative.

The local approach applies to an ISP selecting its next-hop ASes in BGP, as well as the particular exit point(s) into the chosen next hop. In this approach, an ISP aggregates its inter-domain traffic over a selected set of next hops and the corresponding exit points so as to aggregate potentially redundant traffic onto a small number of network links. Thus, the ISP can significantly reduce the impact that inter-domain traffic imposes on its internal and peering links. To compute routes in this manner, the ISP must track (1) the amount of redundant content that is common to different destination prefixes and (2) the route announcements from peers to the destinations.

The cooperative approach applies to ISPs which are willing to coordinate their inter-domain route selection decisions. In this approach, the ISPs compute routes which minimize the overall impact of inter-domain traffic across the internal links of all ISPs involved and the peering links between the ISPs. We explore the ideal benefits from cooperation and ignore important issues such as the need to maintain privacy of internal information.
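The local approach's two inputs — per-prefix-pair shared content and peers' route announcements — lend themselves to a simple greedy illustration. The sketch below is our own toy example, not the paper's route-computation algorithm: it assigns each destination prefix an exit point, preferring the exit already carrying the prefixes with which it shares the most content, so that redundant bytes overlap on common links:

```python
def pick_exits(prefixes, announced_exits, shared):
    """Greedy, illustrative exit-point selection (hypothetical helper).

    prefixes: destination prefixes, e.g. ordered by traffic volume.
    announced_exits: dict prefix -> set of exit points whose peers
        announced a route to that prefix (input (2) in the text).
    shared: dict (p1, p2) -> bytes of content common to the two
        prefixes (input (1), from the aggregated redundancy profile).
    """
    chosen = {}
    for p in prefixes:
        # Affinity of an exit = shared bytes between p and the
        # prefixes already assigned to that exit; aggregating such
        # prefixes lets redundancy elimination elide more bytes.
        def affinity(exit_pt):
            return sum(shared.get(tuple(sorted((p, q))), 0)
                       for q, e in chosen.items() if e == exit_pt)
        chosen[p] = max(announced_exits[p], key=affinity)
    return chosen
```

A real deployment would of course weigh link capacities and the network footprint objective rather than shared bytes alone; the sketch only shows how the two tracked inputs interact.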
4.1 Local Approach for an ISP

The intra-domain approach, presented in Section 3, can be extended in a straightforward manner to perform next hop-AS selection. This simply requires a change to the input network graph G and the overall objective of the ISP. Our approach described below focuses on inter-domain traffic originating at a particular PoP in an ISP and can be extended to all inter-domain traffic of the ISP. We present the high level ideas and omit the details for brevity.

The ISP's network footprint objective encompasses the footprint FP_{i,e} of both the internal edges of the ISP and its peering links. The input graph G = (V, E) is constructed as follows: the set V is composed of three subsets V1, V2, and V3 (Figure 3). V1 is the set of all intra-domain routers or the PoPs of the ISP, including the ingress PoP S where the inter-domain traffic originates. V3 is the set of destination prefixes D1, D2, ..., Dm. These are the prefixes which the inter-domain traffic from S must finally reach. We assume that the ISP computes aggregated redundancy profiles across the m destinations. To derive the ideal benefits of redundancy elimination, all possible destination ASes must be considered in the set V3. However, in practice, it may suffice to focus on just the top few destination prefixes by volume. Finally, the set V2 is composed of "intermediate nodes" which model possible next hop ASes for each destination, as well as their peering locations with the ISP.

The set of edges, E, is composed of three subsets: E1, the set of intra-domain edges, E2, the full set of peering edges between the