Next Generation Mesh Networks

 

The proper design of a network infrastructure should provide for a number of key traits. First, the infrastructure needs to provide redundancy and resiliency without a single point of failure. Second, the infrastructure must be scalable in geographic reach as well as in bandwidth and throughput capacity.

Ideally, as one facet of the network is improved, such as resiliency, bandwidth and throughput capacity should improve as well. Certain technologies work on the premise of an active/standby method: there is one primary active link, and all other links remain in a standby state, becoming active only upon the primary link's failure. Examples of this kind of approach are 802.1D Spanning Tree and its descendants, Rapid and Multiple Spanning Tree, in the Layer 2 domain, and non-equal-cost distance vector routing protocols such as RIP.

While these technologies do provide resiliency and redundancy, they do so on the assumption that half of the network infrastructure sits unused and that a failure must occur before those resources can be leveraged. As a result, it becomes highly desirable to implement active/active resiliency wherever possible so that these resources can be used in the day-to-day operations of the network.

 

Active/Active Mesh Switch Clustering

 

The figure below illustrates a very simple active/active mesh fabric. As in all redundancy and resiliency methods, topological separation is a key trait. As shown in the diagram below, the two bottom switches are interconnected by a type of trunk known as an inter-switch trunk, or IST, that allows for the virtualization of the forwarding database across the core switches. The best and most mature iteration of this technology is Avaya's Split Multi-Link Trunking, or SMLT. First introduced in 2001 and now moving into its third generation, it effectively creates a virtualized switch that is viewed as a single switch by the edge switches in the diagram. Because of this, the edge switches can utilize de facto or industry standard multiple-link technologies such as Multi-Link Trunking (MLT) or link aggregation (LAG). Because the virtualized switch cluster appears as a single chassis, these links can be dual homed to the two different switches at the top of the diagram to deliver active/active, load balanced connectivity out to the edge switches.

 

Fig.1 A simple Active/Active Mesh Switch Topology

Because all links are utilized, there is far better use of network resources. Additionally, because of this active/active mesh design, the resiliency and failover times offered are dramatically faster than comparable active/standby methods.
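To make the load balancing concrete, the sketch below models the per-flow hashing that MLT/LAG bundles typically use to spread traffic across active links. It is a toy model in Python, not any vendor's actual hash; real switches hash in hardware on various header fields, and the link names here are hypothetical.

```python
import hashlib

# Active uplinks of a dual-homed edge switch: one to each core switch
# in the cluster. Names are hypothetical.
active_links = ["uplink-to-core-A", "uplink-to-core-B"]

def pick_link(src_mac: str, dst_mac: str) -> str:
    """Hash a flow's address pair onto one active link.

    Keeping all of a flow's frames on one link preserves frame ordering,
    while different flows spread across both links (active/active).
    """
    digest = hashlib.md5(f"{src_mac}->{dst_mac}".encode()).digest()
    return active_links[digest[0] % len(active_links)]

# Different flows may land on different uplinks; any one flow is stable.
print(pick_link("00:1b:11:aa:aa:aa", "00:1b:11:bb:bb:bb"))
print(pick_link("00:1b:11:cc:cc:cc", "00:1b:11:dd:dd:dd"))
```

On a link failure, the failed port is simply removed from the active list and flows re-hash onto the survivors, which is part of why active/active failover is so much faster than a spanning tree reconvergence.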

While the diagram above illustrates a very simple triangulated topology, active/active mesh designs can become much more sophisticated, such as box, full mesh and mesh ladder topologies. These additional topologies are shown in the diagram below. The benefit of these is that as the network topology is extended, neither resiliency nor capacity need be sacrificed.

Fig. 2 Extended Active/Active Mesh Topologies (box, full mesh, ladder mesh)

As can be seen in the diagram above, these topologies can be very sophisticated and provide a very high degree of resiliency while enhancing the overall capacity of the network.

 

Topological Considerations for Active/Active Mesh Designs –

 

Most network topologies consist of various regions that provide certain functions. Depending on the region, different features may be required. As an example, within the network core, high capacity load sharing trunks are a requirement, whereas at the network edge, features like Power over Ethernet (PoE) are required in order to supply DC voltage to power VoIP handsets and other such devices.

Typically, these regions are divided into three sections of the topology: the network Core, Distribution and Edge. Below are short descriptions of each region and the role that it plays. It should be noted that the distribution region is not required in all instances and should be viewed as optional.

 

The Network Core –

 

In a typical topology model, the individual network regions are interconnected using a core layer. The core serves as the backbone for the network, as shown in Figure 3. The core needs to be fast and extremely resilient because every network region depends on it for connectivity. Hence, active/active mesh topologies such as SMLT play a very valuable role here. Even though the Core and Distribution layers may be the same hardware, their roles are different and they should be viewed as logically distinct layers. Also, as noted above, the distribution layer is not always required. In the core of the network a "less is more" approach should be taken. A minimal configuration in the core reduces configuration complexity, limiting the possibility for operational error. Ideally the core should be implemented and remain in a stable state with minimal adjustments or changes.

Fig 3. Simple Two Tier Switch Core

 The following are some of the other key design issues to keep in mind:

Design the core layer as a high-speed Layer 3 (L3) or Layer 2 (L2) switching environment utilizing only hardware-accelerated services. Active/active mesh core designs are superior to routed and other alternatives because they provide:

Faster convergence around a link or node failure.

Increased scalability because neighbor relationships and meshing are reduced.

More efficient bandwidth utilization.

Use active/active meshing as well as topological distribution to enhance the overall resiliency of the network design.

Avoid L2 loops and the complexity of L2 redundancy, such as Spanning Tree Protocol (STP) and indirect failure detection for L3 building block peers.

If the topology requires it, utilize L3 switching in the active/active mesh core to provide for optimal sizing of the MAC forwarding table within the network core.

The Distribution Layer –

Due to the scale and capacity of active/active mesh core designs, the distribution layer is optional. It is far more efficient to dual home the network edge directly to the network core. This approach negates any aggregation or latency considerations that come into play with the use of a distribution layer. The active/active mesh topology provides better utilization of trunk feeds, and capacity can be scaled with multiple links in a dual homed fashion.

While the ideal topology is what is termed a two tier design, it is sometimes necessary to introduce a distribution layer to address certain topology or capacity issues. Instances where a distribution layer might be entertained in a design are as follows:

• Where the required reach is outside of available trunk distances.
• Where the port count in that portion of the network core cannot support all of the edge connections without expansion, and expansion in the core is not desired.
• Where logical topology issues such as Virtual LANs or port aggregation require it.

It should be noted, though, that all of the above instances could be addressed by the expansion of the network core. Examples of this are moving from a dual to a quad core design or, going further, moving to a mesh ladder topology as shown in figure 2.

In any instance it is more desirable to maintain a two tier rather than a three tier design if possible. The overall design of the network is far more efficient, and resiliency convergence times are optimized. The diagram below shows a three tier design that utilizes an intermediate distribution or aggregation layer.

Fig. 4. Simple Three Tier Network

Note that topologies can be hybrid. As an example, most of the network can be designed around a two tier architecture with one or two regions that are interconnected by distribution layers for one or more of the reasons noted above.

The Network Edge

The access layer is the first point of entry into the network for edge devices, end stations, and IP phones (see Figure 5). The switches in the access layer are connected to two separate distribution layer switches for redundancy. If the distribution layer switches form an active/active mesh cluster, then there are no loops and all uplinks actively forward traffic.

A robust edge layer provides the following key features:

High availability (HA) supported by many hardware and software attributes.

Inline power for IP telephony and wireless access points, allowing customers to converge voice onto their data network and providing roaming WLAN access for users.

Foundation services.

The hardware and software attributes of the access layer that support high availability include the following:

Default gateway redundancy using dual active/active connections to redundant systems (core or distribution layer switches) that run industry standard or vendor specific load balancing or virtual gateway protocols such as VRRP, Avaya's VRRP with Backup Master, or R/SMLT. This provides fast failover of the default gateway and IP paths. Note that with an active/active core or distribution mesh topology, link and node resiliency and convergence are handled by the L2 topology, which is much faster than any form of L3 IP routing convergence. As a result, any failover within the active/active mesh completes well within the L3 routing timeout. (A minimal sketch of this gateway election logic follows this list.)

Operating system high-availability features, such as Link Aggregation or Multi-Link Trunking, which provide higher effective bandwidth by leveraging the active/active mesh while reducing complexity.

Prioritization of mission-critical network traffic using QoS. This provides traffic classification and queuing as close to the ingress of the network as possible.
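As referenced above, gateway redundancy reduces to a priority election among live routers. Below is a minimal sketch of that election logic in Python. It is a toy model of VRRP-style semantics (highest priority wins, with immediate takeover on failure), not Avaya's implementation; the router names and priorities are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Gateway:
    name: str
    priority: int      # higher priority wins, per VRRP convention
    alive: bool = True

def elect_master(gateways: list[Gateway]) -> Gateway:
    """Return the master gateway: highest priority among live routers."""
    live = [g for g in gateways if g.alive]
    return max(live, key=lambda g: g.priority)

core_a = Gateway("core-A", priority=110)
core_b = Gateway("core-B", priority=100)

print(elect_master([core_a, core_b]).name)   # core-A is master
core_a.alive = False                         # core-A fails
print(elect_master([core_a, core_b]).name)   # core-B takes over
```

With an active/active variant such as VRRP with Backup Master, the backup gateway also forwards traffic while the master is healthy, rather than sitting idle.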

Figure 5 illustrates a build-out of a hybrid two/three tier network, showing active/active load sharing interconnections with all network edge components.

Fig 5.  Full Resilient Active/Active Network Topology

Also note that, as shown in figure 5, active/active connections can also be established within the Data Center via top of rack switching to facilitate load sharing, highly resilient links down to server nodes. Again, such resiliency is provided at L2 and is totally independent of the overlying IP topology and addressing.

 

 

Provisioned Virtual Network Topologies –

An evolution of active/active mesh topologies is provided by the ratification of IEEE 802.1aq "Shortest Path Bridging", or SPBm for short (the 'm' standing for MAC-in-MAC, IEEE 802.1ah). This technology is an evolution of earlier carrier grade implementations of Ethernet bridging that were designed for metro and regional level reach and scale. The major drawback of these earlier methods was that they were based on modified spanning tree architectures that made the network complex to design and scale. IEEE 802.1aq resolves these issues by implementing link state adjacencies within the L2 switch domain, in the same manner as L3 link state protocols such as IS-IS and OSPF. All nodes within the SPB domain (which use IS-IS to establish adjacencies) then run Dijkstra's algorithm to establish the shortest path to all other nodes in the active/active mesh cloud. Reverse Path Forwarding checks prevent loops in all data forwarding instances, in a manner very similar to that provided in L3 routing. IEEE 802.1aq is a cornerstone technology for Avaya's Virtual Enterprise Network Architecture, or VENA. The VENA framework utilizes SPBm as a foundational technology for many next generation cloud service models that are either offered today or currently under development at Avaya.
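To illustrate the computation each SPB node performs, the sketch below runs plain Dijkstra over a small, hypothetical four-bridge topology. A real SPBm implementation operates on the IS-IS link state database and also computes equal-cost tie-breaking and reverse path check state, which this toy model omits.

```python
import heapq

# Hypothetical SPBm backbone: bridge -> {neighbor: link cost}, as would
# be learned through IS-IS adjacencies.
topology = {
    "BEB-1": {"BCB-A": 10, "BCB-B": 10},
    "BEB-2": {"BCB-A": 10, "BCB-B": 10},
    "BCB-A": {"BEB-1": 10, "BEB-2": 10, "BCB-B": 5},
    "BCB-B": {"BEB-1": 10, "BEB-2": 10, "BCB-A": 5},
}

def shortest_paths(source: str) -> dict[str, int]:
    """Classic Dijkstra: cost of the shortest path from source to all nodes."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neighbor, link_cost in topology[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

print(shortest_paths("BEB-1"))
# {'BEB-1': 0, 'BCB-A': 10, 'BCB-B': 10, 'BEB-2': 20}
```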

This next generation virtualization technology will revolutionize the design, deployment and operations of the Enterprise Campus core networks along with the Enterprise Data Center. The benefits of the technology will be clearly evident in its ability to provide massive scalability while at the same time reducing the complexity of the network. This will make network virtualization a much easier paradigm to deploy within the Enterprise environment.

Shortest Path Bridging eliminates the need for multiple protocols in the core of the network by separating the connectivity services from the protocol infrastructure. By reducing the core to a single protocol, the idea of "build it once and never touch it again" becomes a reality. This simplicity also aids in greatly reducing time to service for new applications and network functionality.

The design of networks has evolved throughout the years with the advent of new technologies and new design concepts. IT requirements drive this evolution and the adoption of any new technology is primarily based on the benefit it provides versus the cost of implementation.

The cost in this sense is not only cost of physical hardware and software, but also in the complexity of implementation and on-going management. New technologies that are too “costly” may never gain traction in the market even though in the end they provide a benefit.

In order to change the way networks are designed, the new technologies and design criteria must be easy to understand and easy to implement. When Ethernet evolved from a simple shared medium with huge broadcast domains to a switched medium with segregated broadcast domains, there was a shift in design. The ease of creating a VLAN and assigning users to it made the VLAN commonplace, a function performed without much added work or worry. In the same way, Shortest Path Bridging allows for the implementation of network virtualization in a true core distribution sense.

 

 

The key value propositions for IEEE 802.1aq SPBm include:

 

- Standards-based
  - IEEE 802.1aq standard
- Unmatched Resiliency
  - Single robust protocol with sub-second failover
  - Optimal network bandwidth utilization
- Simplicity
  - One protocol for all network services
  - Plug & Play deployment reduces time to service
- Scalability
  - Evolved from Carrier with Enterprise-friendly features
  - Separates infrastructure from connectivity services
- Flexibility
  - No constraints on network topology
  - Easy to implement virtualization

There are some major features within SPBm that lend themselves well to a scalable and resilient enterprise design. Two major points are as follows:

1). Separation of the Core and the Edge

SPBm implements IEEE 802.1ah 'MAC-in-MAC', which provides a boundary separation between data forwarding methods in the network core and at the edge. It provides a clear delineation between the normal Ethernet 'learning bridge' environment, which is required for local area network operations, and the SPBm core cut-through switching environment, where performance and optimal path selection are the most important criteria. As a result, the use of SPBm creates smaller edge forwarding environments in which the MAC tables are effectively isolated. Within the actual SPBm core network itself, the only MAC addresses in the forwarding tables are those of the SPBm switches themselves. As a result, the IEEE 802.1aq SPBm core is very high performance and very scalable. It is also able to utilize multiple forwarding paths and provide a clear delineation between the network core and edge.
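The core/edge separation follows directly from the 802.1ah encapsulation: the customer frame, with its C-MAC addresses, is wrapped in a backbone header carrying B-MAC addresses and a 24-bit I-SID, so core bridges never see customer MACs. The sketch below is a simplified model of that layering, not an actual frame codec; the field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class CustomerFrame:        # what the edge LAN sees (C-MAC domain)
    c_dst: str
    c_src: str
    payload: bytes

@dataclass
class BackboneFrame:        # what the SPBm core forwards on (B-MAC domain)
    b_dst: str              # B-MAC of the egress BEB
    b_src: str              # B-MAC of the ingress BEB
    i_sid: int              # 24-bit service instance identifier
    inner: CustomerFrame    # original frame, carried opaquely

def encapsulate(frame: CustomerFrame, ingress_beb: str,
                egress_beb: str, i_sid: int) -> BackboneFrame:
    # The 24-bit I-SID field is what yields roughly 16 million services.
    assert 0 < i_sid < 2 ** 24, "I-SID must fit in 24 bits"
    return BackboneFrame(b_dst=egress_beb, b_src=ingress_beb,
                         i_sid=i_sid, inner=frame)

# Core bridges forward on b_dst alone; customer MACs stay invisible to them.
pkt = encapsulate(CustomerFrame("c-mac-2", "c-mac-1", b"data"),
                  ingress_beb="beb-1-bmac", egress_beb="beb-2-bmac",
                  i_sid=500)
print(pkt.b_dst, pkt.i_sid)
```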

2). Virtual Provisioning Fabric

As noted earlier, IEEE 802.1aq evolved from earlier carrier grade implementations of Provider Backbone Bridging. There are two things that are key to a provider based offering. First, no customer should ever see another customer's traffic; there needs to be complete and total service separation. Second, there must be a robust and detailed method for Operations, Administration and Maintenance (OAM) and Connectivity Fault Management (CFM), which is addressed by IEEE 802.1ag and is used by SPBm for those purposes.

The first requirement is addressed by SPBm's ability to create isolated data forwarding environments, in a manner similar to VLANs in the traditional learning bridge fashion. In the SPBm core there is no learning function required. As such, these forwarding paths provide total separation and allow for very deterministic forwarding to associated resources across the SPBm core. These paths, termed Service Instance Identifiers or I-SIDs, allow for the provisioning of virtual network topologies that can take a very wide variety of forms.

In addition, due to the established topology of the SPBm domain, these I-SIDs are provisioned at the edge of the SPBm cloud. There is no need to go into the core to do any provisioning to establish the end to end connectivity. This contrasts with normal VLANs, which require each and every node to be configured properly.

The figure below shows the dichotomy of these two features and how they relate to the network edge, and in this case a distribution layer.

Fig. 6  MAC-in-MAC and I-SIDs within SPBm

As an example, I-SIDs can be used to connect Data Centers together with very high performance, cut-through, dedicated paths for things such as virtual machine migration, stretched server clusters or data storage replication. The figure below illustrates the use of an L2 I-SID in this fashion.

 Fig. 7. End to end IEEE 802.1aq L2 I-SID providing a path for V-Motion

Additionally, complete Data Center architectures can be built that provide all of the benefits of traditional security perimeter design, but with the benefits of full virtualization of the network infrastructure. The figure below shows a typical Data Center design implemented by interconnected I-SIDs in a Shortest Path Bridging network. This effectively shows that not only is SPBm an ideal core network technology, it is also an optimal data center bridging fabric.

Fig. 8. Full Data Center Security Zone

 

Finally, complex L3 topologies can be built on top of SPBm that can utilize traditional routing technologies and protocols, or can provide for the network's L3 forwarding requirements through the native L2 link state routing within SPBm provided by IS-IS. The illustration below shows a network topology in which all methods are utilized to provide a global enterprise design.

Fig. 9  Full end to end Virtualized Network Topology over an IEEE 802.1aq cloud

Shortest Path Bridging Services Types

Avaya's implementation of Shortest Path Bridging provides a tremendous level of flexibility to support multiple service types, singly or in tandem.

One of the key advantages of the SPB protocol is the fact that network virtualization provisioning is achieved by just configuring the edge of the network, thus the intrusive core provisioning that other Layer 2 virtualization technologies require is not needed when new connectivity services are added to an SPB network.

Shortest Path Bridging Layer 2 Virtual Services Network (L2 VSN)

Layer 2 Virtual Services Networks are used to transparently extend VLANs through the backbone. An SPB L2 VSN topology is simply made up of a number of Backbone Edge Bridges (BEBs) used to terminate Layer 2 VSNs. The control plane uses IS-IS to compute Layer 2 forwarding. Only the BEBs are aware of any VSN and its associated edge MAC addresses, while the backbone bridges simply forward traffic at the backbone MAC (B-MAC) level.

Figure 10. L2 Virtual Service Networks

A backbone Service Instance Identifier (I-SID), used to identify the Virtual Services Network, is assigned on the BEB to each VLAN. All VLANs in the network sharing the same I-SID are able to participate in the same VSN.
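A minimal sketch of the provisioning model this describes: each BEB holds a local table mapping VLANs to I-SIDs, and VSN membership is determined purely by a shared I-SID. All identifiers below are hypothetical.

```python
# Hypothetical L2 VSN provisioning: each BEB maps its local VLANs to I-SIDs.
beb_config = {
    "BEB-1": {10: 20010, 20: 20020},   # vlan-id -> i-sid
    "BEB-2": {10: 20010},
    "BEB-3": {20: 20020, 30: 20030},
}

def vsn_members(i_sid: int) -> list[tuple[str, int]]:
    """All (BEB, VLAN) pairs participating in a given VSN."""
    return [(beb, vlan)
            for beb, vlans in beb_config.items()
            for vlan, sid in vlans.items()
            if sid == i_sid]

# VLAN 10 on BEB-1 and BEB-2 share I-SID 20010, so they form one L2 VSN.
print(vsn_members(20010))   # [('BEB-1', 10), ('BEB-2', 10)]
```

Note that only the edge tables change when a service is added; consistent with the earlier point, no core bridge needs to be touched.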

 

Shortest Path Bridging Inter-VSN Routing (Inter-ISID Routing)

Inter-VSN Routing allows routing between IP networks on Layer 2 VLANs with different I-SIDs. As illustrated in the diagram below, routing between VLAN 10, VLAN 100 and VLAN 200 occurs on one of the SPB core switches in the middle of the diagram. 

Figure 11. Inter-VSN routing

Although it sits in the middle of the network, this switch provides "edge services" and has I-SIDs and VLANs provisioned on it, and is therefore designated as a BEB switch. End users on the BEB switches shown on the right and left of the diagram are able to forward traffic between their respective VLANs via the VRF instance configured on the switch shown. For additional IP level redundancy, Inter-VSN Routing may also be configured on another switch, and both can be configured with VRRP to eliminate single points of failure.

 

Shortest Path Bridging Layer 3 Virtual Services Network (L3 VSN)

An SPB L3 VSN topology is very similar to an SPB L2 VSN topology, with the exception that the backbone Service Instance Identifier (I-SID) is assigned at a Virtual Router (VRF) level instead of at a VLAN level. All VRFs in the network sharing the same I-SID are able to participate in the same VSN. Routing within a single VRF in the network occurs normally, as one would expect. Routing between VRFs is possible by using redistribution policies and injecting routes from another protocol, e.g., BGP, even if BGP is not used within the target VRF.

Figure 12. L3 Virtual Service Networks

Layer 3 Virtual Service Networks provide a high level of flexibility in network design by allowing IP routing functionality to be distributed among multiple switches without proliferation of multiple router-to-router transit subnets.

 

SPB Native IP Shortcuts

The services described to this point require the establishment of Virtual Service Networks and their associated I-SID identifiers. IP Shortcuts add further flexibility by supporting IP routing across the SPB backbone without the configuration of L2 VSNs or L3 VSNs.

 

Figure 13. Native IP GRT Shortcuts

IP shortcuts allow routing between VLANs in the global routing table/network routing engine (GRT). No I-SID configuration is used.

Although operating at Layer 2, IS-IS is a dynamic routing protocol. As such, it supports route redistribution between itself and any IP route types present in the BEB switch's routing table. This includes local (direct) IP routes and static routes, as well as IP routes learned through any dynamic routing protocol, including RIP, OSPF and BGP.

IP routing is enabled on the BEB switches, and route redistribution is enabled to redistribute these routes into IS-IS.  This provides normal IP forwarding between BEB sites over the IS-IS backbone.
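A toy sketch of that redistribution step: the BEB examines its routing table and injects prefixes from permitted source protocols into IS-IS, so that remote BEBs can reach them across the backbone. The table contents and the redistribution policy below are purely illustrative.

```python
# A BEB routing table: prefix -> (source protocol, next hop). Illustrative only.
rib = {
    "10.1.0.0/24":   ("local",  "direct"),
    "10.2.0.0/16":   ("ospf",   "10.1.0.254"),
    "172.16.0.0/20": ("static", "10.1.0.1"),
    "192.0.2.0/24":  ("bgp",    "10.1.0.253"),
}

# Example redistribution policy: inject everything except BGP-learned routes.
REDISTRIBUTE = {"local", "static", "ospf"}

def routes_for_isis(routing_table: dict) -> list[str]:
    """Prefixes this BEB advertises into IS-IS for IP Shortcuts."""
    return [prefix
            for prefix, (proto, _next_hop) in routing_table.items()
            if proto in REDISTRIBUTE]

print(routes_for_isis(rib))
# ['10.1.0.0/24', '10.2.0.0/16', '172.16.0.0/20']
```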

 

 BGP-Based IP VPN and IP VPN Lite over Shortest Path Bridging

Avaya’s implementation of Shortest Path Bridging has the flexibility to support not only the L2 and L3 VSN capabilities and IP routing capabilities as described above, but also supports additional IP VPN types.  BGP-Based IP VPN over SPB and IP VPN Lite over SPB are features supported in the Avaya implementation of Shortest Path Bridging. 

Figure 14. BGP IP VPN over IS-IS

BGP IP VPNs are used in situations where it is necessary to leak routes into IS-IS from a number of different VRF sources. Additionally, using BGP IP VPN support over SPB, it is possible to provide hub and spoke configurations by manipulating the import and export Route Target (RT) values. This allows, for example, a server farm in a central site to have connectivity to all spokes, but no connectivity between the spoke sites. BGP configuration is only required on the BEB sites; the backbone switches have no knowledge of any Layer 3 VPN IP addresses or routes.

 

Resilient Edge Connectivity with Switch Clustering Support

As earlier described, the boundary between the MAC-in-MAC SPB domain and 802.1Q domain is handled by the Backbone Edge Bridges (BEBs). At the BEBs, VLANs are mapped into I-SIDs based on the local service provisioning.

Figure 15. Resilient edge switch cluster

Redundant connectivity between the VLAN domain and the SPB infrastructure is achieved by operating two SPB switches in Switch Clustering (SMLT) mode. This allows dual homing of any traditional link aggregation capable device into an SPB network.

Switch Clustering provides the ability to dual home any edge device that supports standards-based 802.3ad (LACP) link aggregation, Avaya's MLT link aggregation, EtherChannel or any similar link aggregation method. With Switch Clustering, the capability is provided to fully load balance all VLANs across the multiple links to the switch cluster pair. If either link as depicted fails, all traffic will instantly fail over to the remaining link. Although two links are depicted, Switch Clustering supports LAGs of up to 8 ports for additional resiliency and bandwidth flexibility.

 

Quality of Service, Traffic Policing and Shaping Support

Quality of Service (QoS) is maintained in an SPB network the same way as in any IEEE 802.1Q based network. Traffic ingressing an SPB domain that is either already 802.1p marked (within the C-MAC header), or is being marked by an ingress policy (remarking), has its B-MAC header p-bits marked to the appropriate value.

Figure 16. QoS & Policing over SPB

The traffic in the SPB core is scheduled, prioritized and forwarded according to the 802.1p values in the outer backbone packet header. Where traffic is being routed at any of the SPB nodes, the IP Differentiated Services (DSCP) values are taken into account as well.
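The marking behavior described above can be sketched as a small mapping function: trust an existing customer 802.1p value when present, otherwise derive a class from DSCP, otherwise fall back to best effort. The DSCP-to-p-bit table below reflects common conventions, not any vendor's actual defaults.

```python
# Illustrative DSCP -> 802.1p class mapping applied at the ingress BEB.
DSCP_TO_PBIT = {46: 6, 34: 4, 26: 3, 0: 0}   # EF, AF41, AF31, best effort

def mark_backbone_pbit(c_pbit, dscp):
    """Choose the p-bit value for the outer (B-MAC) header.

    Trust the customer 802.1p marking when present; otherwise derive the
    class from DSCP; otherwise fall back to best effort.
    """
    if c_pbit is not None:
        return c_pbit                     # already 802.1p-marked traffic
    if dscp is not None:
        return DSCP_TO_PBIT.get(dscp, 0)  # remark from the IP header
    return 0

print(mark_backbone_pbit(None, 46))  # voice (EF) -> p-bit 6
print(mark_backbone_pbit(3, None))   # pre-marked traffic carried through
```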

The number of I-SIDs available in an SPBm domain is virtually limitless: the 24-bit I-SID field provides roughly 16 million service identifiers. Additionally, this technology can be effectively extended over many forms of transport such as dark or dim optics, CWDM or DWDM, MPLS L2 pseudo-wires, ATM and others. This means that it can effectively cover vast geographies in its native form and provide all of the virtualization benefits wherever it reaches.

Where required, however, an SPBm domain can effectively interface to a traditional routed WAN by the use of standard interior and border gateway protocols.

Provider Type Service Offerings and Larger Regional Topologies

In instances where larger geographic coverage is desired to leverage IEEE 802.1aq and its inherent provisioned core approach, the traditional mesh topology has difficulty scaling due to the costs of optical infrastructure and points of presence. In these instances, ring based topologies make the most sense. IEEE 802.1aq can not only support ring topologies, but can also support various interesting iterations such as dual core rings or the more esoteric 3D torus topology, which is intended to support very high core port densities.

The next section of this document will discuss the various ring topology options as well as their use in combination. The diagram below illustrates the basic components of the dual core ring. There are two basic assumptions in the design. First, the core ring topology is populated with only Backbone Core Bridges (BCBs). This optimizes one of the key traits of Shortest Path Bridging, the separation of core and edge, and the result is a design of immense scale from a services perspective. Second, all provisioned service paths are applied at the edge in the Backbone Edge Bridges (BEBs), which provide the interface to the customer edge.

Figure 17. Basic Dual Core components

As we look below at a complete topology, we can see that a very efficient design emerges, one that uses minimal node and fiber counts and effectively leverages shortest paths across the topology. Each BEB is dual homed back into the ring fabric by SPB trunks. As such, there are multiple options for dual homing the BEB node back into the ring topology.

Figure 18.  A Basic Dual Core Ring

An additional level of differentiation can be provided by the use of a dual homed active/active mesh service edge. In this type of edge, shown below, two BEBs are trunked together with active/active Inter-Switch Trunks. These two switches then provide a clustered edge that interoperates with any industry standard dual homing trunk method such as MLT or LAG. The end result is a very high level of mesh resiliency directly down to the customer service edge.

Figure 19. Dual Homed Active/Active Mesh Edge

The diagram below shows a dual core ring design that implements various forms of dual homed resiliency. These can range from simple dual homing of the BEB to a very highly resilient inter-area active/active edge design that can provide sub-second failover into the provider cloud. Again, this supports industry standard methods for active/active dual homing of the Ethernet service edge.

 Figure 20. Dual Core Ring with various methods of dual homed resiliency

More complex topologies can be designed when higher densities of backbone core ports are required. The topology below illustrates a 3D torus design that links together triad nodal areas to build a very highly resilient and dense core port capacity ring.

Figure 21. 3D Torus Ring

As the diagram below shows, the basic construct of the 3D torus is fairly simple and is comprised of only six core nodes. The dotted lines show optional SPB trunks to provide enhanced shortest path meshing. With these optional trunks every node is directly connected for shortest path forwarding.

Figure 22. 3D Torus Section

These sections can be linked together to build a complete torus as shown above, or used in a hybrid fashion to scale core port densities up or down as required by the subscriber population. The illustration below shows a hybrid ring topology that scales according to population and subscriber density requirements.

Figure 23. Hybrid Ring Topology

As this section illustrates, IEEE 802.1aq is an excellent technology for regional and metropolitan area networks. It allows for scalability and reach as well as a great degree of flexibility in supported topologies. Moreover, these different degrees of scale can be accomplished in the same network without any degree of sacrifice to the overall resiliency of the whole.

Provisioned Virtual Service Networks

As mentioned earlier, IEEE 802.1aq offers several methods of service connectivity across the SPB cloud. In the context of a service offering, however, the use of I-SIDs takes on a different focus. Rather than the departmental or organizational focus used in the earlier examples, here we are concerned with shared service offerings and services separation. As an example, in the area of voice service offerings, a service may be shared, in that it is much like the PSTN, only over IP. In contrast, a virtual PBX service might be offered to a private company that would expect that service to be dedicated. The figure below shows how IEEE 802.1aq can easily provide the dedicated service paths for both modes of service offering. The PSTN service I-SID is shown in green, while the private virtual PBX service I-SID is shown in red.

Figure 24.  Shared vs. Dedicated Services

 

In a typical deployment, an offering of services might be as follows –

Private Sector – Voice/Shared – Video/Shared – Data/Shared

Business – Voice/Private – Video/Shared – Data/Private

These are of course general profiles and can be customized to any degree. The diagram below shows how the use of IEEE 802.1aq I-SIDs allows for the support of both service models with no conflict. Note that the private sector shares a common I-SID for video services with the business sector. Also note that the business sector profile allows for the use of a dedicated virtual PBX service that is private to that business.

Figure 25.  Voice and Video I-SIDs across SPB

Figure 26.  Multiple ‘Service Separated’ data service paths across SPB

The illustration above highlights the data networking services. Note that the private sector is using a shared I-SID (shown in green), much as is done today with DOCSIS type solutions. Note also that the business is using L3 I-SIDs with VRFs to build out a separate, private and dedicated IP topology over the IEEE 802.1aq managed offering. This creates separate and discrete data forwarding environments that are true 'ships in the night'. They have no ability to support end to end communications unless the routing topology explicitly allows it. As such, all of the traditional IT security frameworks such as firewalls and intrusion detection and prevention come into play and are used in a rather traditional fashion to protect key corporate resources. In the private residential space, endpoint anti-virus and protection applies, as is typical with ISPs today.

 

IP Version 6 Support

Introducing new technology is always a move into the unknown, and IPv6 is no different. While the technology has been under development for some time (over ten years), there has been no great impetus for large scale adoption. This is changing now that IANA/ARIN has announced that the last contiguous blocks of IPv4 addresses have been allocated. Now it is down to non-contiguous blocks and the recycling of address blocks. These efforts will not provide any significant extension to the availability of IPv4 addresses. With these events, many organizations are now actively investigating how IPv6 can be deployed in their networks.

 

This section is intended to provide an overview of a tested topology for the distribution of globally routable IPv6 addressing over Shortest Path Bridging (IEEE 802.1aq) environments using L2 VSNs and inter-VSN routing.

The high level results of the work demonstrate that an enterprise can effectively use SPB to provide for the overlay of a routed IPv6 infrastructure that is incongruent with the existing IPv4 topology. Furthermore, with IPv4 default gateways resident on the L2 VSNs, dual stack end stations can have full end to end hybrid connectivity without the use of L3 transition methods such as 6to4, ISATAP, or Teredo. This results in a clean and simple implementation that allows for the use of allocated, globally routable IPv6 addresses in a native fashion.

 

IPv6 in General –

 

IPv6 is the next generation form of IP addressing. Replacing IPv4, it is intended to provide a greatly enhanced address space as well as the end to end transparency that was becoming more and more difficult to maintain with the increasing use of Network Address Translation (NAT) in IPv4. NAT was created to provide for the use of 'private' IPv4 addressing within an organization, with a gateway device interfacing out to the public Internet. Even this technology, however, could not forestall the unavoidable event that occurred earlier this year: contiguous blocks of IPv4 addresses have run out.

Currently, there are address recycling efforts that will provide some reprieve, but in the imminent future even this effort will be exhausted.

These events have caused a recent surge of interest in IPv6. Many enterprises that had it on the back burner are now taking a new look at this technology and the requirements that need to be met for their organizations to deploy it. For the first-time investigator this can be a daunting task. Beyond the knowledge of IPv6 itself, one needs to learn all of the methods required to co-exist with an IPv4 network environment. This is a strict requirement because no one will forklift-upgrade their complete communications environment, and even if they could, there are issues of contact with the outside world that need to be addressed. The reason for this is that the IPv6 suite is NOT directly backwards compatible with IPv4. This complication has caused quite a bit of effort within the IETF to resolve. There are a number of RFCs, drafts, and deprecated drafts that cover a wide variety of translation and transition methods. Each has its own set of complications and security or resiliency issues that need to be dealt with. At the end of the day, most IT personnel walk away with a headache and wish for the good old days of just IPv4.

 

In the time since IPv6 was first introduced, different schools of thought evolved as to how this co-existence between IPv4 and IPv6 could be addressed. Network Address Translation – Protocol Translation (NAT-PT) came into vogue but has since faded into deprecation, as the approach largely proved to be intractable. Other methods have stayed and even become 'default'. As an example, all Microsoft operating systems running IPv6 support the 6to4, ISATAP and Teredo tunneling methods.

So it has become clear: one school has won out, and that school of thought is dual stack in the end stations with tunneling across the IPv4 network to tie IPv6 islands together. These methods work but, as I pointed out earlier, they all have complications and issues that need to be dealt with.

If one looks at the evolution long enough, though, something else becomes apparent. If you could provide the paths between IPv6 islands by Layer 2 methods, things like 6to4, ISATAP and Teredo would no longer be required. Furthermore, without these methods an enterprise is free to use formally allocated, globally routable address space. The only requirement for the dual stack hosts is that they have clear default routes for both IPv6 and IPv4. With typical VLAN based networks, however, this design, while feasible, does not scale and quickly becomes intractable due to the complications of tagged trunk design within the network core. With the advent of Shortest Path Bridging (IEEE 802.1aq), this scalable Layer 2 method is now available. The rest of this solution guide will describe the test bed environment and then discuss the ramifications that this work has on larger network infrastructures.

 

The IPv6 over SPB Example Topology –

 

The figure below shows the minimal requirements for a successful hybrid IPv6 deployment over Shortest Path Bridging. As can be seen, the requirements are fairly concise and simple. You require an SPB Virtual Service Network, which is then associated with edge VLANs. These VLANs will host the dual stack end stations.

Additionally, this VSN will need to attach to the IPv6 and IPv4 default gateways. Again, this occurs by the use of edge VLANs that interface to the relevant devices.

 

Figure 27. Required elements for a native hybrid IPv6 deployment over SPB

 

So, as one can see, the requirements are straightforward and easy to understand. We implemented the following topology in a lab to demonstrate the proposed configuration.

The diagram below illustrates this topology in a simplified form for clarity. 

 Figure 28. Native IPv6 Dual Stack over L2 VSN Test bed

 

In the test bed we implemented a common VSN to support the IPv6 deployment. This was for simplicity only; more complicated IPv6 routed topologies can easily be achieved by using inter-VSN routing, as illustrated in examples later in this brief. In the lab we created VLAN ID 500 at three different key points at the edge of the SPB domain. A Virtual Service Network was created within the SPB domain (also using 500 as its identifier) that ties the different VLANs together. At one edge VLAN, a Windows 7 end station running dual stack had the IPv4 address 10.40.99.2 and the IPv6 address 3000::2. For IPv4 the end station's default gateway was 10.40.99.1, and for IPv6 the default gateway was 3000::1. The IPv6 default gateway is also attached to VLAN 500 and is able to provide directly routable paths in and out of the VSN. Additionally, the IPv4 default gateway is attached and reachable as well. The dual stack end station enjoys end to end hybrid connectivity to both IPv6 and IPv4 environments without the use of any L3 transition method. In the topology shown in figure 29, we show that from the dual stack end station's perspective there is complete hybrid connectivity with available routed paths to both IPv4 and IPv6 environments. Because formally allocated global addressing is used, there is connectivity out into INET2 to native IPv6 resources.

Figure 29. The dual stack end station's perspective on default routed paths
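As a small sanity check of the lab addressing described above, the snippet below uses Python's ipaddress module to confirm that each end station address is on-link with its corresponding default gateway. The /24 and /64 prefix lengths are assumptions, being the conventional choices; the addresses are taken from the test bed description.

```python
import ipaddress

# Lab addressing from the test bed; /24 and /64 prefix lengths are assumed.
v4_net = ipaddress.ip_network("10.40.99.0/24")
v6_net = ipaddress.ip_network("3000::/64")

host_v4 = ipaddress.ip_address("10.40.99.2")
gw_v4   = ipaddress.ip_address("10.40.99.1")
host_v6 = ipaddress.ip_address("3000::2")
gw_v6   = ipaddress.ip_address("3000::1")

# Both gateways must be on-link within the same L2 VSN (VLAN/I-SID 500)
# for the dual stack host to reach them without any transition mechanism.
assert host_v4 in v4_net and gw_v4 in v4_net
assert host_v6 in v6_net and gw_v6 in v6_net
print("dual stack host and both default gateways share the L2 VSN")
```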

 

The ramifications on larger IPv6 deployments

 

One of the major drawbacks of L3 transition methods for IPv6 is that they bind the IPv6 topology to IPv4. Many find this undesirable; after all, why implement a new globally routed protocol and then lock it down to an existing legacy topology? As a result, it was realized very early on that if you could run IPv6 as 'ships in the night' alongside IPv4, it would be a very good solution. The problem was that the only methods to accomplish this were the use of VLANs with tagged trunks, or routed overlays. As a result, while the test bed shown in figure 28 was feasible and provable, the approach quickly suffers from complexity in larger topologies and does not lend itself well to scale.

With Shortest Path Bridging these issues are vastly simplified, making this approach tractable on an enterprise scale. The reason for this is that the IPv6 deployment becomes an overlay L3 environment that rides on top of SPB. As such, there is no need to make detailed configuration changes to the network core to deploy it. The original 'ships in the night' vision can now be realized in real world designs.

 

The diagram below shows a large network topology that interconnects two data centers. The topology in blue shows the IPv6 native dual stack deployment. The topology in green shows the IPv4 legacy routed environment. Note that while there are common touch points between the two environments for legacy dual stack IPv4 use, the two IP topologies are quite independent of one another.

Figure 30. Totally Independent IP topologies

 

 

In Summary –

 

This document has provided a review of active/active mesh network topologies and the significant benefits that they bring to an overall network design. With networking speeds now at 10 Gb/s and above, it is no longer sufficient to have very high speed, expensive switch ports sitting in a totally passive state waiting for a network failure. It is also no longer sufficient to tolerate failover times in the range of seconds, or even tenths or hundredths of seconds; the amount of data loss and the performance impacts are just too serious. Active/active mesh networking addresses this by providing multiple load sharing paths across the network topology. Additionally, due to the active nature of the trunking method, SMLT can very easily provide failovers in the sub-second range. As a note, recent testing of Avaya's 3rd generation of SMLT reliably shows failovers in the range of 6 ms, which is practically instantaneous from the perspective of the overall network. This failover speed is unrivaled in the industry and is a testament to Avaya's dedication to this technology space.

Additionally, newer active/active mesh technologies are being introduced, such as IEEE 802.1aq Shortest Path Bridging, a key foundational component of Avaya's VENA framework, that promise to take active/active mesh network topologies into a new era of scale and flexibility never before realized with switched Ethernet topologies. The provisioned virtual network capability of VENA allows for one touch provisioning of the network service paths with zero touch requirements on the transport core. This innovation not only vastly simplifies administration and reduces configuration errors; it can also provide dramatic improvements in IT OPEX costs, in that changes that would normally take hours are brought down to minutes, with an exponential reduction in the probability of error.

In addition, this paper has shown that this new addition to active mesh networking is totally compatible and complementary with older active/active mesh switched Ethernet topologies such as SMLT. The result of the combination is a flexible core meshing technology that allows for almost unlimited permutations of topologies, and a very highly resilient dual homed edge with sub-second failover.

Another more mundane but equally important aspect of Avaya's SPBm offering is that existing Ethernet Routing Switch 8600 systems can easily be migrated to it. The result of this upgrade is the equivalent of an Ethernet Routing Switch 8800, which can participate in an SPBm domain as either a Backbone Edge Bridge (BEB) or a Backbone Core Bridge (BCB), including all of the service modes detailed earlier in this article. This means that an existing ERS 8600 customer can implement the technology without the need for a forklift upgrade.

Even when considering networks with alternative vendors, Avaya's SPBm VENA framework – due to its strict compliance with IEEE 802.1aq and other IEEE standards – allows for the seamless introduction of SPBm into the network as a core distribution technology with minimal disruption to the network edge. Additionally, network edges that are Spanning Tree based today because of core networking limitations can then move to the active/active dual homing model described earlier by the use of LAG or MLT at the edge, both of which are widely supported throughout the industry.

The end result is a technology that brings immense value.  It is easy to implement in both new and existing networks, and migration can be virtually seamless.

Could it be that the days of spanning tree have finally passed?

I would like to extend both credit and thanks to my esteemed Avaya colleagues, Steve Emert and John Vant Erve, for their input and the use of facilities for solution validation.


One Response to “Next Generation Mesh Networks”

  1. Steve Emert Says:

    “Could it be that the days of spanning tree have finally passed?”

    I certainly would like to think so. Although the industry has worked very hard to modernize Spanning Tree Protocol (STP) through the use of proprietary extensions and the introduction of MSTP and RSTP, the fact remains that it is a very old protocol designed in the days of slow Ethernet bridges and low computing power.

Avaya and its ancestor companies in networking products have been in the business of "Spanning Tree avoidance" for ten years. For the majority of those years, it was accomplished instead by manually provisioning Split Multi-Link Trunking switch clustering.

    With Shortest Path Bridging, we now have a very simple to implement, very scalable, very easy to manage and troubleshoot dynamic protocol for avoiding Spanning Tree. And best of all, it is already a ratified IEEE standard with several years of live network installation experience behind it (through the use of the metro Ethernet PLSB layer 2 analogue to SPB).

    I think the days of Spanning Tree indeed have finally passed!
