Nick Schmidt

Originally published at blog.engyak.net

Unearned Uptime - Present and Future Design Patterns

After all that meatspace talk, let's look at a few technical solutions and why they might not meet business needs in a specific setting.

Shared Control Planes / Shared Failure Plane

Shared Control Plane design patterns are prolific within the networking industry - and there's a continuum. Generally, a control plane shared between devices should be designed with reliability in mind, but most shared control plane implementations prioritize "ease of administration" over reliability. Here are some common examples.

Stacking

"Stacking" implementations represent an early industry pattern where (typically) campus deployments weren't entirely large enough to justify a chassis switch but still wanted enough lateral bandwidth to eliminate a worry point. Primary motivations for "stacking" were:

  • Single Point of Administration
  • Linear scale-out costs

Stacking was an artifact from a time when management software like Ansible, Cisco DNA, ArubaOS-CX/NetEdit, etc. didn't exist within the industry. Significant downsides exist with stacking software, including:

  • Tight coupling with software - upgrades often mean a total outage or a many-step ISSU path
  • Software problems take the whole stack down
  • Stacking cables are expensive and proprietary

Stacking is still a viable, reasonably good technology for small to medium campus networks. One particular approach I have found interesting is Aruba's Spine and Leaf design, which leverages Aruba's mobility tunnel features to handle anything that needs to keep its IP address.

MC-LAG

Multi-Chassis Link Aggregation (MC-LAG) is a pretty contentious issue within the industry.

Note: In Service Provider applications, Layer 2 Loop Prevention is a foundational design pattern for delivering Metro Ethernet services, providing a loop-free path to a single endpoint. I'm not covering that design pattern here, as it's a completely different subject. In this case, I'm illustrating Data Center/Private Cloud network design patterns, and only tangentially Campus.

MC-LAG as a design pattern isn't all that bad compared to some alternatives - however, certain applications of MC-LAG in the data center turn out to be fairly problematic.

Modern Data Center Fabric Switching

Given the rise of Hyper-Converged Infrastructure, we're actually seeing data center hardware get used. Prior to this last generation (2012 onwards), just "being 10 Gig" was good enough for most use cases. Commodity server hardware wasn't powerful enough to really tax oversubscribed fabric switches.

...or was it? Anybody remember liking Cisco FEXes? TRILL? 802.1BR?
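
To put "oversubscribed" in rough numbers, here's a hedged back-of-envelope sketch. The port counts and speeds are illustrative assumptions about a generic 10G top-of-rack switch, not figures for any specific platform:

```python
# Back-of-envelope oversubscription math for a hypothetical top-of-rack switch.
# Port counts and speeds below are illustrative assumptions only.

downlinks = 48        # server-facing ports
downlink_gbps = 10    # 10 GbE to each server
uplinks = 4           # fabric-facing ports
uplink_gbps = 40      # 40 GbE uplinks

southbound = downlinks * downlink_gbps   # 480 Gbit/s of potential server demand
northbound = uplinks * uplink_gbps       # 160 Gbit/s toward the fabric

print(f"Oversubscription ratio: {southbound / northbound:.1f}:1")  # 3.0:1

# Lightly loaded virtualization hosts never pushed anywhere near line rate,
# so 3:1 was "good enough" - until HCI put storage replication on those same ports.
```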

Storage Area Networks (SANs) offloaded compute storage traffic in many applications, and basically constituted an out-of-band fabric capable of 8-32 Gbit/s.

The main problem here is Ethernet. Ethernet's Layer 2 forwarding isn't really capable of non-blocking redundant forwarding, because there is no routing protocol - redundant links simply get blocked. Fibre Channel uses FSPF (Fabric Shortest Path First, a link-state protocol comparable to IS-IS or OSPF) for this purpose, and hosts participate in path selection by multipathing across fabrics.
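
To make the forwarding difference concrete, here's a minimal sketch over a toy two-spine, three-leaf topology of my own invention (not from the post). A link-state shortest-path computation sees every equal-cost leaf-to-leaf path, while classic spanning tree on the same wiring would block links until only one path remained:

```python
from collections import deque
from itertools import combinations

# Toy 2-spine / 3-leaf fabric (illustrative assumption, not a real design).
# A link-state protocol (FSPF, IS-IS, OSPF) computes shortest paths over this
# graph and can use all of them; classic spanning tree blocks links until a
# single loop-free path remains.
links = {
    "leaf1": {"spine1", "spine2"},
    "leaf2": {"spine1", "spine2"},
    "leaf3": {"spine1", "spine2"},
    "spine1": {"leaf1", "leaf2", "leaf3"},
    "spine2": {"leaf1", "leaf2", "leaf3"},
}

def equal_cost_paths(src, dst):
    """Count shortest (equal-cost) paths between two nodes with a BFS."""
    dist, paths = {src: 0}, {src: 1}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        for neighbor in links[node]:
            if neighbor not in dist:                 # first time reaching it
                dist[neighbor] = dist[node] + 1
                paths[neighbor] = paths[node]
                queue.append(neighbor)
            elif dist[neighbor] == dist[node] + 1:   # another equal-cost path
                paths[neighbor] += paths[node]
    return paths.get(dst, 0)

for a, b in combinations(["leaf1", "leaf2", "leaf3"], 2):
    print(f"{a} -> {b}: {equal_cost_paths(a, b)} equal-cost paths")

# A routed fabric forwards over both paths (ECMP); spanning tree on the same
# wiring would leave one uplink per leaf blocked and idle.
```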

The biggest difference this makes: Fibre Channel can run two completely independent fabrics, devoid of interconnection. This allows an entire fabric to go completely offline without impacting storage access.

MC-LAG goes in a completely different direction - forcing redundant Ethernet switches to share a failure plane. In the data center, the eventual goal for this design pattern is to move toward that "share-nothing" approach, with EGP or IGP participation by every subtending device in the fabric.
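
As a rough illustration of why the share-nothing model is the goal, here's a hedged back-of-envelope comparison of the two failure models. The 1% failure probability is entirely made up for illustration, and it assumes the air-gapped fabrics really do fail independently:

```python
# Rough failure-domain comparison. The 1% figure is a made-up, illustrative
# probability of a control-plane meltdown during some maintenance window, and
# the two air-gapped fabrics are assumed to fail independently.

p_failure = 0.01

# MC-LAG pair: the shared control/failure plane means a single bad software
# event can take down both "redundant" switches at once.
p_outage_shared = p_failure

# Two fully independent fabrics (the Fibre Channel model): losing all
# connectivity requires both fabrics to fail in the same window.
p_outage_independent = p_failure ** 2

print(f"Shared failure plane : {p_outage_shared:.4%} chance of total outage")
print(f"Independent fabrics  : {p_outage_independent:.4%} chance of total outage")
# 1.0000% vs 0.0100% - the air gap buys two orders of magnitude before you
# even count the bugs that only exist because of the MC-LAG coupling itself.
```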

Now - most hypervisors don't have that capability today. Cumulus does have a host routing implementation, but the most common hypervisors have yet to adopt this approach. VMware, Amazon, Microsoft, and Cumulus all contribute to a common routing code base (FRRouting) and use it to varying extents within their networks to keep this "Layer 2 Absenteeism" from becoming a workload problem. Of these, VMware's NSX-T is probably the most widely deployed option if you're not a hyperscaler that can develop your own hypervisor/NOS combination like Amazon or Microsoft: https://nsx.techzone.vmware.com/
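
To give a sense of what host participation in fabric routing looks like, here's a minimal sketch that assumes a hypervisor running FRRouting locally and peering BGP with both top-of-rack switches. The vtysh command is standard FRR CLI; the JSON field names (ipv4Unicast, peers, state) are how I recall recent FRR releases structuring the output, so verify them against your version:

```python
import json
import subprocess

# Sketch: ask the host's local FRR instance whether its BGP sessions to the
# top-of-rack switches are up. Assumes FRR is installed on the hypervisor and
# that "show ip bgp summary json" returns ipv4Unicast -> peers -> <peer> ->
# state, as in recent FRR releases (verify on your version).

def bgp_peer_states():
    result = subprocess.run(
        ["vtysh", "-c", "show ip bgp summary json"],
        capture_output=True, text=True, check=True,
    )
    summary = json.loads(result.stdout)
    peers = summary.get("ipv4Unicast", {}).get("peers", {})
    return {name: info.get("state", "unknown") for name, info in peers.items()}

if __name__ == "__main__":
    for peer, state in bgp_peer_states().items():
        flag = "OK" if state == "Established" else "DEGRADED"
        print(f"{peer}: {state} [{flag}]")
```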

Closing Notes

Like it or not, these examples are perfectly viable design patterns when used properly. Given industry trends and some crippling deficiencies with giant-scale Ethernet topologies in large data center and campus networks, we as network designers must keep an eye to the future and plan accordingly. In these examples, we examined tightly coupled design patterns (probably very familiar to some) used in commodity networks, and where they commonly fail.

If you use these design patterns in production - I would strongly recommend asking yourself the following questions:

  • What's the impact of a software upgrade, worst-case?
  • What happens if a loop is introduced?
  • What's the plan for removing that solution in a way that is not business invasive?
  • What if your end-users scale beyond the intended throughput/device count you anticipated when performing that design exercise?

Hopefully, this explains some of the why behind existing trends. We're moving toward a common goal - an automatable, reliable, vendor-independent fabric for interconnecting network devices using common protocols - and nearly all of the weirdness along the way can be placed at the networking industry's feet. We treat BGP as a "protocol of the elites" instead of teaching people how to use EGPs. We (the networking industry) need to do more work to become accessible to adjacent industries - they'll be needing us really soon, if they don't already.
