ACI and L4-7 Integration

I’ll start this post by stating that this is entirely an opinion piece. It’s certainly based on some real-life experience, but it doesn’t necessarily reflect the opinion/position of anyone but myself.

Since its launch, one of the key features touted (especially by marketing!) has been ACI’s integration with the greater layer 4-7 services ecosystem. That’s basically a fluffy way to say that ACI is intended to integrate with L4-7 devices such as firewalls, IPS, and load balancers/ADCs. That’s all well and good; of course any data center switch/platform would need to integrate with an A10/F5/etc. ACI, however, takes that to a new level with native integration directly into the APIC platform. What does this actually mean, and do we care? Let’s first talk about the three ways we can integrate these third-party devices into the fabric.

  1. “Traditional”

Perhaps this should be called the “legacy” method, but we’ll stick with “traditional” as it sounds classy. Simply put, we treat the fabric like we would treat any 5k/7k/Arista/Juniper/blah: we run a trunk (or a vPC) up to the device, and on top of that trunk/link we put a bunch of VLANs and/or run a routing protocol. For ADCs this generally means one big vPC and a bunch of VLANs – the ADC then has an IP (or IPs) in whatever subnets are associated with those VLANs (EPGs). For firewalls this is generally a Layer 3 Out (a routing construct in ACI) – we route to/from the firewall using a different VRF on each zone/nameif of the firewall to keep traffic isolated, and we reach the adjacent zones via routes learned from (or static routes pointing to) the firewall. Pretty straightforward stuff, much like we do in traditional networks, just with fancy ACI words!
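
To make that a bit more concrete, here’s a minimal sketch of the fabric-side work via the APIC REST API: log in, then add a static path binding so an EPG gets trunked down the vPC facing the ADC on a given VLAN. The tenant/app/EPG names, the vPC path, and the VLAN are all hypothetical placeholders – adjust to whatever your fabric actually looks like.

```python
# Minimal sketch: bind an existing EPG to the vPC facing an ADC on VLAN 210.
# Tenant/AP/EPG names, the vPC path, and the VLAN are placeholder values.
import requests

APIC = "https://apic.example.com"
session = requests.Session()

# Log in; the APIC returns a session cookie that requests.Session carries forward
session.post(f"{APIC}/api/aaaLogin.json", json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}
}, verify=False)

# Static path binding: trunk EPG "web" down the vPC toward the ADC, encap VLAN 210
epg_dn = "uni/tn-Prod/ap-ThreeTier/epg-web"
payload = {
    "fvRsPathAtt": {
        "attributes": {
            "tDn": "topology/pod-1/protpaths-101-102/pathep-[ADC_vPC]",
            "encap": "vlan-210",
        }
    }
}
resp = session.post(f"{APIC}/api/mo/{epg_dn}.json", json=payload, verify=False)
print(resp.status_code, resp.text)
```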

  2. Device Packages - Managed Service Graphs

This is/was the grand idea around L4-7 services in ACI. A manufacturer (Cisco, F5, A10, Palo Alto, etc.) creates what is called a device package. This device package is really just a script that tells the APIC what the device is capable of doing, and how the APIC can tell the device what needs to happen. The device is then used in a service graph. The service graph is basically an outline of your service chain – how you want traffic to flow through the fabric and the third-party devices. Let’s say you have a typical three-tier web app – you may wish to have a load balancer sit in front of the web tier, and a firewall sit between the web tier and the app tier. You would basically (and I’m going to oversimplify this quite a bit, since this post is not about the technical how of device packages/service graphs) tell the fabric where you want the services inserted and define some parameters about the service graph. You could pick, say, a “one-armed” setting on the load balancer and define the VIP pool and real server IPs, and then on the firewall you would push some security policy outlining the particular flow. The APIC then makes API calls out to the third-party devices to configure everything you just modeled (because it knows how to interact with these devices via the device package). More on this in a bit…
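
Everything the APIC learns from a device package ends up as objects in its management information tree, so you can at least poke at what the fabric currently knows about. Here’s a rough sketch of a class query for that; the vnsMDev (device package) and vnsLDevVip (logical device) class names are my assumption of the relevant objects, so treat this as illustrative rather than gospel.

```python
# Sketch: list device packages and L4-7 logical devices known to the APIC.
# The class names (vnsMDev, vnsLDevVip) are assumptions - verify on your fabric.
import requests

APIC = "https://apic.example.com"
s = requests.Session()
s.post(f"{APIC}/api/aaaLogin.json", json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}
}, verify=False)

for cls in ("vnsMDev", "vnsLDevVip"):
    resp = s.get(f"{APIC}/api/node/class/{cls}.json", verify=False)
    for mo in resp.json().get("imdata", []):
        attrs = list(mo.values())[0]["attributes"]
        print(cls, attrs.get("name"), attrs.get("dn"))
```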

  3. Unmanaged Service Graphs

This is the latest, and perhaps greatest(?), flavor of L4-7 integration with ACI. As of the 1.2 code train we have the option of an “unmanaged service graph.” This is basically the same as the managed service graph, except the APIC will NOT configure any of the third-party devices. You still define them and model the flow in the controller; however, you must manually configure the load balancer or firewall or whatever. I think the spirit here is that we are still modeling the application, the flow, and the dependencies in ACI, but shedding some of the complexities involved with having the APIC configure third-party devices.
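
For reference, the “unmanaged” part essentially boils down to telling the APIC to model the device but never push configuration to it. A rough sketch is below, assuming the logical device is created as a vnsLDevVip object with a managed flag (my reading of how the 1.2 train exposes this); the tenant and device names are placeholders, and the F5/firewall itself would still be configured by hand.

```python
# Sketch: create an "unmanaged" L4-7 logical device in tenant Prod.
# The vnsLDevVip class and its managed="no" attribute are assumptions based on
# how the 1.2 train appears to expose unmanaged devices; adjust to your fabric.
import requests

APIC = "https://apic.example.com"
s = requests.Session()
s.post(f"{APIC}/api/aaaLogin.json", json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}
}, verify=False)

payload = {
    "vnsLDevVip": {
        "attributes": {
            "name": "ADC-Unmanaged",
            "managed": "no",       # APIC models the device but never configures it
            "devtype": "PHYSICAL",
        }
    }
}
resp = s.post(f"{APIC}/api/mo/uni/tn-Prod.json", json=payload, verify=False)
print(resp.status_code, resp.text)
```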

Okay, so that’s the super quick and dirty rundown of the ways we can integrate load balancers, firewalls, and the like into the ACI fabric. Now, let’s talk a bit about the pros and cons of each.

  1. “Traditional”
  • Pros
    • Not much changes, this is the same way we’ve been doing things for years.
    • It’s relatively simple to configure.
    • Easier for teams who are siloed (which is bad!).
    • Better from a CMP perspective (more on this later).
  • Cons
    • Not much changes, this is the same way we’ve been doing things for years.
    • Not very dynamic/programmatic.
  2. Device Packages - Managed Service Graphs
  • Pros
    • Automated provisioning of stuff – do things in the APIC then don’t worry about the firewall/ADC/etc.
    • Visual documentation of what’s going on – the APIC is the central source of truth for the application, and it’s basically self-documenting.
    • Easier to do complex things – like inserting a transparent device into the traffic flow. This could be done the traditional way, but it would take a lot of steps; the service graph is intended to minimize the effort there.
  • Cons
    • Vendor support:
      • It’s up to the third-party vendor to build and maintain the device package, and it’s unclear what happens with code upgrades and supportability. The jury is still out on this, but it worries me.
      • Limited exposure to the APIC – not all of a device’s features are exposed in its device package. What happens if you want to play with widget X but it’s not in the device package? You’re back to manual configuration, no good.
    • Hard for siloed teams since they’ve got to give up control of their firewall/ADC to the network team (or join the network team I suppose).
  3. Unmanaged Service Graphs
  • Pros
    • Visual documentation of what’s going on – the APIC is the central source of truth for the application, and it’s basically self-documenting.
    • Less complicated, with fewer dependencies than managed service graphs.
    • Easier for teams who are siloed (which is bad!).
    • Better from a CMP perspective (more on this later).
  • Cons
    • Not very dynamic/programmatic.

Okay, so that’s my rundown. I think it’s probably pretty clear where I stand at this point, but just in case it’s not, let me explain.

Historically I’ve been a strong advocate for option 1 above. Device packages are dependent upon the third-party vendor – meaning Cisco does not write or maintain them. That’s given me (personal opinion) some pause: what happens when I upgrade my L4-7 device, or upgrade the fabric – will there be any interoperability issues? (This is relatively unlikely, but I think about the possibility.) Depending on the vendor, the device package may or may not be very robust; F5, for example, has done the best job I’ve seen at making a fully featured device package, but that’s not necessarily representative of others. So lost functionality – and ultimately having to manually administer the L4-7 device to complete whatever task is at hand – is a potential pitfall. Finally, being perfectly honest, device package deployment and configuration is not the most straightforward thing; at the very least, connecting a device to ACI and manually configuring an EPG is simpler.

Now that we have the unmanaged mode, that’s quickly becoming the preferred method. With this option we still get visibility into where L4-7 devices sit on the fabric, and how and where they interact with EPGs. With unmanaged mode you still configure the service chain much as you would in managed mode, but gone are the complexities/pitfalls of having the fabric configure the third-party device.

In my opinion the bigger question here isn’t really whether you can manage L4-7 devices via ACI – but whether you should. Most of the time, ACI is just one piece of the overall automation/orchestration push. Generally speaking, at some point a cloud management platform (CMP) of some sort will come into play – this could be UCS-D, CliQr, or any other similar platform. Once this platform is in the environment it generally becomes the central source of automation/orchestration, often used in conjunction with some type of service catalog (ServiceNow is common). The CMP makes API calls to configure the various components required to execute a job or to build some environment/platform as permitted by the service catalog. Likely you’ll have API calls to the compute layer, the storage layer, and of course to ACI for the network and security (contracts) components. At this point the L4-7 bits come back into play – generally there will be something that has to be done on the F5 (for example) when deploying a new application via the CMP. In our example the CMP has already made API calls to three different platforms (compute, storage, network/security), and could easily be configured to make a fourth to the F5. If we are managing the F5 via ACI, then we must make an API call to the APIC from the CMP, which in turn kicks off another API call from the APIC to the F5. Not only is this an unnecessary layer of abstraction, it’s also potentially difficult to handle and creates more work. Ideally the CMP could simply make an API call (perhaps natively supported) to the L4-7 device and cut out the middle man of ACI.
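
To illustrate the “cut out the middle man” point, here’s a rough sketch of a single CMP workflow step that talks to the APIC for the network/contract piece and directly to the F5’s iControl REST API for the VIP, rather than proxying the F5 work through the APIC. Hostnames, credentials, contract and object names are all hypothetical, and the F5 pool is assumed to already exist.

```python
# Sketch: one CMP deployment step making direct calls to both controllers.
# APIC call: consume a contract on the web EPG.  F5 call: create the VIP.
# All names/addresses are placeholders; the F5 pool is assumed to already exist.
import requests

# --- ACI: consume an existing contract on the web EPG ---
apic = requests.Session()
apic.post("https://apic.example.com/api/aaaLogin.json", json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}
}, verify=False)
apic.post(
    "https://apic.example.com/api/mo/uni/tn-Prod/ap-ThreeTier/epg-web.json",
    json={"fvRsCons": {"attributes": {"tnVzBrCPName": "web-to-app"}}},
    verify=False,
)

# --- F5: create the virtual server via iControl REST, no APIC in the path ---
bigip = requests.Session()
bigip.auth = ("admin", "password")
bigip.post(
    "https://bigip.example.com/mgmt/tm/ltm/virtual",
    json={
        "name": "vs_web_80",
        "destination": "10.1.10.100:80",
        "pool": "pool_web",
        "profiles": [{"name": "http"}],
    },
    verify=False,
)
```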

Don’t get me wrong, ACI can absolutely handle the device package/managed mode, and it could of course be used in conjunction with a CMP; I just personally think it’s more work and complexity than it needs to be. So, in short, my $0.02 is to go with the “Traditional” or Unmanaged Service Graph modes.