OpenContrail consists of two parts. One part is the virtual router (vRouter) which sits in the hypervisor of virtualized servers. The other part is a logically centralized SDN controller which provides north bound REST APIs for managing the network.
The fact that the OpenContrail controller is logically centralized simplifies network management a lot. Instead of having to manage lots of discrete devices, you have a single point of management. (Whilst OpenContrail is logically centralized it is actually physically distributed: it is implemented as a cluster of nodes for high availability and scale-out.)
But the real key advantage of OpenContrail is that it allows you to manage the network at a high level of abstraction. That is the topic of this blog post.
What does this rather academic sounding statement “management at a high level of abstraction” really mean?
Traditionally, when you want to deploy some complex scenario such as a Layer 3 Virtual Private Network (L3VPN) you have to configure lots and lots of stuff. You have to configure routing instances, route targets, route distinguishers, import and export policies, interfaces, BGP, RSVP, etc. etc. etc. All of these configuration statements are at a really low level of abstraction. Instead of telling the routers what it is you are trying to achieve, you are giving the routers an excruciatingly detailed description of how to achieve it. These configurations can become very large and complex; it is not uncommon to have hundreds or even thousands of configuration statements on each individual router. In fact, at large service providers I have seen routers with multiple hundreds of thousands of lines of configuration. I’m not exaggerating.
Not so with OpenContrail. The north-bound REST APIs provided by OpenContrail expose concepts at a much higher level of abstraction. These are APIs at the service layer instead of the technology layer. You instruct OpenContrail what to do, rather than how to do it.
Let’s give some concrete examples to illustrate this concept. Using a combination of OpenContrail and OpenStack REST API calls you can do things such as:
- Create virtual networks.
- Create tenant Virtual Machines (VMs) and attach them to virtual networks. Virtual machines connected to the same virtual network can communicate with each other.
- Create policies and apply them at the boundary of two virtual networks. This allows virtual machines on different virtual networks to communicate with each other subject to the rules and constraints expressed in the policy.
- Create service virtual machines, also known as Virtual Network Functions (VNFs), such as for example a virtual firewall. Policies can force traffic to be steered through one or a sequence of service virtual machines. This is referred to as service chaining.
- Connect virtual networks to physical networks or to bare metal servers using a gateway router or switch.
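To make the "what, not how" nature of these calls concrete, here is a small sketch of the kind of payloads a north-bound client might build. The endpoint path in the comment and all field names below are illustrative assumptions, not the exact OpenContrail API contract.

```python
# Sketch of north-bound service-layer requests. Payload shapes are invented
# for illustration; consult the actual API schema for real field names.

def make_virtual_network_payload(name, project="default-project"):
    """Build the JSON body for creating a virtual network."""
    return {
        "virtual-network": {
            "fq_name": ["default-domain", project, name],
            "parent_type": "project",
        }
    }

def make_policy_payload(name, src_vn, dst_vn, action="pass"):
    """Build the JSON body for a policy between two virtual networks."""
    return {
        "network-policy": {
            "fq_name": ["default-domain", "default-project", name],
            "entries": [{"src": src_vn, "dst": dst_vn, "action": action}],
        }
    }

vn = make_virtual_network_payload("red-network")
policy = make_policy_payload("red-to-blue", "red-network", "blue-network")
# A real client would POST these to the API server, e.g.:
#   requests.post("http://<api-server>:8082/virtual-networks", json=vn)
```

Note that nothing in these payloads mentions routing instances, route targets, or tunnels; the caller only describes the desired service.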
The following figure illustrates how the individual “Lego blocks” of virtual networks, virtual machines, policies, and gateways can be combined into some useful assembly.
Figure 1: Service Layer Abstraction
Two important observations on the north-bound REST APIs:
- All of these things can also be done through the Graphical User Interface (GUI) which is simply an application on top of the REST APIs.
- The upcoming release 1.03 of OpenContrail will provide REST APIs which are compatible with Amazon Web Services (AWS) Elastic Compute Cloud (EC2) and Virtual Private Cloud (VPC). This makes it easy to migrate workloads back and forth between a private cloud implemented with Contrail and a public cloud with AWS EC2 and VPC compatible APIs.
Now, what does OpenContrail actually do under the hood to implement these service layer abstractions? Most users will say, as Clark Gable famously said in Gone with the Wind: “Frankly, my dear, I don’t give a damn.” This is the beauty of OpenContrail. It allows users to manage their network at a high level of abstraction, using only concepts like virtual networks, virtual machines, policies and gateways. When we say manage, we don’t only mean configuration but also operational state and analytics. Most users neither care nor need to know how things actually work under the hood.
The nice thing about having a high level of abstraction is that it allows users to create virtual networks and to interconnect virtual networks with policies without needing a deep knowledge of networking. This is not just a nice benefit for operators – it is a crucial requirement for allowing cloud tenants to self-manage their own virtual networks.
That said, some small minority of people, e.g. the network operations team, does care and does need to know.
These abstractions can be implemented in multiple ways. For example, historically data centers have used VLANs to implement virtual networks. And in Research and Education (R&E) environments it is popular to use OpenFlow for what is typically called network slicing in those environments.
For various scaling and stability reasons (which are explained in the white paper “Proactive Overlay versus Reactive Hop-by-Hop – Juniper’s Motivations for the Contrail Architecture Explained.”) the industry as a whole is converging on using “proactive overlays” for network virtualization in large scale deployments. There is an excellent tutorial on overlay networking by Ivan Pepelnjak on YouTube. (Gratuitous plug: Ivan also runs the ipspace.net website where you can follow his blog posts and subscribe to all of his truly excellent webinars – at just under $200 per year it is an excellent value and highly recommended.)
In order to implement the service layer abstractions shown in figure 1 above, the OpenContrail SDN controller uses XMPP to communicate with virtual routers (vRouters) and uses BGP to communicate with physical gateway routers and switches. The OpenContrail SDN controller creates all the right routing instances in the right places, creates all the right overlay tunnels in the right places, puts all the right routes in the right forwarding tables, etc. etc. etc. to implement the required service layer abstractions. This is illustrated in the following figure.
Figure 2: Technology Layer Implementation
Even though this example only contains a handful of servers, virtual machines, and gateways, it is already starting to look like a bowl of spaghetti. Just imagine what this diagram would have looked like if we had thousands of servers and tens of thousands of virtual machines. It would be a nightmare to configure that manually.
But that’s exactly what we used to do before automation and SDN. Before SDN introduced logically centralized APIs at a high level of abstraction, we used to configure networks at this low level of abstraction. Now, with OpenContrail, all of this complexity gets automatically created under the hood. The only thing you need to do is to instantiate the service layer abstractions as shown in figure 1.
What’s the magic in OpenContrail that achieves this? How is OpenContrail able to figure out how to translate the high level abstractions into low level operations on the network?
OpenContrail uses a combination of formal data models and a transformation engine to accomplish this. This is illustrated in figure 3 below.
Figure 3: Data Models and Transformation Engine
OpenContrail contains a data model which describes the high level service layer abstractions. This data model contains objects such as virtual networks, virtual machines, and policies. The objects in the service data model can be created, modified, deleted, and queried using north bound REST APIs. In fact, the north bound REST APIs are automatically generated from this data model.
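Since the north-bound REST APIs are generated mechanically from the service data model, every object type automatically gets the same CRUD surface. Here is a toy sketch of that generation step; the route shapes are my own illustration, not OpenContrail's actual URL scheme.

```python
# Toy sketch: derive standard CRUD routes from the object types declared
# in the service data model. Route layout is illustrative only.

SERVICE_OBJECTS = ["virtual-network", "virtual-machine", "network-policy"]

def generate_routes(object_types):
    """Map each data-model object type to a uniform set of CRUD routes."""
    routes = {}
    for obj in object_types:
        routes[obj] = {
            "create": ("POST", f"/{obj}s"),
            "list":   ("GET", f"/{obj}s"),
            "read":   ("GET", f"/{obj}/<uuid>"),
            "update": ("PUT", f"/{obj}/<uuid>"),
            "delete": ("DELETE", f"/{obj}/<uuid>"),
        }
    return routes

routes = generate_routes(SERVICE_OBJECTS)
```

The point of the sketch: adding a new object type to the data model is all it takes to get a complete API for it, because the API is derived, not hand-written.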
OpenContrail also contains another data model which describes the low level technology implementation details. Here we have objects such as routing instances, route targets, etc.
Between the service data model and the technology data model sits a transformation engine.
The transformation engine is responsible for translating the service data model to the technology data model. When you invoke the north bound REST APIs to instantiate a virtual network object in the service data model, the transformation engine wakes up and figures out “Hmmm… so you say you want a virtual network. That means I need to create these routing instances over here, and those overlay tunnels over there, and I need to put these routes in those routing instances.” The transformation engine then instantiates objects in the technology data model to represent the existence of those low level objects.
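A single transformation rule can be sketched as a pure function from one service-layer object to the technology-layer objects it implies. The object fields and the route-target numbering below are invented for illustration; real rules live in the schema transformer.

```python
# Toy transformation rule: one virtual network (service model) becomes a
# routing instance plus a route target (technology model). All names and
# field layouts here are invented for illustration.

import itertools

_rt_counter = itertools.count(8000000)  # allocate unique route-target ids

def transform_virtual_network(vn_name):
    """Return the low-level objects implied by one virtual network."""
    target = f"target:64512:{next(_rt_counter)}"
    routing_instance = {
        "type": "routing-instance",
        "name": f"{vn_name}:ri",
        "route_targets": [target],
    }
    route_target = {"type": "route-target", "name": target}
    return [routing_instance, route_target]

objs = transform_virtual_network("red-network")
```

Keeping rules side-effect free like this is what lets the engine re-run them incrementally whenever the service model changes.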
At this point, nothing has actually happened yet in the network. The only thing we have done so far is that we have created a more detailed description of the desired state of the network.
The south bound protocols fill this last remaining gap. These south bound protocols listen for changes in the technology data model and are responsible for “making it so” in the network. There are multiple south bound protocols, each responsible for particular subsets of the technology data model. For example, the XMPP south bound protocol is responsible for populating routes in virtual routers whereas the BGP south bound protocol is responsible for populating routes in physical gateway routers and switches.
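The division of labor between south-bound protocols amounts to a dispatch on the kind of device a change targets. A minimal sketch, with an invented change-record shape:

```python
# Sketch of south-bound dispatch: vRouter changes go out via XMPP, physical
# gateway changes via BGP. The change-record fields are illustrative.

def protocol_for(change):
    """Pick the south-bound protocol based on the target device class."""
    kind = change["device"]["kind"]
    if kind == "vrouter":
        return "xmpp"
    if kind in ("gateway-router", "switch"):
        return "bgp"
    raise ValueError(f"no south-bound protocol for device kind {kind!r}")

change = {"device": {"kind": "vrouter"}, "op": "add-route"}
protocol_for(change)  # → "xmpp"
```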
The above description is somewhat idealized and simplified. In reality we can have a hierarchy of multiple layers of abstractions. Each time something changes in the top layer of abstraction it percolates down the layers until it reaches the bottom of the hierarchy at which point the south bound protocols push it into the network.
OpenContrail uses a publish subscribe (“pubsub”) IF-MAP message bus to choreograph the sequence of events. Changes in the service data model generate events. The transformation engine subscribes to these events and executes transformation rules when these events occur. Those transformation rules make changes in the technology data model, which also generates events. Each south bound protocol subscribes to events for particular subsets of the technology data model. When those events occur, the relevant south bound protocol (e.g. XMPP or BGP) is woken up and it sends a message to the relevant network device to implement the change.
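The event choreography described above can be captured in a few lines: a bus, a transformer subscribed to service-model topics, and a south-bound driver subscribed to technology-model topics. This is a minimal in-process sketch in the spirit of the IF-MAP bus, not its actual protocol.

```python
# Minimal pubsub sketch of the event flow: a service-model change triggers
# the transformer, whose output triggers a south-bound driver. Topic names
# and event shapes are invented for illustration.

from collections import defaultdict

class Bus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subs[topic]:
            handler(event)

bus = Bus()
pushed = []  # stands in for messages sent to network devices

# South-bound driver: reacts to technology-model changes.
bus.subscribe("tech.routing-instance", pushed.append)

# Transformer: reacts to service-model changes, emits technology events.
def on_virtual_network(event):
    bus.publish("tech.routing-instance", {"name": event["name"] + ":ri"})

bus.subscribe("service.virtual-network", on_virtual_network)

# One north-bound change ripples down through both layers.
bus.publish("service.virtual-network", {"name": "red-network"})
# pushed now holds [{"name": "red-network:ri"}]
```

Because each handler only sees its own topics, publishers and subscribers can be moved onto separate nodes without changing any of them, which is exactly the scale-out property the text describes.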
The fact that OpenContrail uses a pubsub message bus is one of the reasons why it can massively scale out. Publishers and subscribers can be distributed across multiple nodes which communicate events with each other using the message bus.
OpenContrail uses the term “SDN as a Compiler” to describe this architecture. You can think of the service data model as a high level programming language (e.g. Java or Scala). You can think of the technology data model as a low level programming language (e.g. bytecode or assembly). You can think of the transformation engine as a compiler which is responsible for “compiling” the service data model into the technology data model. It’s a very fancy compiler though – it is an event-driven incremental Just In Time (JIT) compiler.
Up until now we have described everything in terms of configuration. However, something similar happens in the reverse direction for operational state and analytics. The south bound protocols are responsible for collecting operational state and analytics events from the network. The transformation engine is responsible for correlating and aggregating these low level states and events into more meaningful information at the service layer.
For example, the Contrail virtual routers (vRouters) generate analytics events for every individual flow in the network. The analytics nodes in the Contrail SDN controller contain collectors which store all of these events in a horizontally scalable distributed database. They also contain a query engine which allows you to ask service layer questions such as “what was the total amount of traffic from virtual network A to virtual network B between 9am and 10am this morning?”
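That service-layer question is, underneath, an aggregation over raw flow records. A toy sketch, with invented record fields and hour-fraction timestamps standing in for real clock times:

```python
# Toy version of a service-layer analytics query over raw flow records.
# Record fields and timestamps are invented for illustration.

flows = [
    {"src_vn": "A", "dst_vn": "B", "ts": 9.2, "bytes": 1000},
    {"src_vn": "A", "dst_vn": "B", "ts": 9.8, "bytes": 2500},
    {"src_vn": "A", "dst_vn": "C", "ts": 9.5, "bytes": 9999},
    {"src_vn": "A", "dst_vn": "B", "ts": 10.4, "bytes": 4000},  # outside window
]

def total_traffic(flows, src_vn, dst_vn, t_start, t_end):
    """Total bytes from src_vn to dst_vn in the window [t_start, t_end)."""
    return sum(f["bytes"] for f in flows
               if f["src_vn"] == src_vn and f["dst_vn"] == dst_vn
               and t_start <= f["ts"] < t_end)

total_traffic(flows, "A", "B", 9, 10)  # → 3500
```

The real query engine runs this kind of reduction across a distributed database rather than an in-memory list, but the shape of the question is the same.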
For some use cases it even makes sense for the transformation engine to take not just configuration state as input but also operational state. This creates feedback loops as shown in figure 3 above.
For example, in a traffic engineering use case we combine the bandwidth demand matrix (high level configuration state), the administrative constraints (high level configuration state), the current topology of the network (high level operational state), and the current amount of traffic on the network (high level operational state) to compute a globally optimal set of paths e.g. LSPs (low level configuration state). Those LSPs are instantiated in the network using a south bound protocol (e.g. PCEP). Other south bound protocols are responsible for collecting the operational state (e.g. BGP-TE for topology discovery and netflow for traffic measurement).
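Stripped to its essentials, that computation places each demand on a path whose links all have enough residual capacity. The toy below uses a greedy first-fit over precomputed candidate paths; a real TE engine would run a global optimization, and the topology and demands here are invented.

```python
# Toy path placement: for each demand, pick the first candidate path whose
# links all have sufficient residual capacity. Greedy and illustrative only;
# a real traffic engineering engine optimizes globally.

def place_demands(capacity, demands):
    """capacity: link -> available bandwidth.
    demands: list of (name, bandwidth, candidate paths), each path a
    list of links. Returns (placement, residual capacity)."""
    residual = dict(capacity)
    placement = {}
    for name, bandwidth, candidates in demands:
        for path in candidates:
            if all(residual[link] >= bandwidth for link in path):
                for link in path:
                    residual[link] -= bandwidth
                placement[name] = path
                break
    return placement, residual

capacity = {"a-b": 10, "b-c": 10, "a-c": 5}
demands = [
    ("lsp1", 8, [["a-c"], ["a-b", "b-c"]]),  # direct link too small
    ("lsp2", 4, [["a-c"], ["a-b", "b-c"]]),  # fits on the direct link
]
placement, residual = place_demands(capacity, demands)
# placement → {"lsp1": ["a-b", "b-c"], "lsp2": ["a-c"]}
```

In the feedback-loop picture, `capacity` would be refreshed from operational state (BGP-TE topology, netflow measurements) before each recomputation, and the resulting paths pushed down as LSPs via PCEP.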
The current use cases implemented in Contrail don’t involve such feedback loops yet. When those use cases are introduced we will get into some interesting control theory and stability questions. This would be a great area for academic research.
One of the key points to take away from all of this is that OpenContrail is not just a point product to solve a particular set of specific use cases such as network virtualization and service chaining. OpenContrail is actually a massively scalable framework for dynamic network management and control.
We actively encourage the open source community to extend OpenContrail for other additional use cases by extending the high level data model with new types of services, by extending the low level data model with new types of technologies, by implementing new south bound protocols to push those new technology objects into the network, and by introducing new rules in the transformation engine.
Here are some pointers into the OpenContrail code, available in the Juniper/contrail-controller GitHub repository, to get you started.
The src/schema directory contains all the data models, both the high level service data models and the low level technology data models.
OpenContrail currently uses an XML-based data modeling language which is based on IF-MAP. The data models are stored in XML Schema Definition (XSD) files which contain additional annotations in the form of structured comments (referred to as IFMAP-SEMANTICS-IDL). The syntax and the semantics of those structured comments is described at the top of the file vnc_cfg.xsd.
As an example, here is an excerpt from the file vnc_cfg.xsd which defines the virtual-machine object and the virtual-machine-interface object, and the relationship between them.
Figure 4: Example Data Model
An earlier blog post by Pedro Marques titled “Adding a BGP knob to OpenContrail” describes how to add a new element to the data model.
The transformation engine rules are implemented as Python scripts in directory src/config/schema-transformer.
Each of the south bound protocols is stored in its own directory, for example directory src/xmpp for XMPP and directory src/bgp for BGP. The main function for the controller node in file src/control-node/main.cc instantiates the south bound protocols.
Phew! You’ve made it to the end of this very long blog post. Hopefully you’ve learned something about the importance of having the right level of abstraction (namely a high level of abstraction) in the north bound interface provided by the SDN controller. This isolates the applications running on the network from the vendor specific implementation details. Using the concepts of transformation engines and “SDN as a Compiler” is not just elegant; it turns the SDN controller into a resilient, horizontally scalable, general purpose extensible platform for many current and future use cases.