OpenStack bare-metal server integration using Contrail SDN in a multi-vendor DC fabric environment

Introduction:

In this blog we are going to show a hands-on lab demonstration of how the Juniper Contrail SDN controller allows the cloud administrator to seamlessly integrate bare-metal servers into a virtualized network environment that may already contain virtual machine workloads. The accompanying video shows the steps required to configure the various features of this lab.

The solution uses the standards-based OVSDB protocol to program the Ethernet ports, VLANs and MAC table entries on the switches. Any switch that supports OVSDB can therefore be used as the top-of-rack (TOR) switch in the Contrail BMS solution; we have already demonstrated one such deployment with Cumulus Linux switches in this video.

Specifically, in this blog we are going to use a Juniper QFX5100 and an Arista 7050 as the two top-of-rack switches.

Setup:

The figure below shows the physical topology, which consists of an all-in-one Contrail controller and compute node (IP=10.84.30.34) on which we are going to spin up the VMs. The second server (IP=10.84.30.33) acts as the ToR Services Node (TSN): it runs the TOR agents (OVSDB clients) for the two TOR switches and also provides network services such as ARP, DHCP and DNS to the bare-metal servers. The vRouter forwarder agent running on the TSN translates the OVSDB message exchanges with the TOR switches into XMPP messages that are advertised to the Control node.

The two switches, QFX5100 (IP=10.84.30.44) and Arista 7050 (IP=10.84.30.7.38), each have one bare-metal server connected on a 1G interface as shown in the figure. In the case of Arista, the OVSDB session is established not with the TOR itself but with the CloudVision VM; CloudVision in turn pushes the OVSDB state into the TOR switch using Arista’s CloudVision protocol. 10.84.0.0/16 is the DC fabric underlay subnet. The loopback IP address of the QFX5100 is 10.84.63.144, and that of the Arista switch is 10.84.63.189.

Finally, the MX SDN gateway (10.84.63.133) is also connected to the DC fabric. The MX gateway provides two main functions: (1) public access to the VMs and bare-metal servers, and (2) inter-VN connectivity between the bare-metal servers.

[Figure: Contrail integration with Arista TORs_blog_image1 – physical topology]

 


Provisioning & Control Plane:

The TOR switches can be added to the cluster during provisioning of the cluster itself, or added to an existing cluster using fab tasks. The TSN and TOR agent roles are configured under the ‘tsn’ and ‘toragent’ entries in the env.roledefs section of testbed.py. All the TOR switch related parameters, such as the TOR switch loopback address (the VXLAN VTEP source), the OVSDB transport type (PSSL vs. TCP) and the OVSDB TCP port, are defined in the env.tor_agent section of testbed.py. An illustrative sketch of these two sections is shown below; please refer to the GitHub link in the References section for the complete file.
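As a rough illustration only, the relevant testbed.py sections look roughly like the sketch below. The IP addresses are the ones used in this lab, while the host names, the OVSDB port 6632, the agent ID and the vendor string are placeholder assumptions; the authoritative key names and values are in the sample testbed.py in the GitHub repo referenced below.

    # testbed.py (excerpt) - illustrative sketch, not the full file
    from fabric.api import env

    host_aio = 'root@10.84.30.34'   # all-in-one controller + compute node
    host_tsn = 'root@10.84.30.33'   # TSN / TOR-agent node

    env.roledefs = {
        'all':      [host_aio, host_tsn],
        'cfgm':     [host_aio],
        'control':  [host_aio],
        'compute':  [host_aio],
        'tsn':      [host_tsn],      # ToR Services Node role
        'toragent': [host_tsn],      # runs the OVSDB clients (TOR agents)
        'build':    [host_aio],
    }

    # One entry per TOR switch handled by this TSN.
    env.tor_agent = {
        host_tsn: [
            {
                'tor_name': 'qfx5100',            # switch hostname (assumed)
                'tor_ip': '10.84.30.44',          # switch management IP
                'tor_tunnel_ip': '10.84.63.144',  # loopback / VXLAN VTEP source
                'tor_tsn_ip': '10.84.30.33',      # TSN serving this TOR
                'tor_type': 'ovs',                # OVSDB-managed TOR
                'tor_ovs_protocol': 'tcp',        # or 'pssl'
                'tor_ovs_port': '6632',           # OVSDB port (assumed)
                'tor_agent_id': '1',
                'tor_vendor_name': 'Juniper',
            },
            # ... a second, similar entry points at the Arista CloudVision VM
        ],
    }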

Once the cluster is provisioned along with the TOR switches, the only remaining step is to apply the required OVSDB, VXLAN and related configuration on the TOR switches. With everything configured correctly, the control-plane protocols come up between the various nodes, the TOR switches and the MX gateway. The configurations for the TOR switches and the MX gateway are provided via the GitHub link in the References section below.

The TSN has OVSDB sessions to the QFX5100 and to the Arista CloudVision VM, and an XMPP session with the Control node. The Control node and the MX gateway exchange BGP routes over an iBGP session.

Data Plane:

As one can see in the video, we are going to create two virtual networks: RED (IP subnet 172.16.1.0/24) and BLUE (IP subnet 172.16.2.0/24). The QFX5100 ge-0/0/16 logical interface is added to the RED VN and the Arista switch Et3 interface is added to the BLUE VN. In addition, on the all-in-one Contrail node we are going to create one VM each in the RED and BLUE VNs.
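In the video these steps are performed through the Contrail web UI. For readers who prefer to script them, a minimal sketch using the Contrail vnc_api Python bindings might look roughly like the following; the credentials and API-server address are assumptions for this lab, and the exact calls should be checked against the vnc_api version in use.

    # Create the RED and BLUE virtual networks on the Contrail API server.
    from vnc_api.vnc_api import (VncApi, VirtualNetwork, VnSubnetsType,
                                 IpamSubnetType, SubnetType)

    # Placeholder credentials; api_server_host is the all-in-one node.
    client = VncApi(username='admin', password='<password>',
                    tenant_name='admin', api_server_host='10.84.30.34')

    project = client.project_read(fq_name=['default-domain', 'admin'])
    ipam = client.network_ipam_read(
        fq_name=['default-domain', 'default-project', 'default-network-ipam'])

    for name, prefix in [('RED', '172.16.1.0'), ('BLUE', '172.16.2.0')]:
        vn = VirtualNetwork(name, parent_obj=project)
        # Attach the /24 subnet to the VN through the default IPAM.
        vn.add_network_ipam(ipam, VnSubnetsType(
            [IpamSubnetType(subnet=SubnetType(prefix, 24))]))
        client.virtual_network_create(vn)

The bare-metal ports (QFX5100 ge-0/0/16 and Arista Et3) are then attached to these VNs from the Contrail web UI, as shown in the video.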

The resulting data-plane interactions are as shown in fig 3. The RED and BLUE VTEPs are created on the QFX5100 and Arista 7050 switches via OVSDB. Intra-VN communication between the bare-metal server and the virtual machine in the RED and BLUE VNs is carried over the VXLAN tunnels that are set up between the TOR switches and the compute node (the all-in-one node, 10.84.30.34, in this case). Broadcast Ethernet frames for protocols such as ARP and DHCP are directed towards the TSN node over VXLAN tunnels established between the TOR switches and the TSN node. All inter-VN traffic originated by a bare-metal server and destined to another bare-metal server or a VM in a different VN (RED to BLUE or vice versa) is sent to the EVPN instances for the respective VNs on the MX SDN gateway, where routing takes place between the RED and BLUE VRFs. The VRF and EVPN (virtual-switch routing-instance) configuration on the MX gateway is automated using the Contrail Device Manager feature.

In addition, the all-in-one compute node has VXLAN and MPLSoGRE tunnels to the MX gateway for L2 and L3 stretch of the virtual networks.

[Figure: Contrail integration with Arista TORs_blog_image3 – data-plane interactions]

 

[Figure: Contrail integration with Arista TORs_blog_image4]

The resulting logical tenant virtual network connectivity is shown below.

[Figure: Contrail integration with Arista TORs_blog_image5 – logical tenant virtual network connectivity]

References:

Sample testbed.py file and the MX gateway, QFX5100, Arista switch and CloudVision configurations:
https://github.com/vshenoy83/Contrail-BMS-Configs.git