
Load Balancer as a Service In Contrail

January 6, 2015

Note: This blog is co-authored by Aniket Daptari from Juniper Networks Contrail team and Foucault De Bonneval, Product owner of SDN at CloudWatt, France.

Introduction:

“Load Balancing” is a very commonly deployed function in virtualized data centers and public/private clouds. A Load Balancer manages incoming traffic by distributing workloads across multiple servers and resources, automatically or on demand. In addition, Load Balancers detect unhealthy instances and send traffic only to the healthy ones. Several vendors today make Load Balancers in both physical and virtualized form factors, and each implementation comes with a different feature set and configuration model. However, there is a core set of features that most Load Balancers provide and that most users of Load Balancers rely on. OpenStack Neutron proposed LBaaS as an advanced Neutron service, which allows a single set of APIs to be used to leverage load balancing functionality provided by a multitude of vendors. In short, this allows operators to use a common interface and move seamlessly between different load balancing technologies. It also alleviates the pain of having to become familiar with the nitty-gritty and specifics of each Load Balancer implementation.

LBaaS use in a sovereign Public Cloud:

The French Cloud Service Provider Cloudwatt used OpenContrail to deploy a scalable sovereign Public Cloud in France, and delivered a live presentation during the recently concluded OpenStack Summit in Paris. Cloudwatt's designs leverage the LBaaS functionality in particular, and they discuss it during their presentation. Please watch them talk about it here: Scalable SDN in Public Cloud.

In short, the goal behind Cloudwatt’s plans to use LBaaS is to make it seamless for Cloudwatt customers to use load balancing without having to manually configure the involved components. This drastically simplifies the customer’s life, as they no longer need to know the minutiae of configuring Load balancers. Complex designs can be implemented with “internal-only” or “hybrid internal-external” LBaaS.

Understand: Customers do not need to know the minutiae of the underlying load balancing components.
Configure: Manual configuration is error-prone. LBaaS eliminates the tedious and cumbersome manual configuration of Keepalived and HAProxy.
Operate: By virtue of being API driven, Load Balancer deployment and operation is programmatic and automated. Responding to failures, whether in the infrastructure or in the application, becomes similarly automated (see the sketch below).
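Because the whole lifecycle is API driven, the same objects can also be driven without the CLI at all. As a rough, non-Contrail-specific illustration, assuming a valid Keystone token in $TOKEN and Neutron listening on its default port 9696 at <neutron-api-ip> (a placeholder), the LBaaS v1 resources can be listed with plain REST calls:

List the LBaaS pools and VIPs known to Neutron
curl -s -H "X-Auth-Token: $TOKEN" http://<neutron-api-ip>:9696/v2.0/lb/pools
curl -s -H "X-Auth-Token: $TOKEN" http://<neutron-api-ip>:9696/v2.0/lb/vips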

OpenContrail implementation:

The release 1.20 LBaaS implementation in OpenContrail supports the following:

  1. Full-proxy L7 load balancing of HTTP/HTTPS/TCP traffic to a pool of backend servers.
  2. Health monitoring of the pool members using HTTP, TCP and PING.
  3. Association of a Floating IP to the virtual IP.
  4. Native resiliency with an active/standby failover mode.

Release 1.20 of OpenContrail adds support for using HAProxy as a backend to the LBaaS APIs; other backends will be added in future releases.

The OpenContrail plugin supports the Neutron LBaaS API and creates the relevant virtual-IP, pool, member and health-monitor objects. When a pool is associated with a virtual-IP, the plugin creates a service instance. The service scheduler then instantiates a Linux network namespace on a randomly selected compute node and spawns the active HAProxy instance in that namespace; it similarly instantiates a namespace on a different compute node for the standby HAProxy instance. The properties of the Load Balancer object are passed to HAProxy as command-line parameters. Each VIP-pool pair thus results in two HAProxy instances running as an active/standby pair in two separate namespaces on two different compute nodes, which is how High Availability of the Load Balancer is provided. Both the active and standby instances are configured identically, and the switchover mechanism is based on routing priority managed inside the overlay.

In our implementation, the Load Balancer proxies all connections to the VIP, functioning as a full-proxy Layer 7 Load Balancer (as opposed to an L4 Load Balancer). A full proxy treats the client and server as separate entities by implementing dual network stacks. In L7 load balancing, specific information within the requests can be used to steer each request to the appropriate destination server endpoint.
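To make the "full proxy, L7" distinction concrete, below is a hand-written HAProxy fragment, not one generated by Contrail, showing how a proxy that terminates the client connection can choose a backend based on request content, here the URL path. The frontend and backend names are purely illustrative:

frontend www-in
   bind 20.1.1.1:80
   mode http
   acl is_api path_beg /api
   use_backend api-pool if is_api
   default_backend web-pool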

Stay tuned to this space for new capabilities in this area in future Contrail releases.

SSL Termination:

Modern Load Balancers also offer SSL termination. To understand SSL termination, consider a virtualized web application hosted in a Contrail-managed cluster. The load balancer that manages traffic to this web application hosts the VIP for the application, and clients initiate HTTPS connections to that VIP. Since the VIP is hosted on the load balancer, the load balancer has the option to terminate the SSL connection and initiate a plain HTTP connection to the web servers in the load balancer pool. Terminating SSL at the load balancer allows for centralized certificate management, offloads SSL processing from the web servers, allows the HTTP traffic to be inspected by DPI engines, and frees the web servers to focus on serving HTTP requests. For SSL termination, the SSL certificates need to be installed on all compute nodes and HAProxy needs to be made aware of the location of the certificates.
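For reference, SSL termination in HAProxy (version 1.5 or later) is expressed on the frontend bind line. The fragment below is only a sketch of the idea, not the configuration Contrail generates; <certificate-path> is the same placeholder used in the SSL workflow later in this post, and traffic to the pool members then travels as plain HTTP:

frontend https-in
   bind 20.1.1.1:443 ssl crt <certificate-path>
   mode http
   default_backend web-pool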

Typical Workflow:

The typical workflow is as follows:

  1. Create a pool, empty at first
  2. Add members to the pool
  3. Create a health monitor
  4. Associate health monitor with the pool
  5. Create Load Balancer object
  6. Listener (Unsupported)
  7. Add TLS certificate and key (Optional)
  8. Create a VIP
  9. Associate pool with VIP

Example:

Figure 1: LBaaS_Contrail_Image1

Figure 2: LBaaS_Contrail_Image2

In Figure 1 above, a cluster of compute nodes is managed by the Contrail controller. The Load Balancer and the virtual machines housing the application instances all run on compute nodes in this cluster.

On the far right is a pool whose members are instances of an application. Traffic to this application needs to be managed by distributing and balancing the workload across the various instances of the application. The pool members have endpoint IP addresses belonging to the pool subnet, which sits behind an active/standby pair of Load Balancer instances.

The Load Balancer is instantiated with a virtual IP (VIP) of 20.1.1.1. The application virtual machines are associated with a pool subnet of 30.1.1.0/24 and obtain individual IP addresses from that subnet. When a client sends a request to the application by directing traffic to the virtual IP, the Load Balancer proxies the TCP connection on its virtual IP: it terminates the incoming connection from the client and initiates a new one with one of the members of the pool. The member is picked based on one of the following schemes, as configured by the administrator (the corresponding HAProxy directives are sketched after the list):

Round Robin: Each pool member is used in turn, according to its weight. Weights may be adjusted on the fly, making this a dynamic scheme.
Least connection: The pool member with the lowest number of connections receives the new connection. This is well suited for protocols with long-lasting sessions.
Source IP: The source IP address is hashed and divided by the total weight of the running servers to select the pool member to serve the new request.
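In HAProxy terms these schemes correspond to the balance directive. The mapping below is the natural one; apart from the roundrobin case visible in Appendix C, the exact directives the Contrail backend emits are not shown in this post:

   balance roundrobin   # ROUND_ROBIN
   balance leastconn    # LEAST_CONNECTIONS
   balance source       # SOURCE_IP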

Further, the Load Balancer is responsible for ensuring that only healthy application instances are part of the pool. To this end, the Load Balancer monitors the health of the pool members via one of the following probe schemes:

TCP: The Load Balancer initiates TCP connections to the pool members for health checks.
HTTP: The Load Balancer issues HTTP requests after establishing a TCP connection with the pool members.
PING: The Load Balancer sends ICMP echo requests to the pool members for health checks.
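The probe type is selected via the --type argument of the health-monitor commands shown later. The TCP and PING variants below follow the same pattern as the HTTP example in the workflow section; the --url-path and --expected-codes options apply to HTTP monitors only:

neutron lb-healthmonitor-create --delay 20 --timeout 10 --max-retries 3 --type TCP
neutron lb-healthmonitor-create --delay 20 --timeout 10 --max-retries 3 --type PING
neutron lb-healthmonitor-create --delay 20 --timeout 10 --max-retries 3 --type HTTP --url-path / --expected-codes 200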

To instantiate the setup shown above, the relevant LBaaS APIs have to be invoked in the appropriate sequence.

Watch the video demonstration of the LBaaS functionality.

1 Create Load Balancer

Create VIP network
neutron net-create vipnet
neutron subnet-create --name vipsubnet vipnet 20.1.1.0/24

Create pool network
neutron net-create poolnet
neutron subnet-create --name poolsubnet poolnet 10.1.1.0/24

Create a pool for HTTP
neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP \
--subnet-id poolsubnet

Add members to the pool
neutron lb-member-create --address 10.1.1.2 --protocol-port 80 mypool
neutron lb-member-create --address 10.1.1.3 --protocol-port 80 mypool

Create VIP for HTTP and associate to pool
neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP \
--subnet-id vipsubnet mypool

Associate the VIP to a floating IP
neutron floatingip-create Public
neutron floatingip-associate 66faf8de-54c5-4f52-8b65-84e5752653a3 a3527b7c-89c0-4f92-9315-2bd9ca5bcd32
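The two UUIDs passed to floatingip-associate above are, in that order, the floating IP and the VIP's Neutron port. Assuming the VIP was created with the name myvip as above, both can be looked up as follows (the port_id field of the VIP is the port to associate):

neutron floatingip-list
neutron lb-vip-show myvip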


2 Delete Load Balancer

Delete vip
neutron lb-vip-delete <vip-uuid> 

Delete members from the pool
neutron lb-member-delete <member-uuid>

Delete pool
neutron lb-pool-delete <pool-uuid>

3 Associate and disassociate healthmonitor

Create healthmonitor
neutron lb-healthmonitor-create --delay 20 --timeout 10 --max-retries 3 \
--type HTTP

Associate healthmonitor
neutron lb-healthmonitor-associate <healthmonitor-uuid> mypool

Disassociate healthmonitor
neutron lb-healthmonitor-disassociate <healthmonitor-uuid> mypool

Delete healthmonitor
neutron lb-healthmonitor-delete <healthmonitor-uuid>


4 Configure SSL VIP with HTTP backend pool

Copy certificate to all compute nodes
scp ssl_certificate.pem <compute-node-ip>:<certificate-path>

Update /etc/contrail/contrail-vrouter-agent.conf
# SSL certificate path haproxy
haproxy_ssl_cert_path=<certificate-path>

Restart contrail-vrouter-agent
service contrail-vrouter-agent restart

Create VIP for port 443 (SSL)
neutron lb-vip-create --name myvip --protocol-port 443 --protocol HTTP --subnet-id vipsubnet mypool
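A quick sanity check of the SSL VIP, assuming a floating IP has been associated as in section 1 (the -k flag is needed if the certificate is self-signed); <floating-ip> is a placeholder:

curl -vk https://<floating-ip>/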

Note:

Compute Nodes where the HAProxy instances will be spawned are chosen at random. It is therefore necessary for all compute nodes to have the HAProxy binaries. Juniper Contrail’s provisioning scripts will take care of installing the HAProxy binaries in all the compute nodes in question.
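If in doubt, the presence and version of the HAProxy binary can be verified on each compute node:

which haproxy && haproxy -v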

References:

http://www.haproxy.org/
https://wiki.openstack.org/wiki/Neutron/LBaaS/Glossary
http://www.businesscloudnews.com/2014/07/29/cloudwatt-deploys-open-source-sdn-controller/

Appendix:

Appendix A: Service Template for the LB (HAProxy) Service
LBaaS_Contrail_Image3

Appendix B: Details of LB (HAProxy) Service Instance

http://<Controller-IP>:8081/analytics/uves/service-instance/default-domain:demo:769c9864-3745-493d-a3f0-2790d9e585b6?flat

{
  UveSvcInstanceConfig: {
    status: "CREATE",
    vm_list: [
      {
        ha: "active: 200",
        uuid: "6cffa4c6-1b3a-45be-b9af-5dbe23d52b9f",
        vr_name: "compute-node-2"
      },
      {
        ha: "standby: 100",
        uuid: "4bf23df5-cf91-4710-bd9a-0823707fb616",
        vr_name: "compute-node-3"
      }
    ],
    create_ts: 1413336206921338,
    st_name: "default-domain:haproxy-loadbalancer-template"
  }
}

The UUIDs listed under “vm_list” are the UUIDs associated with the network namespaces.

If you issue “ip netns list”, you will see a combination of the UUIDs associated with the service instance and the namespace:

root@single-node-253:~# ip netns list
vrouter-6cffa4c6-1b3a-45be-b9af-5dbe23d52b9f:769c9864-3745-493d-a3f0-2790d9e585b6
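Since the generated configuration path embeds the service-instance UUID (see Appendix C), the corresponding HAProxy process and the interfaces inside the namespace can be inspected on the compute node, for example:

ps aux | grep 769c9864-3745-493d-a3f0-2790d9e585b6
ip netns exec vrouter-6cffa4c6-1b3a-45be-b9af-5dbe23d52b9f:769c9864-3745-493d-a3f0-2790d9e585b6 ip addr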

Appendix C: Generated HAProxy Configuration

“/var/lib/contrail/loadbalancer/769c9864-3745-493d-a3f0-2790d9e585b6/etc/haproxy/haproxy.cfg”

global
   daemon
   user nobody
   group nogroup
defaults
   log global
   retries 3
   option redispatch
   timeout connect 5000
   timeout client 50000
   timeout server 50000
listen contrail-config-stats :5937
   mode http
   stats enable
   stats uri /
   stats auth haproxy:contrail123
frontend 10f8c207-065a-4a65-90bb-6d482d681709
   bind 20.1.1.2:80
   mode http
   default_backend 769c9864-3745-493d-a3f0-2790d9e585b6
backend 769c9864-3745-493d-a3f0-2790d9e585b6
   mode http
   balance roundrobin
   server 29aebe4e-89c6-4924-927f-49003f3796b9 10.1.1.3:80 weight 1
   server 7f450ab8-2669-4d62-a881-d4712d8713a2 10.1.1.2:80 weight 1
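The "listen contrail-config-stats" section of the generated configuration also exposes HAProxy's statistics page on port 5937 with the credentials shown above. Assuming curl is available on the compute node, and since HAProxy runs inside the instance's network namespace, the stats page can be queried from within that namespace, for example:

ip netns exec vrouter-6cffa4c6-1b3a-45be-b9af-5dbe23d52b9f:769c9864-3745-493d-a3f0-2790d9e585b6 curl -u haproxy:contrail123 http://127.0.0.1:5937/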