Kube-O-Contrail – get your hands dirty with Kubernetes and OpenContrail

This blog is co-authored by Sanju Abraham and Aniket Daptari from Juniper Networks.

The OpenContrail team participated in the recently concluded KubeCon 2015, the inaugural conference for the Kubernetes ecosystem. At the conference, we ran a hands-on workshop for attendees.

In the past we have demonstrated the integration of OpenContrail with OpenStack, CloudStack, VMware vCenter, IBM Cloud Orchestrator and other orchestrators. With the growing acceptance of containers as the compute vehicle of choice for deploying modern applications, the OpenContrail team extended its overlay virtual networking to containers as well.

As Kubernetes came along, groups of containers deployed together to implement a logical piece of functionality began to be managed as Pods, and sets of related Pods were exposed as Services.

The OpenContrail team extended these same networking primitives and constructs to Kubernetes entities like Pods and Services. This provides not just security via isolation for Pods, interconnecting them based on the app-tier inter-relationships specified in the app deployment manifest, but also load balancing across the various Pods that implement a particular service behind that service’s “ClusterIP”.

OpenContrail also creates a Virtual Network for every collection of Pods, along with a CIDR block allocated to that Virtual Network. Then, as Pods are spawned, OpenContrail assigns each new Pod an IP address from that block.
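Once the cluster and the sample app from the walkthrough below are up, here is a minimal sketch for seeing both of these yourself (<pod-name> is a placeholder for any Pod name from kubectl get pods, and guestbook is the service deployed later in this post):

kubectl describe pod <pod-name>
kubectl describe service guestbook

The first command shows an “IP:” field with the address assigned from the virtual network’s CIDR; the second shows the service’s ClusterIP and the Pod endpoints that traffic is load balanced across.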

When entities like web servers need to be accessible from across the internet and need a public-facing IP address, OpenContrail also provides NAT in a fully distributed fashion.
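Once such a service is up (the guestbook service in the walkthrough below is one), a quick way to surface its public address is sketched here; the exact column name varies by kubectl version:

kubectl get services guestbook

The external IP listed there is the address OpenContrail NATs to the Pods’ private virtual-network addresses.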

In summary, OpenContrail provides all the following functionalities in a fully distributed fashion:

IPAM, DHCP, DNS, load balancing, NAT, and firewalling.

All of the above sounds pretty cool, and the next thing on anyone’s mind is, “Fine, how do I see these in action for myself?”

In order to reap all the above benefits from OpenContrail, we have submitted all the necessary OpenContrail code to Kubernetes mainline. Our pull request to merge these changes is open, and we anticipate it being approved within the next few weeks.

What that enables is that whenever anyone deploys Kubernetes:

1) On baremetal servers in an on-prem private cloud,
2) On top of OpenStack perhaps using Murano in an on-prem private cloud,
3) On a public cloud like GCE,
4) Or a public cloud like AWS,

All the OpenContrail goodness is right there along with Kubernetes. All that needs to be done to leverage it is to set the environment variable “NETWORK_PROVIDER” to “opencontrail” before Kubernetes is installed, as shown in Step 1 below.

So let’s go through the steps: first, deploy Kubernetes in a public cloud, say GCE, with OpenContrail included and enabled; then deploy a sample application and see what benefits OpenContrail brings along.

Step 1: Deploying Kubernetes in GCE along with OpenContrail.

In order to do this, we will build Kubernetes and deploy it:

a) git clone -b opencontrail-integration https://github.com/Juniper/kubernetes.git

b) cd kubernetes && ./build/release.sh

c) export NETWORK_PROVIDER=opencontrail

d) ./cluster/kube-up.sh

…Starting cluster using provider: gce

Kubernetes cluster is running.  The master is running at:

https://104.197.128.44

The user name and password to use is located in /Users/adaptari/.kube/config.


... calling validate-cluster
Found 3 node(s).
NAME                    LABELS                                         STATUS                     AGE
kube-oc-2-master        kubernetes.io/hostname=kube-oc-2-master        Ready,SchedulingDisabled   1m
kube-oc-2-minion-59ws   kubernetes.io/hostname=kube-oc-2-minion-59ws   Ready                      1m
kube-oc-2-minion-htl8   kubernetes.io/hostname=kube-oc-2-minion-htl8   Ready                      1m
Validate output:
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   nil
scheduler            Healthy   ok                   nil
etcd-0               Healthy   {"health": "true"}   nil
etcd-1               Healthy   {"health": "true"}   nil
Cluster validation succeeded
Done, listing cluster services:

Kubernetes master is running at https://104.197.128.44

GLBCDefaultBackend is running at https://104.197.128.44/api/v1/proxy/namespaces/kube-system/services/default-http-backend
Heapster is running at https://104.197.128.44/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://104.197.128.44/api/v1/proxy/namespaces/kube-system/services/kube-dns
KubeUI is running at https://104.197.128.44/api/v1/proxy/namespaces/kube-system/services/kube-ui
Grafana is running at https://104.197.128.44/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://104.197.128.44/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb

e) To view the Contrail components, you can issue:

docker ps | grep contrail | grep -v pause
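Note that this command runs on a cluster node rather than on your workstation. One way to reach a node from outside is sketched here, under the assumption that the cluster runs in zone us-central1-b (substitute your own node name and zone):

gcloud compute ssh kube-oc-2-minion-59ws --zone us-central1-b --command "docker ps | grep contrail | grep -v pause"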

Step 2: Now that we have Kubernetes running with OpenContrail, let’s find and prepare an app.

The main forte of OpenContrail lies in abstraction, which is necessary for the speed and agility developers care most about. Developers specify app-tier inter-relationships as annotations in their app deployment manifests. The OpenContrail controller then infers the policy requirements from those inter-relationships and programs the corresponding security policies into the vRouters for fully distributed enforcement.

Therefore, the manifests of existing applications need to be patched with these annotations.

So, let’s patch the existing guestbook-go example app:

https://github.com/Juniper/contrail-kubernetes/blob/vrouter-manifest/cluster/patch_guest_book
The patch above introduces the labels “name” and “uses”, which specify the app-tier inter-relationships.
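As an illustration, a patched Pod template might carry labels along these lines; this snippet is hypothetical rather than copied from the patch, so consult the patch file itself for the exact values:

"labels": {
    "name": "guestbook",
    "uses": "redis"
}

Here, a Pod labeled name: guestbook that declares uses: redis tells OpenContrail to permit traffic from the guestbook tier to the redis tier, and the controller programs the vRouters accordingly. To fetch the patch locally before applying it (the raw URL below is derived mechanically from the GitHub link above):

curl -o patch https://raw.githubusercontent.com/Juniper/contrail-kubernetes/vrouter-manifest/cluster/patch_guest_book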
To apply the patch:

git apply --stat patch
git apply --check patch
git apply patch

Step 3: Now that the app is ready, let’s go ahead and deploy it:

kubectl create -f guestbook-go/redis-master-controller.json
kubectl create -f guestbook-go/redis-master-service.json
kubectl create -f guestbook-go/redis-slave-controller.json
kubectl create -f guestbook-go/redis-slave-service.json
kubectl create -f guestbook-go/guestbook-controller.json
kubectl create -f guestbook-go/guestbook-service.json

Notice that neither the way Kubernetes was installed nor the way apps are deployed has changed one bit. The only changes are the introduction of an environment variable and of annotations in the form of the labels “name” and “uses”.

Finally, to view the replication controllers and the Pods created by the commands above, use:

kubectl get rc
kubectl get pods

Step 4: Establish an SSH tunnel from your localhost to the public IP allocated to the guestbook web server, then point a browser at http://localhost:3000 (or whichever local port you used for the port forwarding).
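A sketch of that tunnel using gcloud, assuming zone us-central1-b; substitute your own zone, a node name from the kube-up output above, and the guestbook public IP from kubectl get services guestbook:

gcloud compute ssh kube-oc-2-minion-59ws --zone us-central1-b -- -L 3000:<guestbook-public-ip>:3000

With the tunnel established, http://localhost:3000 in a local browser reaches the guestbook front end.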

This completes the hands-on exercise for OpenContrail with Kubernetes.

In the next part of this blog, we will continue the ride deeper into OpenContrail and look closely at what components OpenContrail has introduced and what benefits those components provide.