
Installing Kubernetes & Opencontrail

In this post we walk through the steps required to install a 2-node cluster running Kubernetes that uses OpenContrail as the network provider. In addition to the 2 compute nodes, we use a master and a gateway node. The master runs the kubernetes api server and scheduler as well as the opencontrail configuration management and control plane.

OpenContrail implements an overlay network using standards-based network protocols (BGP for signaling, with MPLS over GRE/UDP or VXLAN for encapsulation).

This means that, in production environments, it is possible to use existing network appliances from multiple vendors as the gateway between the un-encapsulated network (a.k.a. underlay) and the network overlay. However, for the purposes of a test cluster, we will use an extra node (the gateway) whose job is to provide access between the underlay and overlay networks.

For this exercise, I decided to use my MacBook Pro, which has 16GB of RAM. However, all the tools used are also supported on Linux; it should be relatively simple to reproduce the same steps on a Linux machine or on a cloud such as AWS or GCE.

The first step in the process is to obtain binaries for kubernetes release-1.1.1. I unpacked the tar file into ~/tmp and then extracted the linux binaries required to run the cluster using the command:

cd ~/tmp;tar zxvf kubernetes/server/kubernetes-server-linux-amd64.tar.gz
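
For reference, fetching the release tarball in the first place might look roughly like this (the exact asset URL is an assumption on my part; check the release-1.1.1 page on GitHub for the actual download location):

curl -L -o ~/tmp/kubernetes.tar.gz \
    https://github.com/kubernetes/kubernetes/releases/download/v1.1.1/kubernetes.tar.gz
cd ~/tmp && tar zxvf kubernetes.tar.gz   # unpacks into ~/tmp/kubernetes/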

In order to create the 4 virtual machines required for this scenario I used VirtualBox and Vagrant. Both are trivial to install on OS X.
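
For reference, with Homebrew both can be installed roughly as follows (this assumes the brew cask extension, which was the packaging mechanism at the time; newer Homebrew versions use "brew install --cask" instead):

brew cask install virtualbox
brew cask install vagrant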

In order to provision the virtual machines we use ansible, which can be installed via “pip install ansible”. I then created a default ansible.cfg that enables the pipelining option and disables ssh connection sharing. The latter was required to work around failures on tasks that use “delegate_to” and run concurrently (i.e. when run_once is false). From a cursory internet search, it appears that the openssh server that ships with ubuntu 14.04 has a concurrency issue when handling multiple sessions over a shared connection.


~/.ansible.cfg
[defaults]
pipelining=True
 
[ssh_connection]
ssh_args = -o ControlMaster=no -o ControlPersist=60s

With ansible and vagrant installed, we can proceed to create the VMs used by this testbed. The vagrant configuration for this example is available on GitHub. The servers.yaml file lists the names and resource requirements for the 4 VMs. Please note that if you are adjusting this example to run with a different vagrant provider, the Vagrantfile needs to be edited to specify the resource requirements for that provider.
After checking out this directory (or copying over the files), the VMs can be created by executing the command:

vagrant up
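
Once vagrant up completes, a quick check that all four machines came up:

vagrant status   # k8s-master, k8s-gateway, k8s-node-01 and k8s-node-02 should all be in the "running" state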

Vagrant will automatically execute config.yaml, which will configure the hostname on the VMs.

The Vagrantfile used in this example will cause vagrant to create VMs with 2 interfaces: a NAT interface (eth0) used for the ssh management sessions and external access, and a private network interface (eth1) providing a private network between the host and the VMs. OpenContrail will use the private network interface; the management interface is optional and may not exist in other configurations (e.g. AWS, GCE).

After vagrant up completes, it is useful to add entries to /etc/hosts on all the VMs so that names can be resolved. For this purpose I used another ansible playbook, invoked as:

ansible-playbook -u vagrant -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory resolution.yaml

This step must be executed independently of the ansible configuration performed by vagrant, since vagrant invokes ansible for one VM at a time, while this playbook expects to be invoked for all hosts at once.
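
A quick sanity check that name resolution now works inside the VMs:

vagrant ssh k8s-master -c 'getent hosts k8s-gateway k8s-node-01 k8s-node-02'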

The ansible-playbook invocation above depends on the inventory file that vagrant creates automatically when configuring the VMs. We will also use the contents of this inventory file to provision kubernetes and OpenContrail.

With the VMs running, we need to check out the ansible playbooks that configure kubernetes + opencontrail. While an earlier version of the playbook is available upstream in the kubernetes contrib repository, the most recent version is in a development branch on a fork of that repository. Check out the repository via:

git clone -b opencontrail https://github.com/pedro-r-marques/contrib.git

The branch HEAD commit id, at the time of this post, is 15ddfd5.
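
To confirm that the working copy matches the state described in this post:

cd contrib
git log --oneline -1   # HEAD should report commit 15ddfd5 on the opencontrail branch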

I will work to upstream the updated opencontrail playbook to both the kubernetes and openshift provisioning repositories as soon as possible.

With the ansible playbook available in the contrib/ansible directory, it is necessary to edit the file ansible/group_vars/all.yml and replace the network provider:

# Network implementation (flannel|opencontrail)
networking: opencontrail
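
If you prefer to make this change from the command line, a one-liner from the contrib directory could look like this (BSD sed on OS X requires the empty '' argument to -i; drop it on Linux):

sed -i '' 's/^networking:.*/networking: opencontrail/' ansible/group_vars/all.yml
grep '^networking' ansible/group_vars/all.yml   # should now print: networking: opencontrail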

We then need to create an inventory file:

[opencontrail:children]
masters
nodes
gateways
 
[opencontrail:vars]
localBuildOutput=/Users/roque/src/golang/src/k8s.io/kubernetes/_output/dockerized
opencontrail_public_subnet=100.64.0.0/16
opencontrail_interface=eth1
 
[masters]
k8s-master ansible_ssh_user=vagrant ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_private_key_file=/Users/roque/k8s-provision/.vagrant/machines/k8s-master/virtualbox/private_key
 
[etcd]
k8s-master ansible_ssh_user=vagrant ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_private_key_file=/Users/roque/k8s-provision/.vagrant/machines/k8s-master/virtualbox/private_key
 
[gateways]
k8s-gateway ansible_ssh_user=vagrant ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_private_key_file=/Users/roque/k8s-provision/.vagrant/machines/k8s-gateway/virtualbox/private_key
 
[nodes]
k8s-node-01 ansible_ssh_user=vagrant ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201 ansible_ssh_private_key_file=/Users/roque/k8s-provision/.vagrant/machines/k8s-node-01/virtualbox/private_key
k8s-node-02 ansible_ssh_user=vagrant ansible_ssh_host=127.0.0.1 ansible_ssh_port=

This inventory file does the following:

  • Declares the hosts for the roles masters, gateways, etcd and nodes; the ssh information is derived from the inventory created by vagrant.
  • Declares the location of the kubernetes binaries downloaded from the github release;
  • Defines the IP address prefix used for ‘External IPs’ by kubernetes services that require external access;
  • Instructs opencontrail to use the private network interface (eth1); without this setting the opencontrail playbook defaults to eth0.
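
Before running the provisioning playbook it is worth verifying ssh connectivity using this inventory (here I assume it was saved as inventory.cfg; the file name is arbitrary):

ansible -i inventory.cfg -u vagrant -m ping all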

Once this file is created, we can execute the ansible playbook by running the script "setup.sh" in the contrib/ansible directory.

This script will run through all the steps required to provision kubernetes and opencontrail. It is not unusual for the script to fail on some of the network-based operations (for instance, downloading the repository keys for docker or downloading a file from github); the ansible playbook is meant to be declarative (i.e. it defines the end state of the system) and should simply be re-run if a network-based failure is encountered.
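
For reference, a re-run after a transient failure is simply the same invocation again; how setup.sh locates the inventory depends on the version of the script, so check it if the inventory created above is not being picked up:

cd contrib/ansible
./setup.sh   # safe to re-run; tasks that already completed converge to the same end state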

At the end of the script we should be able to log in to the master via the command “vagrant ssh k8s-master” and observe the following:

  • kubectl get nodes
    This should show two nodes: k8s-node-01 and k8s-node-02.
  • kubectl --namespace=kube-system get pods
    This command should show that the kube-dns pod is running; if this pod is in a restart loop, that usually means that the kube2sky container is not able to reach the kube-apiserver.
  • curl http://localhost:8082/virtual-networks | python -m json.tool
    This should display the list of virtual-networks created in the opencontrail api.
  • netstat -nt | grep 5269
    We expect 3 established TCP sessions for the control channel (xmpp) between the master and the nodes/gateway.

On the host (OS X) one should be able to access the diagnostic web interface of the vrouter agent running on the compute nodes; these pages display the information regarding the interfaces attached to each pod.
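
Assuming 8085 is the introspect port of the contrail-vrouter-agent (the OpenContrail default) and using each compute node's private network (eth1) address, the pages can be opened from the host with something like:

# substitute the eth1 address of each compute node
open http://<k8s-node-01-eth1-address>:8085/
open http://<k8s-node-02-eth1-address>:8085/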

Once the cluster is operational, one can start an example application such as “guestbook-go”. This example can be found in the kubernetes examples directory. In order for it to run successfully the following modifications are necessary:

    • Edit guestbook-controller.json, in order to add the labels “name” and “uses” as in:

"spec":{
  [...]
  "template":{
    "metadata":{
      "labels":{
        "app":"guestbook",
        "name":"guestbook",
        "uses":"redis"
      }
    },
  [...]
}
    • Edit redis-master-service.json and redis-slave-service.json in order to add a service name. The following is the configuration for the master:
"metadata": {
  [...]
  "labels" {
         "app":"redis",
         "role": "master",
         "name":"redis"
  }
}
  • Edit redis-master-controller.json and redis-slave-controller.json in order to add the “name” label to the pods, as in:
    "spec":{
       [...]
       "template":{
          "metadata":{
             "labels":{
                "app":"redis",
                "role":"master",
                "name":"redis"
             }
          },
       [...]
     }
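
With these edits in place, the example can be started from the master; this assumes the edited files are available there under examples/guestbook-go and that the service keeps the stock name "guestbook":

kubectl create -f examples/guestbook-go/
kubectl get services guestbook   # the EXTERNAL_IP column should show an address from 100.64.0.0/16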

After the example is started the guestbook service will be allocated an ExternalIP on the external subnet (e.g. 100.64.255.252).

In order to access the external IP network from the host, one needs to add a route for that subnet via 192.168.1.254 (the address of the gateway node on the private network). Once that is done, you should be able to access the application via a web browser at http://100.64.255.252:3000.
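
On OS X this can be done with the route command; the example below assumes the external subnet configured earlier (100.64.0.0/16) and the gateway address mentioned above:

sudo route -n add -net 100.64.0.0/16 192.168.1.254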