
Mixi’s Review of OpenContrail

December 17, 2013

Mixi is a Japanese SNS provider. Recently, Junpei Yoshino, an application operations engineer at Mixi, wrote a blog post about his thoughts on OpenContrail. The original post is in Japanese; we have translated it to English and posted it below with permission.

I have been evaluating virtualized networking in a service-provider environment, and in the course of that work I strongly felt I should study OpenContrail, so I would like to record my thought process here.  In three lines, I can describe OpenContrail as follows:

  • OpenContrail is a good alternative when you consider realistic gateway implementations as of today.
  • Open vSwitch emulates an L2 switch, while OpenContrail emulates a router.
  • Things Open vSwitch is not good at, such as hiding MAC addresses or MSS adjustment, are taken care of by OpenContrail.

I understand there will be some discussion of OpenContrail at OpenStack Days Tokyo tonight, so I decided to write this article to coincide with the event.


I face many challenges in the current Mixi network, but the two biggest are:

  • Generic concerns about operating a large L2 network
  • Since the network was designed assuming we operate only one service, we cannot do tenant partitioning
Issues with the existing implementation
  • If the number of our subsidiaries or services increases, it is not easy to provide additional services (I don’t want to increase the number of VLANs any more).
  • When the number of MAC or ARP table entries increases as a result of virtualization, our options for equipment become limited.

Resolution Ideas

I compared two edge-overlay methods this time.  I expect edge overlay to reduce the lead time to launch services and the cost of procuring equipment.  For the same reason, I am focusing on methods that do not rely on multicast.  Open vSwitch emulates an L2 switch and OpenContrail emulates a router.  Now, I would like to raise three points that I think are important for comparing the two methods, and then compare them in detail.

  • Implementation example using Open vSwitch
    • ARP packet processing: the controller looks up the address ARP wants to resolve, creates a flow, and forwards the packet
    • IP packet processing: the controller looks up the destination IP address, creates a flow, and forwards the packet
    • Tenant partitioning method: partitioning using the tunnel_id configured via OpenFlow, etc.
  • Implementation example using OpenContrail
    • ARP packet processing: the vRouter responds to ARP
    • IP packet processing: the vRouter looks up the VRF routing table and forwards the packet (/32 routes included)
    • Tenant partitioning method: partitioning of routing tables by MPLS
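As a rough illustration of the OpenContrail side of this comparison, here is a minimal Python sketch of per-tenant VRF tables with longest-prefix-match lookup, including the /32 host routes mentioned above. The class and names are hypothetical, not OpenContrail code:

```python
import ipaddress

class Vrf:
    """A toy per-tenant routing table with longest-prefix-match lookup."""

    def __init__(self):
        self.routes = {}  # ip_network -> next hop

    def add_route(self, prefix, nexthop):
        self.routes[ipaddress.ip_network(prefix)] = nexthop

    def lookup(self, dst):
        # longest-prefix match: the most specific matching route wins,
        # so a /32 host route beats the covering subnet route
        addr = ipaddress.ip_address(dst)
        best = None
        for prefix, nexthop in self.routes.items():
            if addr in prefix:
                if best is None or prefix.prefixlen > best[0].prefixlen:
                    best = (prefix, nexthop)
        return best[1] if best else None

# Tenant partitioning: each tenant gets its own isolated VRF table.
vrfs = {"tenant-a": Vrf(), "tenant-b": Vrf()}
vrfs["tenant-a"].add_route("10.0.0.0/24", "gw-1")
vrfs["tenant-a"].add_route("10.0.0.5/32", "vm-host-3")  # host route to one VM
print(vrfs["tenant-a"].lookup("10.0.0.5"))  # vm-host-3 (the /32 wins)
print(vrfs["tenant-a"].lookup("10.0.0.9"))  # gw-1
```

Because each tenant's lookups stay inside its own VRF, identical addresses in different tenants never collide.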
Point #1: Gateway implementation

I would like to think about how the virtualized network can talk to the legacy network.

Considering N+1 redundancy of gateway operation, and behavior and ease of maintenance in such an environment, I believe it is better for the gateway to be connected at L3.

I evaluated the implementations below, and my conclusion is that I like OpenContrail for the following reasons:

  • When I use an OpenFlow switch with L2 to connect to the legacy network, I need to worry about flooding.
  • I don’t know of any network equipment that has a sufficient flow-table size for use as an OpenFlow switch while also being able to speak L3 protocols without external aid.
  • OpenContrail uses a router as the gateway, so it is easy to use the traditional OSPF max-metric technique for traffic re-routing.
Sample implementation of an L2 gateway using Open vSwitch + tunnel
  • Connect an Open vSwitch port to the L3 switch
  • If unknown-unicast flooding occurs in a redundant configuration, multiple copies of a packet flow into the virtual network, and that needs to be taken into consideration.
    • E.g., the controller determines which Open vSwitch passes the flooded packets
    • E.g., configure the L3 switch to forward unknown-unicast flooding to a designated port only
  • Once the MAC table on the L3 switch is learned, packets are forwarded through the L2 gateway, so it may be easier to scale
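The first workaround above can be sketched in a few lines: the controller elects a single "designated" Open vSwitch per virtual network, and only that switch is allowed to pass flooded packets, so the virtual network never sees duplicate copies. This is a hypothetical illustration (the election rule here is an assumption), not an actual controller implementation:

```python
def elect_designated(gateways):
    # deterministic election among the redundant gateway switches;
    # lowest switch id wins (an arbitrary assumption for this sketch)
    return min(gateways)

def should_forward_flood(switch, gateways):
    # only the designated switch passes unknown-unicast floods
    return switch == elect_designated(gateways)

gateways = ["ovs-2", "ovs-1"]
print(should_forward_flood("ovs-1", gateways))  # True
print(should_forward_flood("ovs-2", gateways))  # False
```

As long as every switch applies the same election function, exactly one copy of each flooded packet enters the virtual network.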
Sample gateway implementation running an L3 protocol using Open vSwitch + tunnel
  • Take in traffic using a routing protocol
  • Forward it to Open vSwitch within the same host
  • Forward it using the flows created by the controller
  • I don’t know of appropriate equipment to do this
  • Assuming we do this with servers, I would be sensitive about the forwarding capacity
    • I don’t want to live a life of constantly watching the load and spreading it across more-specific routes
    • On the other hand, I also feel 10G NICs may be a workaround

Gateway implementation using OpenContrail
  • Use equipment that can do MPLS over GRE as the gateway router
    • Many to choose from: Juniper, Cisco, Alaxala, etc.
    • I expect major carriers have matured this to some extent
  • The controller acts as a route reflector
    • The gateway router and vRouters both act like PE routers
    • BGP is used to connect to the gateway router
    • Equivalent features are implemented over XMPP between vRouters
  • Features other than tunneling are implemented on the edge routers
    • No new features are added to the gateway routers, so it appears an equivalent implementation can be achieved with other manufacturers’ equipment
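The route-reflector arrangement above can be sketched as follows: every PE (the gateway router and each vRouter) peers only with the controller, and the controller re-advertises each route to all of its other clients. This is a toy model of the reflection behavior, not a BGP implementation; all the names are hypothetical:

```python
class RouteReflector:
    """Toy controller-as-route-reflector: clients peer only with it."""

    def __init__(self):
        self.clients = {}  # client name -> set of routes learned via the RR

    def register(self, name):
        self.clients[name] = set()

    def advertise(self, sender, route):
        # reflect the route to every client except the one that sent it
        for name, table in self.clients.items():
            if name != sender:
                table.add(route)

rr = RouteReflector()
for pe in ("gateway-router", "vrouter-1", "vrouter-2"):
    rr.register(pe)

# vrouter-1 advertises a /32 host route for one of its VMs
rr.advertise("vrouter-1", ("10.0.0.5/32", "vrouter-1"))
# the gateway router learns it without ever peering with vrouter-1
print(("10.0.0.5/32", "vrouter-1") in rr.clients["gateway-router"])  # True
```

The benefit mentioned in the bullets is visible here: the PEs need no full mesh of sessions, and the gateway router only needs standard BGP to the controller.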
Point #2: MSS adjustment

Since tunneling is used, I need to consider where to do MSS adjustment.  On this point as well, I believe OpenContrail has the advantage in an MTU-1500 environment.

  • OVS: you need to implement it somewhere yourself
  • OpenContrail: the vRouter adjusts the MSS based on the lower MTU
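To make the adjustment concrete, here is a minimal sketch of MSS clamping for an MPLS-over-GRE tunnel in an MTU-1500 environment. The function name and the overhead values are typical assumptions for this sketch, not taken from OpenContrail:

```python
IP_TCP_HEADERS = 40      # 20-byte IP header + 20-byte TCP header
MPLS_GRE_OVERHEAD = 28   # outer IP (20) + GRE (4) + one MPLS label (4)

def clamp_mss(advertised_mss, path_mtu, tunnel_overhead):
    # the largest TCP segment that fits through the tunnel without
    # fragmenting: subtract tunnel headers, then IP + TCP headers
    effective_mss = path_mtu - tunnel_overhead - IP_TCP_HEADERS
    return min(advertised_mss, effective_mss)

# an endpoint on a 1500-byte LAN advertises MSS 1460, but across the
# tunnel only 1432-byte segments fit, so the MSS option is rewritten
print(clamp_mss(1460, 1500, MPLS_GRE_OVERHEAD))  # 1432
```

With OVS you would have to place this rewrite somewhere yourself (on the hosts or at the gateway); the point above is that the vRouter does it for you at the tunnel edge.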
Point #3: Can we use it small (without big tasks)?

I think it is important that it can be used lean and small (without big tasks or investments).

Open vSwitch and the controller are separate products, and from a user’s point of view they are easy to use.

OpenContrail requires a Python built with UCS-2 in order to compile.  Virtualenv is needed to build it smoothly, but there is no documentation on that.  Another challenge is that it expects to be used through the OpenStack orchestrator or an equivalent, so it is very hard to find a way to drive it simply from the CLI.


I don’t yet know to what extent the OpenContrail implementation can be used in the real world, but I find the framework of VRF + proxy ARP very advanced.  At this time, using OpenContrail is challenging because of dependency resolution, so you cannot try it instantly; we can resolve this by contributing to OpenContrail.  MAC addresses are confined within the vRouter, so when you design a pseudo-L2 connection you don’t need to consider MAC duplication, which is a benefit.

When you use routes dynamically created by controllers on Open vSwitch and you encounter a problem, resolving it is very difficult.  I can envision painful operations, inevitably poring over dynamically created flows.  On the other hand, that approach has a much larger advantage in the ease of setting up an environment or a trial configuration.

I like OpenContrail better.  I can sense a large system full of dreams and hopes in OpenContrail, so I would be extremely happy if equivalent features could be realized in a smaller implementation.