
OpenStack Neutron IPv6 support in OpenContrail SDN

This is a guest blog from tcpCloud, authored by Marek Celoud & Jakub Pavlik (tcpCloud engineers). To see the original post, click here.

As private cloud deployers and integrators (primarily based on OpenStack), we get many questions from customers about IPv6 support. Most of our deployments run on OpenContrail SDN & NFV; the reasons are described in our previous blog (http://www.tcpcloud.eu/en/blog/2015/07/13/opencontrail-sdn-lab-testing-1-tor-switches-ovsdb/). OpenContrail has supported IPv6 for quite a long time, but there are not many real-world tests. Therefore we decided to share the procedure we used to configure and use IPv6 in OpenStack.

This short blog describes IPv6 support in OpenStack using the Neutron plugin for the SDN/NFV solution OpenContrail.

Cloud deployments drive a significant growth in the need for public IP addresses, and these deployments are facing problems due to the lack of IPv4 addresses. One of the solutions is to migrate to public IPv6.

We start with IPv6 communication between virtual machines, both within the same virtual network and across different virtual networks. Then we show how to advertise public IPv6 addresses to the external world. In our case we use Juniper MX routers as the cloud gateway.

Creating an IPv6 network

We need to consider a few things when creating an IPv6 virtual network. The first is to also add an IPv4 subnet, because without an IPv4 address the instance cannot connect to the nova metadata API. Cloud images are built to use cloud-init to connect to the API at 169.254.169.254:80, so if you create a network without an IPv4 subnet, your instance will not receive metadata. The second consideration is whether you want your IPv6-capable instances to reach the internet. There is currently a problem with IPv6 floating IP pools, so if you want to reach the external world, you need to boot into a network with an associated route target.

We first create a private IPv6 network for demonstration.

[Image: ipv61, creating the IPv6 virtual network]
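The same network can also be created from the neutron CLI; a minimal sketch (network and subnet names and the IPv4 range are illustrative, with the IPv4 subnet included for metadata access as discussed above):

neutron net-create ipv6-demo
neutron subnet-create --name ipv6-demo-v4 ipv6-demo 192.168.10.0/24
neutron subnet-create --name ipv6-demo-v6 --ip-version 6 ipv6-demo fd00::/64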

Once the network is created, we can boot instances. We will boot two of them to demonstrate working communication. You will probably need to modify the network interface configuration, because DHCP is not enabled for IPv6 in the guest by default. To request an IPv6 address manually, you can use:

# dhclient -6

[Image: ipv63, requesting an IPv6 address with dhclient]
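To make the IPv6 address persistent across reboots on an Ubuntu guest, one option is a DHCPv6 stanza in /etc/network/interfaces (assuming the instance's interface is eth0):

# appended to /etc/network/interfaces, below the existing inet stanza
iface eth0 inet6 dhcp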

As you can see, both an IPv4 and an IPv6 address are associated with the instance's interface.

[Image: ipv64, instance interface with IPv4 and IPv6 addresses]

Before testing communication, we need to modify security groups to allow the traffic. For testing purposes we will allow everything.

[Image: ipv62, security group rules allowing all traffic]
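The equivalent rules can be added from the CLI; a sketch assuming the tenant's default security group:

neutron security-group-rule-create --direction ingress --ethertype IPv4 default
neutron security-group-rule-create --direction ingress --ethertype IPv6 default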

We choose ubuntu-ipv6-1 from the instance list and try to ping the instance ubuntu-ipv6-2 at its IPv6 address fd00::3.
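From the first instance, the test itself is a single command (assuming the guest image ships ping6):

$ ping6 -c 3 fd00::3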

[Image: ipv65]

As you can see, we are now able to ping the other instance.

[Image: ipv66, successful ping between the instances]

This capability is nice, but not very useful without connectivity to the external world. We will create a network with an associated route target to export its routes to the Juniper MX routers via BGP. The picture below shows a sample architecture. A VRF named CLOUD-INET is created on each MX router, and the route target associated with this VRF matches the route target added to the virtual network in Contrail. The picture shows both IPv4 and IPv6 addresses propagated into the same VRF. There is also an INET virtual router connected to the VRF via lt tunnel interfaces running OSPF and OSPFv3. From this virtual router, a default route ::/0 is aggregated from all the internet routes learned from upstream EBGP.

[Image: sample architecture with the CLOUD-INET VRF on the MX routers]

There are a few things to configure on the MX routers to enable IPv6 traffic from the cloud. The first is enabling IPv6 tunneling through the MPLS tunnels.

protocols {
    mpls {
        ipv6-tunneling;
        interface all;
    }
}

It is also good practice to filter which routes you export to and import from the cloud. We only need the default route present in the cloud, and we want to import only IPv6 prefixes from Contrail, because of the IPv4 pool created alongside the IPv6 virtual network.


policy-statement CLOUD-INET-EXPORT {
    term FROM-MX-IPV6 {
        from {
            protocol ospf3;
            route-filter ::/0 exact;
        }
        then {
            community add CLOUD-INET-EXPORT-COMMUNITY;
            accept;
        }
    }
    term LAST {
        then reject;
    }
}
policy-statement CLOUD-INET-IMPORT {
    term FROM-CONTRAIL-IPV6 {
        from {
            family inet6;
            community CLOUD-INET-IMPORT-COMMUNITY;
            route-filter 2a06:f6c0::/64 orlonger;
        }
        then accept;
    }
    term LAST {
        then reject;
    }
}
community CLOUD-INET-EXPORT-COMMUNITY members target:64513:10;
community CLOUD-INET-IMPORT-COMMUNITY members target:64513:10;
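
These policies still have to be attached to the VRF. A minimal sketch of the CLOUD-INET routing instance (the route distinguisher is an illustrative value; the lt interface is the one visible in the route output below):

routing-instances {
    CLOUD-INET {
        instance-type vrf;
        interface lt-0/0/0.3;
        route-distinguisher 64513:10;
        vrf-import CLOUD-INET-IMPORT;
        vrf-export CLOUD-INET-EXPORT;
    }
}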

Now we create the network 2a06:f6c0::/64 and associate route target 64513:10 with it. We can also make it shared so that all tenants can boot instances in this network.
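A sketch of this step from the neutron CLI (the route target itself is attached to the virtual network in the Contrail web UI or API; the network name and IPv4 range are illustrative, with the IPv4 subnet again included for metadata):

neutron net-create --shared inet-ipv6
neutron subnet-create --name inet-v4 inet-ipv6 192.168.100.0/24
neutron subnet-create --name inet-v6 --ip-version 6 inet-ipv6 2a06:f6c0::/64

Once we boot an instance into this network, its routing information immediately appears in the MX routing table.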


# run show route table CLOUD-INET.inet6.0

CLOUD-INET.inet6.0: 8 destinations, 9 routes (8 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

::/0               *[OSPF3/150] 20:37:13, metric 0, tag 0
                    > to fe80::6687:8800:0:2f7 via lt-0/0/0.3
2a06:f6c0::3/128   *[BGP/170] 00:00:15, localpref 100, from 10.0.106.84
                      AS path: ?, validation-state: unverified
                    > via gr-0/0/0.32789, Push 1046
                    [BGP/170] 00:00:15, localpref 100, from 10.0.106.85
                      AS path: ?, validation-state: unverified
                    > via gr-0/0/0.32789, Push 1046

We can also verify that the default route is propagated by inspecting the routing tables in Contrail.

[Image: ipv610, default route visible in the Contrail routing table]

Once we verify that the instance has a public IPv6 address, we can try to access the internet.
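For example, from the instance (assuming working DNS in the guest):

$ ping6 -c 3 google.com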

[Image: ipv67, instance with a public IPv6 address]

[Image: pinging google.com over IPv6]

Conclusion

We have shown that the OpenContrail SDN solution, together with the OpenStack cloud platform, is fully IPv6 capable for both private and public communication, and that it can talk directly to edge routers such as Juniper MX, Cisco ASR, etc.