How to install a 2600hz VoIP solution with working IPv6. — Kazoo in the cloud.

My intention is to build the system with proper IPv6 support, which involves the steps below.
If done correctly according to these instructions, everything should be free: you can set up your own AWS testing server, then use it to test your domain, your IPv6 connectivity, and the Kazoo API accounting and loading, all with IPv4/IPv6 dual-stack compatibility.

  > Installing a Kazoo cluster using chef solo in the cloud with IPv6.
        — Sign up with a domain name registrar and create a domain for your intended website.
        — Use a Hurricane Electric Tunnelbroker (or other tunnel broker) account to create IPv6 access for the nodes.
           – Create an account and set up a static tunnel within one of your nodes.
        — Use an IPv6-compatible DNS service for your NS records.
           – Add all domain name NS records pointing at ns3, ns4, ns5.
        — Create a Google Apps account, or use your choice of email server.
           – Sign up for your free-trial business-level Google Apps account.
           – Create your ten users within the Google Apps account(s), twenty total.
        — Use Amazon's AWS to provision the server nodes within our cluster.
           – With your ten free emails all pointing to your domain, begin signing up for AWS.

   > Configuring two clusters using distributed provisioning.
        — Set up the first cluster.
           – Configure each node type and create an AWS snapshot for replication.
           – Use seven or more nodes per cluster.
           – Use a single VPN VM switch to control all network traffic across IPv6.

Note: I am going to assume some things in this tutorial and explain them here.
When I talk about VMs or the API, I am referring to the Amazon AWS virtual machine hosting service and the 2600hz Kazoo API platform(s), respectively.
I assume that if you are using this tutorial, you know something about programming and especially Linux systems (the Bash shell).

Okay, let's get started.

First, go to Amazon's AWS signup page and go through the signup process; it is free for a year. A lot of the information on this is sketchy, but to truly get it free for a year you must sign up and enter credit card information. The Amazon web service is genuinely free of charge within the data and bandwidth limits. The limits I found are hearty enough that it remains free for a year if you stick to, say, the I/O load of what it takes to compile; that said, take care to watch the cost of your VM upkeep. AWS should not charge unless you are in production. I am sure that if there is an accounting error, AWS customer service will be able to reimburse you. Examples of accounting errors include attempting to delete an image or feature and having to wait too long; if you explain overcharging due to technical errors, things should be fine.

After signing up for AWS, launch a micro instance and install CentOS 6.3. I recommend using a particular configuration set:
AMI >> ami-07b73c6e
NAMED >> DynaCenter-CentOS-6.3-x64

When setting up the first AWS machine, name it chefsolotest1 (the switching server referred to throughout this tutorial).
When setting up an AWS machine, be sure to open ALL ICMP ports, SSH port 22, and the HTTP ports.

When the system first boots, read the instructions for the Kazoo install via the chef solo method.
note: don't run them just yet, just familiarize yourself.

You can find the chef solo Kazoo API instructions in the 2600hz documentation >>

>> In addition to the Kazoo API instructions listed above, you should use the linked instructions and the instructions for deploying IPv6 below.

Next, we work out how to connect these differing IPv6 pieces. After setting up your initial IPv6 account, go to the tunnel broker site and sign up for a free account. Also sign up for a free DNS service account. And since we are working with DNS, you need a domain name, so if you don't have a spare one, register one with a domain name registrar service.

      I like to keep the tunnel broker open in one tab and the DNS service in another.
      Set up your tunnel to point to the IP address of your AWS elastic IP.
      Note: make sure the elastic IP is ICMP pingable (check the security group).

>> In the DNS service, use the AWS elastic IP address listed in the EC2 instance panel.
      In the DNS service, point to that elastic IP found above in the AWS panel:
      create a new A record and point it to >> the IPv4 address (AWS elastic IP).

>> In the tunnel broker website:
      log in and create a new tunnel pointing to your IPv4 address (AWS elastic IP).

>> In CentOS 6, on the chefsolotest1 node, edit /etc/sysctl.conf by issuing this command at the bash prompt:
      # echo 'net.ipv6.conf.all.forwarding = 1' >> /etc/sysctl.conf

— Check that this indeed worked by using grep 'net.ipv6.conf.all.forwarding' /etc/sysctl.conf, then load the setting with sysctl -p.
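If you script your node setup, it is worth making this step idempotent so re-running it never appends a duplicate line. A small sketch (the function name is my own):

```shell
#!/bin/sh
# enable_v6_forwarding FILE: idempotently set IPv6 forwarding in a
# sysctl-style config file (FILE would normally be /etc/sysctl.conf).
enable_v6_forwarding() {
    conf="$1"
    key='net.ipv6.conf.all.forwarding'
    if grep -q "^${key}" "$conf" 2>/dev/null; then
        # Key already present: rewrite it in place rather than appending a duplicate.
        sed -i "s|^${key}.*|${key} = 1|" "$conf"
    else
        echo "${key} = 1" >> "$conf"
    fi
}
```

Run it as `enable_v6_forwarding /etc/sysctl.conf`, then `sysctl -p` to apply without a reboot.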

>> In CentOS 6 I recommend the following setup in /etc/sysconfig/network-scripts/ifcfg-sit1 (the addresses are placeholders; fill in your own tunnel details):

      DEVICE=sit1
      BOOTPROTO=none
      ONBOOT=yes
      NAME="Hurricane Electric SIT"
      # Detect this instance's local (private) IPv4 address
      IPV4B=$(ifconfig | awk -F':' '/inet addr/&&!/127.0.0.1/{split($2,_," ");print _[1]}')
      # SERVER SIDE IPV4 ADDR OF HENET (your tunnel server's endpoint)
      IPV6TUNNELIPV4=x.x.x.x
      # Enable IPV6; this will activate IPv6 on the interface
      IPV6INIT=yes
      # CLIENT SIDE IPV6 ADDR (replace xxxx:xxxx with your subnet)
      IPV6ADDR=2001:470:xxxx:xxxx::2/64
      # SERVER SIDE IPV6 ADDR OF HENET (replace xxxx:xxxx with your subnet)
      IPV6_DEFAULTGW=2001:470:xxxx:xxxx::1

NEXT, we need code for the routing table for our new IPv6 interface. Create the following file on the chef solo switching server AND on apps01 and all the other nodes: /etc/sysconfig/network-scripts/route6-sit1, with the code:
note: again replacing the xxxx:xxxx endings in this example with your IPv6 address.

      ::/0 via 2001:470:xxxx:xxxx::1 dev sit1

Note: you may have to restart networking (service network restart) after this is in place in order to use IPv6.

We will be using the code above several times over. For instance, on our chefsolotest1 host we will have one connection on the public side with a tunnel link to the public DNS servers. Furthermore, there will be an outgoing link to each of our nodes within the two clusters, all on the chefsolotest1 switching server. Also remember one interface on each node, like the sit1 interface.

Note: chefsolotest1 is not, strictly speaking, just a node; it is a switching server. I would like to make that distinction now for past and future references too^^. You see the difference in that it does routing to the typical client nodes, and that the other nodes are filtered and configured with chef solo itself.

The installation and configuration of these interfaces is non-trivial and you can follow the information found in this walkthrough.

 Please follow the walkthrough and the above code examples for setting up your 6in4 tunnel pointing to the public elastic interface of chefsolotest1. Configure one identical interface, like the "sit" mentioned above, for each and every node. Make sure to rename the file from ifcfg-sit1 to something like ifcfg-node1, ifcfg-node2, etc., down the line until there is an almost identical interface file for each node present in our cluster. Remember to find the interface name and rename it inside the renamed ifcfg file as well.
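The copy-and-rename step above is easy to get wrong by hand, so it can be scripted. A sketch with a made-up helper name, assuming the interface name "sit1" appears on the DEVICE= and NAME= lines of your template:

```shell
#!/bin/sh
# make_node_ifcfg TEMPLATE NODE OUTDIR
# Copies the sit1 template to OUTDIR/ifcfg-NODE and rewrites the
# interface name inside the file to match, as described above.
make_node_ifcfg() {
    template="$1" node="$2" outdir="$3"
    out="${outdir}/ifcfg-${node}"
    # sit1 -> node name, on the DEVICE= line and anywhere else it appears
    sed "s/sit1/${node}/g" "$template" > "$out"
    echo "$out"
}
```

Then something like `for n in node1 node2; do make_node_ifcfg /etc/sysconfig/network-scripts/ifcfg-sit1 "$n" /etc/sysconfig/network-scripts; done` stamps out one file per node. You still need to edit each node's addresses afterwards.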

Finally, inside each renamed ifcfg-node# file, enter your routed /48 IPv6 addresses. I numbered mine from the given /48 (2001:470:xxxx::/48) into /64 subnets like the following.

Note how I have a new /64 subnet for each node in the cluster:

on chefsolotest1, the tunnel to Hurricane Electric:
           >> sit1 = 2001:470:xxxx:1234::2, with a route to
                         > the Henet server at 2001:470:xxxx:1234::1

Note the pattern here of server ::1 and client ::2? So let's break this apart, also using this pattern to split our /48 into /64 subnets for our nodes.
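The ::1/::2 pattern and the repeated-digit subnet scheme used below (node 1 gets 1111, node 2 gets 2222, and so on) can be captured in a couple of tiny helpers. A sketch, with made-up function names, valid for node numbers 1 through 9:

```shell
#!/bin/sh
# node_subnet PREFIX N: the /64 subnet for node number N, following the
# repeated-digit scheme (node 1 -> PREFIX:1111::, node 2 -> PREFIX:2222::).
node_subnet() {
    printf '%s:%d%d%d%d::\n' "$1" "$2" "$2" "$2" "$2"
}
# The switching server end of each /64 gets ::1, the node end gets ::2.
server_addr() { printf '%s1\n' "$(node_subnet "$1" "$2")"; }
client_addr() { printf '%s2\n' "$(node_subnet "$1" "$2")"; }
```

For example, `server_addr 2001:470:xxxx 1` prints 2001:470:xxxx:1111::1, the address you put on the chefsolotest1 side, and `client_addr 2001:470:xxxx 1` prints the matching ::2 for the node.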

Switching Server
With our /48 broken into /64 subnets below.

          >> on chefsolotest1, the interface named node#1, or apps01 to keep our Kazoo theme here, would look like 2001:470:xxxx:1111::1.

Where the made-up subnet 2001:470:xxxx:1111:: is used for node#1, apps01.

However, we will change this down the line for apps02, fs01, fs02, db01, and db02 accordingly; for example, 2001:470:xxxx:2222::1 for the ifcfg-apps02 interface.

Remember, these ::1 addresses are found on the switching server only.

Note: just remember the difference between the switching server interfaces and the apps01, db01, and fs01 nodes.

It is simply that while the ifcfg-apps01 file on the chefsolotest1 switching server is always going to have the /64 subnet address ending in ::1,

on the client end of the apps01, etc. nodes, the sit1 interface connecting us to the IPv6 network is going to have a /64 address ending in ::2; this is always the case for the nodes.

If you need a walkthrough for this setup, please follow the general guidelines and emulate the walkthrough below.

I found this to be a good, decent walkthrough that very accurately follows what we are doing.

 >> Walkthrough for IPv6:

Note: you can add other nodes if you want; for example, I have 4 database nodes: DB01, DB02, DB03 and DB04.

You will also need to add DNS records for your cluster.

For Henet, please create a new domain at the registrar of your choice (GoDaddy), then point the records to the Henet NS servers accordingly.

Furthermore, in the actual DNS management panel you will need to create a domain, then add records to it, namely the SOA record and NS records that forward to the NS servers. Create NS records for your domain and point them to the dual-stack servers (note: one of the NS servers is IPv4 only; don't use it).

After the NS records you created show up in dig, you're ready to start adding the A records of your nodes (from the AWS elastic IPs) and the AAAA (IPv6) records from the "routed" /64s (don't add the tunnel /64 to chefsolotest1). Just manually enter all 9 of the split-up /64 subnet addresses, for example 2001:470:xxxx:1111::1, and another, 2001:470:xxxx:2222::1, for the next node.

As for the chefsolotest1 A and AAAA records, that's it; now manually add the records for the nodes. Example A record: your elastic AWS IP address, publicly facing (not the internal address seen by the command ifconfig). Example AAAA record: 2001:470:xxxx::2.
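Since the AAAA records all follow the repeated-digit /64 scheme, you can generate the record lines mechanically instead of typing nine addresses by hand. A sketch with a made-up helper name and example node names; the output is plain zone-file-style lines you can paste into your DNS panel:

```shell
#!/bin/sh
# print_node_aaaa PREFIX DOMAIN NODE...
# Emits one AAAA record line per node name, using the repeated-digit
# /64 scheme (first node -> PREFIX:1111::1, second -> PREFIX:2222::1, ...).
print_node_aaaa() {
    prefix="$1" domain="$2"; shift 2
    n=1
    for node in "$@"; do
        printf '%s.%s. IN AAAA %s:%d%d%d%d::1\n' "$node" "$domain" "$prefix" "$n" "$n" "$n" "$n"
        n=$((n + 1))
    done
}
```

For example, `print_node_aaaa 2001:470:xxxx yourdomain.com apps01 apps02 fs01` prints one AAAA line per node, in the same order you assigned the subnets.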

Now, just follow the same instructions until all your nodes are completely allocated with ifcfg-sit interfaces and the route6-sit1 routing table file, and, on the DNS website, the A and AAAA records pointing to the interfaces accordingly.

Furthermore, on the topic of using and configuring the newly public DNS entries and AWS: you can and should* create in AWS what is called a Load Balancer. I created one on only one node, and that should work fine for now. Then give that publicly facing load balancer a dualstack.chefsolotest1.your_amazon_domain_name with a friendly CNAME in service pointing to the allocated Amazon load-balanced dual stack. We do this so that all traffic goes smoothly, and a little more securely, through the chefsolotest1 switch.


With the IPv6 network connectivity in place, you *should be able to ping the internet from chefsolotest1; if not, go back, you missed something.

With the chefsolotest1 switching server up and running, we should* furthermore be able to ping that apps01 we have been talking about, not to mention everything else: db01, fs01.

However, you may have noticed that your nodes are not reachable except via direct ssh to the IPv4 elastic AWS IP address. That is because we have put in A and AAAA records for them, so ssh to apps01.yourdomain will do nothing until IPv6 times out and it falls back to IPv4. This cannot stand; we have an issue, right?

Well, if you figured that out, then you can figure out that access is limited to our public-facing IPv4. Let's get in via IPv4 >> now try to ping6 any_ipv6_address; nothing, right? We can't even ping our gateway at the tunnel server.

WHY? This happens because in AWS we are behind a NAT, and it is acting like a "stateful firewall". So the garbage-in garbage-out theory helps keep traffic to a minimum, right? Well, in theory, but not for our IPv6. SO, what do we do?

What to do: we can eliminate problems related to the interfaces, because if I ping a node* from* the chefsolotest1 switching server, everything is up. Well, that's good, so we just keep pinging, right? Nope: we use NTP time synchronization services to generate that regular traffic instead.

So edit /etc/ntp.conf on the switching server:


driftfile /var/lib/ntp/ntp.drift
statsdir /var/log/ntpstats/
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable
  # one entry per node: the client (::2) end of each /64
  server 2001:470:xxxx:1111::2
  server 2001:470:xxxx:2222::2
  server 2001:470:xxxx:3333::2
  server 2001:470:xxxx:4444::2
  server 2001:470:xxxx:5555::2
  server 2001:470:xxxx:6666::2
  server 2001:470:xxxx:7777::2
  server 2001:470:xxxx:8888::2
  server 2001:470:xxxx:9999::2
  # public IPv6 NTP servers
  server 2001:1291:2::b
  server 2001:470:0:50::2
  server 2001:648:2ffc:1106::2
restrict default kod notrap nomodify nopeer noquery
restrict nomodify
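The nine per-node server lines follow the same repeated-digit pattern as the subnets, so they can be generated rather than typed. A sketch, with a made-up helper name, valid for up to 9 nodes:

```shell
#!/bin/sh
# print_ntp_peers PREFIX COUNT
# Emits one "server" line per node, pointing at the client (::2) end of
# each /64 (node 1 -> PREFIX:1111::2, ...). COUNT must be between 1 and 9.
print_ntp_peers() {
    prefix="$1" count="$2"
    n=1
    while [ "$n" -le "$count" ]; do
        printf 'server %s:%d%d%d%d::2\n' "$prefix" "$n" "$n" "$n" "$n"
        n=$((n + 1))
    done
}
```

Something like `print_ntp_peers 2001:470:xxxx 9 >> /etc/ntp.conf` appends the whole block in one go; review the file before restarting ntpd.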

Once those instructions found on the 2600hz site are followed (making sure to use the cluster method), continue with the configuration below.

Note: in the below configuration, try adding just the repeated fully qualified domain name in the chef configuration. (this is untested)
You can find the chef solo Kazoo API instructions in the 2600hz documentation >>

Then, afterwards, there is the configuration of our database.
You can find the database (bigcouch) instructions in the 2600hz documentation >>

Note again: in the above configuration, try adding just the repeated fully qualified domain name in the chef configuration. (this is untested)

You should now have a fully configured and functionally working chef-solo-installed Kazoo/Whistle VoIP API by 2600hz running on your own cluster(s); for more than one cluster, just replicate the process.

I would recommend that chefsolotest1 be kept as the install base for all clusters, and also as first in line for switching IPv6 fail-over.

How to create a safe IPv6 fail-over method of recovery coming up soon….

Any Questions? feel free to contact me. 😉
Especially if its for free T-Shirts 😉