I really had no idea what I was getting into when I decided to build a Kubernetes Pi cluster, or if it would even work. The response on Twitter was absolutely overwhelming and flattering too! As promised, I've decided to document my findings so that anyone/everyone can enjoy the same setup.
History
Disclaimer: My thoughts are completely my own, and in no way reflect the feelings of my friends, co-workers, management, or employer.
I've run a lot of container clustering environments, built on top of various operating systems and within a number of cloud computing environments. Why? The answer is two-fold.
The first reason is really simple and it's the reason for my disclaimer. I work as a Principal Security Technologist for a group called ASTRA within AT&T. It's my job to think about the security of the new "orchestration stack." I follow a lot of issues and merges related to security on GitHub. I'm the guy who quietly reads tweets and GitHub issues/merges with my morning coffee. Trying to keep up with "security" merges and issues across the larger community of disjointed and fragmented projects isn't an easy task. The more I see and build, the better I can understand it.
The second reason is that I'm naturally a curious person. I built my first container orchestration stack using Docker Swarm. It was very simple, but it didn't feel complete (like it does today). I looked at Mesos. Mesos was very promising, but to me it seemed very complex for a container orchestration environment. Then, I tried Kubernetes. Kubernetes seemed to really hit a sweet spot. That's not to say that one platform is "better" than another at all; each orchestration tool has really positive features, but Kubernetes seemed to be the Goldilocks fit out of the three.
My first attempt at Kubernetes was building a CoreOS cluster within an Openstack IaaS, and then later on Amazon AWS and Azure. Then I tried Atomic/Fedora within Openstack, which is what I used for writing my Terraform deployment on GitHub. Later, I moved on to the Google Compute Engine demo using Vagrant. I've even run the local Vagrant demo. Most people seem to start with the Vagrant demos because of their ease of use, but I've never been known for taking the easy route. I've even dug deep into Magnum on Openstack. But it wasn't until I ran the Vagrant demos, and after some discussion on the Kubernetes Slack group, that it really hit me: the Hyperkube. So Kubernetes can run on top of Docker? Ok, it's time to have some fun with this.
And that's when I wanted to run and document a Kubernetes Pi Cluster.
The Hardware
Your budget really determines how many cluster nodes you can (or want to) run. I suggest having at least three Pis in your setup. Here's the breakdown:
- Development Host (1)
- Kubernetes Master (1)
- Kubernetes Nodes (1 or more, however many you want)
Required Materials List:
(I'm going to link to Amazon to make it easier for you)
- Raspberry Pi 2 (3-6 of these)
- GeauxRobot Dog Bone Stack (1 to 2 of these)
- Netgear GS108E 8-Port Switch (1)
- Anker PowerPort 10 USB PowerHub (1)
- Micro-SD Cards 32GB (3-6, one for each Pi in your cluster)
Optional Materials List:
- Addicore Raspberry Pi Heatsink (3-6 if you want them for overclocking)
- Anker Micro-USB Cables (1 Pack of 6 x 1')
- Cable Matters CAT6 Ethernet (2 Packs of 5 x 1')
- USB Multi-Card Reader (for MicroSD) (1 Needed for OS X Laptops)
To make things really nice, I screwed my case directly to the Netgear switch housing. I removed the two screws from the back of the switch, and used the bottom of the Dog Bone case as a template for my drill holes.
Once I had the switchboard removed from the housing and my drill holes marked, I used a hole-punch tool to score my drill marks. I pre-drilled with a 1/16" drill bit, and then used a 1/8" drill bit for my final hole. I didn't want to mess up my housing, so I took this really slow! OCD I know, but the end result was well worth it.
Lastly, I screwed legs onto the switch-case housing and placed each RPi layer on one-by-one. To screw on the legs/rails use a 5mm socket as shown.
Once your rig is all put together, move on to the software installation.
The Software
Oh Hyperkube! The best part about Hyperkube is that you can run it on any host that runs a recent version of Docker. This opens up possibilities to run on ArchLinuxARM, Raspbian, and a few other environments. But then I found this little gem HERE. I thought that this was going to be a lot more difficult, but this solved everything for me! And I was going to use ArchLinuxARM anyway, because it appears to have some of the more recent versions of Docker. This was awesome! But I'm security minded, so I really wanted to find out what's under the hood.
There's also another project called Hypriot which I'll get back to later (since I'm using that too). Let's go over the installation for ArchLinuxARM, Docker, and Kubernetes. I'll cover some of the things I've found throughout the process.
Installation
If you're using a MacBook Pro like I am (or Windows on a laptop) then you'll want to follow this entire article from the very beginning. If you're using Linux, you're in luck and can skip this first section entirely.
One of the first things I do when I get a new MacBook (or Windows laptop from work) is install Virtualbox. I use Virtualbox for a lot of Vagrant micro-environments, but I also use it to provide some much needed tools that I just can't get from OS X/Windows, like ext4 filesystem tools. This comes in handy when building a Raspberry Pi image. If you're using NOOBS or Raspbian, then you can use an .img to flash your MicroSD card via OS X. ArchLinuxARM assumes you're a more technical user than the average NOOBS user, and therefore you're going to need access to an ext4 filesystem. We can get around this by using a Virtualbox instance.
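If you want to sanity-check that your Linux VM has the ext4 userspace tools before you begin, a quick check like this will do (shown on Fedora; e2fsprogs is the package that ships the ext4 tools on most distributions):
[user@jinkitfed2301 ~]$ sudo dnf install -y e2fsprogs
[user@jinkitfed2301 ~]$ mkfs.ext4 -V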
Preparation for OSX and Windows Users (via Virtualbox)
Install Virtualbox and the Virtualbox Extension Pack. You can obtain both (for your system) at the Virtualbox Download page.
Once you have both of these installed, you will need to install a Linux distribution. I really don't care which one you choose. Ubuntu, Fedora...does it really matter? I'd rather stay away from the debate. I picked Fedora 23 because I talk with the Red Hat guys all the time over IRC; and they've been super friendly/helpful. If you're following this article precisely, then you can download Fedora 23 Cloud.
During the installation, I always add a couple of steps. If I'm installing Fedora, then I make the user an administrator (with sudo access) and create a separate group for the user. I go back and change the group name later to nixadmins. You'll see how I use this for the Kubernetes cluster in the walk-through as well.
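Just as a sketch, renaming the installer-created group to nixadmins looks something like this (user01 is a placeholder for whatever username you picked during the install):
[root@jinkitfedlt01 ~]# groupmod -n nixadmins user01 # << RENAME THE USER'S DEFAULT GROUP
[root@jinkitfedlt01 ~]# id user01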
Using your USB MicroSD Card Reader
On newer MacBook Pros and PCs, the built-in SD card reader is not a device recognized by Virtualbox or VMWare Fusion. For OSX (and in some cases Windows) you will need to plug in your USB MicroSD card reader and allow Virtualbox to control it. Do this before continuing on to the next section.
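If you'd rather do this from the command line than the Virtualbox GUI, a sketch along these lines should work (the VM name "Fedora 23" is a placeholder, and the real vendor/product IDs come from the list command):
[user@macbook ~]$ VBoxManage list usbhost
[user@macbook ~]$ VBoxManage usbfilter add 0 --target "Fedora 23" --name "SD Reader" --vendorid XXXX --productid XXXX # << IDs FROM THE LIST ABOVE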
Installation for ArchLinuxARM on RPi
For Linux users, this is your starting point. For OSX (and some Windows) users, welcome! Let's get started with imaging our RPi SD card.
Make sure you have git installed first:
[root@jinkitfedlt01 ~]# dnf install -y git
Download the repository used for building the Kubernetes Master/Node cluster members:
[root@jinkitfedlt01 ~]# git clone https://github.com/luxas/kubernetes-on-arm.git
[root@jinkitfedlt01 ~]# cd kubernetes-on-arm
Verify which device your MicroSD card shows up as:
[root@jinkitfedlt01 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 40G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 39.5G 0 part
├─fedora-root 253:0 0 35.5G 0 lvm /
└─fedora-swap 253:1 0 4G 0 lvm [SWAP]
sdb 8:16 1 59.5G 0 disk
├─sdb1 8:17 1 64M 0 part
└─sdb2 8:18 1 1.1G 0 part
sr0 11:0 1 1024M 0 rom
sr1 11:1 1 1024M 0 rom
[root@jinkitfedlt01 ~]#
NOTE: You can also use sudo fdisk -l, as Lucas recommends.
Run the following command to install the OS and helpful Kubernetes post-installation scripts:
[root@jinkitfedlt01 kubernetes-on-arm]# ./sdcard/write.sh /dev/sdb rpi-2 archlinux kube-archlinux
And that's it! I removed the card, inserted a new one and reran the command. That simple.
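If you're imaging several cards back-to-back, a tiny loop saves some typing. This sketch assumes each card shows up as /dev/sdb when inserted (verify with lsblk every time, since writing to the wrong device is destructive):
[root@jinkitfedlt01 kubernetes-on-arm]# for i in 1 2 3; do
> read -p "Insert card $i and press Enter " && ./sdcard/write.sh /dev/sdb rpi-2 archlinux kube-archlinux
> done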
Post-Installation Tasks
So now that the OS is installed and your Raspberry Pis are running, I really like finding them on the network via nmap, as recommended by the folks over at Hypriot. So that's what we're going to do to find our new cluster members, and then we're going to run through some tasks to set up the cluster and lock it down.
Still using your Virtualbox image (in my case, the Fedora Workstation host), install nmap. For Fedora 23, type the following:
[user@jinkitfed2301 ~]$ sudo dnf install -y nmap
After that is installed, run the following to find your new cluster hosts.
[user@jinkitfed2301 ~]$ sudo nmap -sP 192.168.1.0/24 | grep alarmpi # << YOUR LOCAL SUBNET
Starting Nmap 6.47 ( http://nmap.org ) at 2015-11-21 16:41 EST
Nmap scan report for alarmpi (192.168.1.30)
Nmap scan report for alarmpi (192.168.1.37)
Nmap scan report for alarmpi (192.168.1.40)
Nmap scan report for alarmpi (192.168.1.41)
Nmap scan report for alarmpi (192.168.1.42)
Nmap scan report for alarmpi (192.168.1.43)
That's how you find the hosts on your network, without connecting a monitor to each one of your RPis. Pretty slick, right?
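If you just want the bare IP addresses (to feed into an SSH loop, for example), you can trim the output down with awk; a quick sketch:
[user@jinkitfed2301 ~]$ sudo nmap -sP 192.168.1.0/24 | awk '/alarmpi/ {gsub(/[()]/,""); print $NF}'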
Now we need to lock down our hosts a little. The GitHub project provided by Lucas Käldström installs a fairly stock base image of ArchLinuxARM, so the steps for limiting access are the same as for any ArchLinuxARM install.
Create Users/Groups
[root@alarmpi ~]# groupadd -g 1001 nixadmins
[root@alarmpi ~]# useradd -m -g nixadmins -G users,docker -s /bin/bash user01 # << YOUR DESIRED USERNAME
[root@alarmpi ~]# passwd user01
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Disable SSH Root Login
Edit /etc/ssh/sshd_config and change this line:
#PermitRootLogin yes
...to this...
PermitRootLogin no
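The change won't take effect until sshd is restarted, so before you close your root session (and after confirming your new user can actually log in, so you don't lock yourself out):
[root@alarmpi ~]# systemctl restart sshd.service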
Set Hostname
[root@alarmpi ~]# hostnamectl set-hostname kubemaster # << YOUR DESIRED HOSTNAME
Install sudo
[root@kubemaster ~]# pacman -S sudo --noconfirm
Edit sudoers
[root@alarmpi ~]# vi /etc/sudoers
Add the following line to the /etc/sudoers file:
%nixadmins ALL=(ALL) ALL
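A typo in /etc/sudoers can lock you out of sudo entirely, so it's worth a syntax check after editing (or use visudo to edit, which checks for you):
[root@alarmpi ~]# visudo -c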
On the Kubernetes Master Pi
You want to add kubectl to /usr/bin/, so do the following on your Kubernetes Cluster Master:
[bjozsa@kubemaster ~]$ sudo find / -name kubectl
/etc/kubernetes/source/images/kubernetesonarm/_bin/141015_1152/kubectl
[bjozsa@kubemaster ~]$ sudo cp /etc/kubernetes/source/images/kubernetesonarm/_bin/141015_1152/kubectl /usr/bin/
Reboot
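After the reboot, a quick check confirms kubectl landed where you expect:
[bjozsa@kubemaster ~]$ which kubectl
/usr/bin/kubectl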
Starting Your Cluster
Now that you've limited access and prepared your cluster members, you can start bringing up your Kubernetes cluster. This is super easy!
Get yourself familiar with the kube-config command, because that's what's going to be used on your cluster master and workers:
[root@kubemaster ~]# kube-config
Welcome to kube-config!
With this utility, you can setup Kubernetes on ARM!
Usage:
kube-config install - Installs docker and makes your board ready for kubernetes
kube-config upgrade - Upgrade current Operating System. Example for Arch Linux, update the packages to latest version.
kube-config build-images - Build the Kubernetes images locally
kube-config build-addons - Build the Kubernetes addon images locally
kube-config build [image] - Build an image, which is included in the kubernetes-on-arm repository
- Options with the luxas prefix:
- luxas/alpine: My alpine base image
- luxas/bench: Benchmark your ARM board compared to a Raspberry Pi 1. Based on Roy Longbottoms Benchmarks
- luxas/go: My Golang image
- luxas/nginx: Simple nginx image based on alpine. Used mostly for testing.
- luxas/nodejs: node.js image based on alpine.
- luxas/raspbian: A Raspbian base image. Based on resin/rpi-raspbian.
kube-config enable-master - Enable the master services and then kubernetes is ready to use
- FYI, etcd data will be stored in the /var/lib/etcd directory. Backup that directory if you have important data.
kube-config enable-worker - Enable the worker services and then kubernetes has a new node
kube-config enable-addon [addon] - Enable an addon
- Currently defined addons
- dns: Makes all services accessible via DNS
- registry: Makes a central docker registry
- kube-ui: Sets up an UI for Kubernetes. Experimental. Doesn't show anything useful just now, waiting for upstream.
kube-config disable-node - Disable Kubernetes on this node, reverting the enable actions, useful if something went wrong
kube-config disable - Synonym to disable-machine
kube-config disable-addon [addon] - Disable an addon, not the whole cluster
kube-config delete-data - Clean the /var/lib/etcd directory, where all master data is stored
kube-config info - Outputs some version information and info about your board and Kubernetes
kube-config help - Display this help
[root@kubemaster ~]#
Bring Up the Master
Bring up your Master with the following command:
[root@kubemaster ~]# kube-config enable-master
Bring Up the Workers
Bring up each Worker with the following command, run on the worker itself:
[root@kubeworker ~]# kube-config enable-worker
For the workers, you will need to point to your master's IP address.
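If you're not sure what address to give it, check on the master first (this assumes your Pi is wired in on eth0):
[root@kubemaster ~]# ip addr show eth0 | grep 'inet '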
Verify the Cluster
Give your master and workers some time to come up cleanly; it takes a moment for Hyperkube to start and bring up the other supporting containers.
When you're done, you can verify your running containers:
[root@kubemaster ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
42e9b805a1bb kubernetesonarm/kube2sky "/kube2sky -domain=c 29 minutes ago Up 29 minutes k8s_kube2sky.2ad64847_kube-dns-v8-i3hpk_kube-system_8c835512-8ca6-11e5-98e3-b827eb106f3b_82244e7d
88fc4ff701ff kubernetesonarm/exechealthz "/exechealthz '-cmd= 46 hours ago Up 46 hours k8s_healthz.7f43774d_kube-dns-v8-i3hpk_kube-system_8c835512-8ca6-11e5-98e3-b827eb106f3b_7eb8f92d
7865bef76c91 kubernetesonarm/skydns "/skydns -machines=h 46 hours ago Up 46 hours k8s_skydns.876a8ff7_kube-dns-v8-i3hpk_kube-system_8c835512-8ca6-11e5-98e3-b827eb106f3b_b8bedbf9
72c3fca16f97 kubernetesonarm/etcd "/usr/bin/etcd -data 46 hours ago Up 46 hours k8s_etcd.7ec48ca2_kube-dns-v8-i3hpk_kube-system_8c835512-8ca6-11e5-98e3-b827eb106f3b_a47beffd
c54b3ebb7cda kubernetesonarm/hyperkube "/hyperkube schedule 46 hours ago Up 46 hours k8s_scheduler.e5efe276_k8s-master-192.168.70.39_kube-system_447c171dfac8ae64dc585f8e9cbfa7e6_27d9b683
b42163315152 kubernetesonarm/hyperkube "/hyperkube apiserve 46 hours ago Up 46 hours k8s_apiserver.c358020f_k8s-master-192.168.70.39_kube-system_447c171dfac8ae64dc585f8e9cbfa7e6_51983436
83aa7fa6cd77 kubernetesonarm/hyperkube "/hyperkube controll 46 hours ago Up 46 hours k8s_controller-manager.7047e990_k8s-master-192.168.70.39_kube-system_447c171dfac8ae64dc585f8e9cbfa7e6_829bee2c
7fc79fc660d2 kubernetesonarm/pause "/pause" 46 hours ago Up 46 hours k8s_POD.7ad6c339_k8s-master-192.168.70.39_kube-system_447c171dfac8ae64dc585f8e9cbfa7e6_8b01bbe2
a2caa1b32959 kubernetesonarm/hyperkube "/hyperkube kubelet 46 hours ago Up 46 hours k8s-master
e752cd61bd87 kubernetesonarm/hyperkube "/hyperkube proxy -- 46 hours ago Up 46 hours k8s-worker-proxy
[root@kubemaster ~]#
You'll notice the following important containers running:
- kubernetesonarm/etcd (datastore/discovery)
- hyperkube scheduler (kubemaster service)
- hyperkube apiserver (kubemaster service)
- hyperkube controller (kubemaster service)
- hyperkube kubelet (kubemaster/worker service)
- hyperkube proxy (kubemaster/worker service)
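Beyond docker ps, you can also ask the API server directly from the master; once the workers are enabled and registered, each one should show up here:
[bjozsa@kubemaster ~]$ kubectl get nodes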
Additional Services
You'll notice that kube-config has some add-on features, which are very useful. I'm hosting my own Portus Registry Server internally, so the registry add-on isn't for me, but the DNS add-on is well worth enabling.
To enable the DNS add-on, run the following:
[root@kubemaster ~]# kube-config enable-addon dns
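As with the cluster itself, give the add-on a moment to pull and start, then check that the DNS pods are running:
[root@kubemaster ~]# kubectl get pods --namespace=kube-system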
Building ARM Containers
So you may remember that I said I'm using Hypriot? Well, I'm always interested in new environments, and I've decided to use it for development because it includes docker, docker-compose, and docker-machine. It's really an OS built for Docker! But some of the "easy" features made me a little uneasy about access/security (this is me being paranoid).
To be continued...and edited by my lovely wife :)