So now that you have an operational OpenStack deployment, it's time to build something really cool on top of it: a microservices environment, so you can use your OpenStack deployment as a true PaaS!

Containers, storage, and pods. Oh My!

Wrapping your head around what makes up a microservices architecture is actually one of the hardest steps, because of the confusion of terms and the aggressive development cycles of the major players. Kubernetes changed many things along the way (pre-version 1.x), but since the 1.x release things have calmed down quite a bit.

[Image: "information is a bitch"]

If you're in my world, the image above represents you pretty accurately! Sometimes we just get overwhelmed by the sheer amount of information available. As an example, here's a list of recent industry terms and players in the microservices space (which I'm not going to provide any background on just yet):

  • Docker
  • Kubernetes
  • CoreOS
  • Microkernel
  • RancherOS
  • Fleet
  • Mesos
  • Terraform
  • k8s
  • Atomic
  • Flannel
  • etcd

Oh my goodness! How do you make any sense of that if you're new to microservices? It's almost better if you're new to it all. I've actually found it easier to explain the architecture, reasoning, and technical definitions to my wife than to folks who've been in our industry for years. That's because you really need to throw out everything you thought you knew about computing and start over with a truly application-centric mindset.

Categorizing the Components

It's best to think of each of the components above as building blocks. Much of what's defined about microservices comes from Linux developers, who attempt to stay true (for the most part) to the Linux contract: make it small, and do it well. In rough terms, think of each of the components above as materials for building a house. You have your foundation (the kernel, or micro-OS), the framework (Docker and the Docker Engine), the electrical and plumbing (networking), and so on. I consider the contractors (the workers building the house) to be the orchestration tools. Sometimes the contractors bring their own tools, with locks (security components) to protect them, and they can make those tools available for others to use (open APIs) or lock them up (closed APIs). The contractors can also bring their own security elements to the project: they can put their own lock on the door to a room, locking others out and protecting the security of other components within the project. So let's break down each of these components.


Remember, Docker is the framework of our house, and the framework relies on a firm foundation to support the rest of the building materials. That foundation is the kernel. Each Docker container uses the host's kernel (at least today). There are some purpose-built operating systems we're going to consider for our microservices architecture project:

  • Atomic
  • CoreOS
  • RancherOS
  • Snappy
  • boot2docker

Note: There are others, but I'm limiting my scope to these players, because each of them brings something different to the table.

In this case, I am going with Atomic. CoreOS is spectacular, and I will write about it too, but for now...Atomic. Atomic was my choice because of RPM-OSTree, a really fascinating feature that is outside the scope of this walk-through (which is already longer than I originally intended).
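To give you a taste of what RPM-OSTree buys you, here's a hedged sketch of the kind of session you'd run on an Atomic host. The commands are real rpm-ostree subcommands, but the session is illustrative; exact output varies by release, and you need an actual Atomic host to run it.

```shell
# Show the deployed OS trees; the booted one is marked
rpm-ostree status

# Download and stage the newest tree; it only takes effect on reboot
rpm-ostree upgrade

# Changed your mind? Flip back to the previous tree atomically
rpm-ostree rollback
systemctl reboot
```

Because the whole OS is versioned as a tree, upgrade and rollback are atomic operations rather than package-by-package gambles; that's the rollback capability I keep coming back to.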


The reason I call Docker the "framework" is because that's really what it is! Docker defined a container format, introduced a wonderfully designed API, and did something amazing (this is the part I love): they encouraged everyone to use it! I'm talking about the Docker Hub registry, and if you haven't used it, I encourage you to! My personal belief is: if you want something to be successful, share it. That's why I'm writing this article in the first place.

Let's not forget that there are other container technologies available, but Docker is the one that shook up the industry and made everyone take notice. Because of the team's incredible work, Docker is now setting the standards for container technology (and for how to run containers). That's why we're all talking about it. Is "Docker" the container equivalent of "Kleenex"? Well, that's up for debate. Here's the list of contenders in the container space:

  • Docker (Obviously)
  • Rocket
  • LXD
  • Pivotal Garden

We're obviously going to choose Docker as our container standard. It plays well with others, and it respects the Linux contract we talked about earlier.
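To make the "framework" idea concrete, here's a minimal, hypothetical Dockerfile. The image name and the trivial web server are placeholders of my own choosing, not anything this walk-through depends on; it just shows the shape of the thing Docker standardized.

```dockerfile
# Start from a tiny base image
FROM busybox:latest

# Bake a trivial web root into the image
RUN mkdir -p /www && echo "hello from a container" > /www/index.html

# Serve it with busybox's built-in httpd, in the foreground
EXPOSE 80
CMD ["httpd", "-f", "-p", "80", "-h", "/www"]
```

You'd build it with `docker build -t hello-www .` and run it with `docker run -d -p 8080:80 hello-www`. The point is the portability: that same file produces the same container on any host running the Docker Engine.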


Next, we have to talk about networking. If you've ever run Docker containers and taken a quick look at your network interfaces, you'll notice that you now have a Docker bridge set up.

bash-3.2$ ssh docker@
docker@'s password: 
Boot2Docker version 1.8.1, build master : 7f12e95 - Thu Aug 13 03:24:56 UTC 2015
Docker version 1.8.1, build d12ea79
docker@kube-dev:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether fe:b8:69:48:4f:52 brd ff:ff:ff:ff:ff:ff
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:ae:c1:4a brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feae:c14a/64 scope link 
       valid_lft forever preferred_lft forever
4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:85:39:45 brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe85:3945/64 scope link 
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:07:f9:e1:84 brd ff:ff:ff:ff:ff:ff
    inet scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:7ff:fef9:e184/64 scope link 
       valid_lft forever preferred_lft forever

See that docker0 interface? Well, try getting around that without a networking solution. It's like trying to turn on a water faucet that isn't connected to the water main. You're not getting anywhere outside of your container environment until you address it. You need the plumbing to the rest of the world. There are projects that allow us to do this, and thus route to the outside world:

  • Pipework
  • Weave
  • Socketplane (a recent Docker acquisition)
  • Flannel

I'm choosing Flannel for this building block. It's simple, effective, and it gets me where I need to be with laser focus. It also respects the Linux contract, and that really pleases me.
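As a preview of just how simple Flannel is: its entire cluster-wide configuration is one JSON document stored under a well-known etcd key. The subnet below is only an example of my own; pick a range that doesn't collide with your existing networks, and note this requires a running etcd.

```shell
# Flannel's daemon (flanneld) reads its network config from this etcd key
etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
```

Each host's flanneld then leases a subnet out of that range and adjusts the Docker bridge to use it, so containers on different hosts can reach one another. That one key is the plumbing diagram for the whole house.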


Well, we need something to put this all together. We have our purpose-built OS, our container standard, and our networking, but the magic is really in the orchestration. The reason I compare this component to contractors is that you can have great contractors, experienced craftsmen with a wide range of expertise, who put together an excellent finished product (with many features and details), or you can have contractors with limited areas of focus and narrower skill sets. I really had to restrain myself from using an analogy with "bad" contractors, because that doesn't happen often in a project of this magnitude. There are just contractors with a limited focus, but everyone contributing to the overall microservices components is doing great work, and many of them respect the Linux contract. Respecting the contract will keep them around longer (in my opinion). These orchestration tools consist of the following:

  • Kubernetes
  • Mesos
  • Fleet

So, those of you who know microservices well are thinking, "Wait...Fleet alongside Kubernetes and Mesos? This guy just lost me." I'm only throwing in Fleet because of the scheduling aspect. Those of you who aren't aware of these players should really read up. Kubernetes and Mesos do many things, but one thing all three of them do is schedule container instances, and scheduling is an orchestration task with a limited focus. That's where the similarities break down: Kubernetes and Mesos do much more, which I'll discuss in more depth in a future article.

For now, I am going with Kubernetes, which should tell you a lot about where this article is going. If you want to learn Kubernetes, this is a great starting point.
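To give you a flavor of where we're headed, this is roughly what a pod definition looks like in the Kubernetes 1.x API. The names and image here are placeholders I made up for illustration, not part of the lab build yet.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-www
  labels:
    app: hello-www
spec:
  containers:
    - name: web
      image: busybox:latest
      command: ["httpd", "-f", "-p", "80", "-h", "/www"]
      ports:
        - containerPort: 80
```

You hand a manifest like this to the cluster with `kubectl create -f pod.yaml`, and Kubernetes (our "contractor") decides which host runs it; that placement decision is the scheduling piece all three tools share.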

Wait, Where's the General Contractor?

For those of you who understand construction, you know that there's always a boss: someone who keeps everyone and everything else in order and tracks the overall project and its goals. That, my friend, is where a distributed key-value store comes into play. The contractors can lock up various components of the project, but who makes sure the entire project is locked up and running correctly? Who makes sure the entire project is synchronized? The foreman, of course (and I'm not talking about The Foreman project; that's something different)! In this case, we have a few players in this space:

  • etcd (by the CoreOS team)
  • Zookeeper
  • doozer
  • Consul

I'm going with etcd, mainly because it's easy to use and has useful security features that overlay on top of some of the other tools we've chosen for our microservices architecture.
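If you've never touched etcd, its day-to-day surface is just keys and values. Here's a hedged sketch of a session against a running etcd (v2-era `etcdctl`); the key names are made up for illustration.

```shell
# Write a key, then read it back
etcdctl set /lab/greeting "hello"
etcdctl get /lab/greeting

# List a directory of keys
etcdctl ls /lab

# Block until someone changes the key -- this watch primitive is what
# lets tools like Flannel and Kubernetes stay synchronized
etcdctl watch /lab/greeting
```

That watch behavior is the foreman's clipboard: every component can record its state in one place and react the moment something else changes.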

Before we get started in the lab

For some of you, this will clear up a lot! For others, it may confuse you more (there are many options available). One thing to remember is that all of these tools are building blocks; if they respect the contract, they can all play nicely with one another. It's your job to figure out which tool is right for you and your organization. You don't want to bring a sledgehammer to a job that requires a 10 oz hammer. Truth be told, my choice of OS and my requirements for additional security and configuration management (like the OS rollback features in Atomic) dictated my direction in large part from the start. For example, I picked Atomic because of RPM-OSTree and some additional baked-in security features. As long as Atomic continues to respect the contract with the other players, I will continue to use it. Someone else may prefer CoreOS because they want automatic system updates and don't want to think about the micro-OS at all. It definitely gets confusing, but one of the best ways to clear up confusion is to try it. And that's what I'm here for.

So let's get started with our lab buildout. As with my OpenStack walk-through, head over to Part 2 so we can begin building our lab with Atomic, Docker, Flannel, Kubernetes, and of course etcd.