Make sure to stick around for the installation and lab demonstration included in this article.

Unless you've been living under a rock for the past couple of years, you've probably heard about Docker and have dealt with some disruptive co-worker who's constantly in your ear about how Docker's going to change the world. Perhaps you're that person and you're the one annoying someone who just doesn't have time to understand the impact. Well, now you can blow up their little world.

Full disclaimer here: The projects I'm going to outline in this article aren't exactly new, given how aggressively we're all expected to keep up with technology these days, but I do want to bring some attention to them and discuss the pitfalls that may come up if you try to use them. Ultimately, I want even more attention drawn to Docker (here's looking at you, large enterprises running private clouds) by way of an awesome little project called Kolla, and to discuss some opportunities that open up when you explore what containers are capable of.

On day 2 of the OpenStack Summit (Tuesday, April 26th, 2016), Alex Polvi had a small 10-minute window to make a huge impact on a crowd of OpenStack enthusiasts, and he challenged them to think differently about their OpenStack deployments. I think Alex knew exactly how powerful the moment would be when he said, "OK, I need you all to free your minds." We're used to OpenStack deployments being painful and difficult to manage, when the reality is that OpenStack is not one huge monolithic application. It's a collection of applications, each with its own development sub-team within the larger OpenStack community. (Alex's presentation is also available on YouTube.) OpenStack actually lends itself pretty well to being containerized. And since its applications can be containerized, why not use something like Kubernetes to manage them (or, in all fairness, DC/OS or Docker Datacenter)?

It's important to understand what Alex was really attempting to change, beyond showing that Tectonic would be a perfect place to run OpenStack. He's challenging us to change our mindset and to start a culture shift within the OpenStack community. It's the notion that we might actually be approaching things incorrectly. Maybe containers, container orchestrators, and private cloud solutions work more hand in hand with one another. To push these concepts, and to give you something useful to present to your leadership teams, I want to draw your attention to the OpenStack/Kolla project and walk you through installing it in your own environment.

OpenStack Kolla Project

Kolla is an OpenStack project under the OpenStack big tent. It has a lot of moving parts, including build procedures, deployment and target hosts, and an array of Docker concepts that may or may not be new to users who have played around with Docker in a lab scenario. However, if you've already built a home lab like the one I outlined in my article OpenStack for Everyone, then you can learn this deployment too. This is the third deployment option I've covered in the past 5 months.

Kolla’s mission is to provide production-ready containers and deployment tools for operating OpenStack clouds, and I would love to see more developers contributing to this project. If you have any questions about Kolla, please reach out to me and I will direct you to either the IRC channel or the appropriate developer. I'm just getting started with learning the concepts needed to contribute, and I'm particularly interested in the [OpenStack/Kolla-Kubernetes](https://github.com/openstack/kolla-kubernetes) project. We'd love to raise awareness for this repository and get more contributors.

NOTE: Be on the lookout for a new article coming soon, in which I'll be using five Intel NUC6I7KYK units to run my multi-node home lab deployments.


I also have an email out to PicoCluster to see if they could make me a custom case for my new multi-node rig: 20 cores, 160GB RAM, 2.5TB of SSD. I will be turning my Shuttle SH97R6 into a Ceph storage backend.

Kolla Installation

Now on to the good stuff! I'll lay out the groundwork for how I've deployed an AIO Docker deployment of OpenStack using the Kolla project. It's going to be covered in three main sections:

  • Preparing the local private registry
  • Building the Kolla images on a deployment host
  • Installing an AIO OpenStack deployment to a target host

Prepare your Registry

It's really important to have an onsite private registry for your deployment, and although it's not exactly required for an AIO deployment, it is required for a multi-node deployment (which I will document in a couple of weeks, once I receive my NUC6I7KYK nodes).

If you are doing a cloud-underlay within an OpenStack tenant, make sure you build the instance using Fedora 23 (fedora-23-x86_64). Technically, this should work just fine on the Atomic version as well; however, I ran into issues, so YMMV and use it at your own risk. I'm just trying to give you a sure-fire way to get up and running.

Once you've built the instance (or bare metal host), update the host, make sure it can be resolved and reached via DNS, and reboot if there are any kernel updates.

[fedora@fedreg01 ~]$ sudo dnf update -y
[fedora@fedreg01 ~]$ reboot
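
If you want a quick sanity check that the host resolves correctly in DNS before moving on, something like the following works (the fully qualified name below is a placeholder; use whatever name your registry host actually carries):

[fedora@fedreg01 ~]$ hostname -f
[fedora@fedreg01 ~]$ ping -c 1 fedreg01.{{your-domain}}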

Next, we want to install a couple of prerequisites: ntp and the atomic application. I know that Red Hat is building a lot of resources around OpenShift (which is wonderful), but I would prefer to run these applications using plain Docker commands. In this case, though, I just needed a registry, and I liked what I was seeing from the Atomic Private Registry (a full tour of which is outside the scope of this document). Atomic does come with Docker, which makes our install easier. Make sure the services are enabled so they come back up after reboots.

[fedora@fedreg01 ~]$ sudo dnf install -y atomic ntp
[fedora@fedreg01 ~]$ sudo systemctl enable docker
[fedora@fedreg01 ~]$ sudo systemctl enable ntpd
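
Enabling the services covers future reboots; if you also want docker and ntpd running right away without a reboot, start them now:

[fedora@fedreg01 ~]$ sudo systemctl start docker ntpd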

Now, I always add some user/group information at this point, but the choice is yours.

[fedora@fedreg01 ~]$ sudo groupadd -g 1001 nixadmins
[fedora@fedreg01 ~]$ sudo useradd -m -g nixadmins -G wheel,users -s /bin/bash {{username}}
[fedora@fedreg01 ~]$ sudo passwd {{username}}

Next, install and run the private registry. This will install and run OpenShift and Kubernetes, which we really like (yay, Kubernetes)!

[fedora@fedreg01 ~]$ sudo atomic install projectatomic/atomic-registry-quickstart {{registry-ip}}
[fedora@fedreg01 ~]$ sudo atomic run projectatomic/atomic-registry-quickstart {{registry-ip}}

Once that is up, what the hell...test it out!

[fedora@fedreg01 ~]$ curl -v localhost:8443/v1/healthz/ping
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8443 (#0)
> GET /v1/healthz/ping HTTP/1.1
> Host: localhost:8443
> User-Agent: curl/7.43.0
> Accept: */*
>

* Connection #0 to host localhost left intact

Alright! That looks great.

Now, here's a little tidbit that's going to frustrate the living crap out of you (and there's an open request to work around this issue): by default, tokens issued by the Atomic Registry are only good for 24 hours. So that's pretty secure, right? Maybe adding docker login support using service account tokens would be a good idea? Yes, I think so.

To change this, edit /etc/origin/master/master-config.yaml (for example, sudo vi /etc/origin/master/master-config.yaml) and change the following value:

  tokenConfig:
    accessTokenMaxAgeSeconds: 2592000
    authorizeTokenMaxAgeSeconds: 300

The new value accessTokenMaxAgeSeconds: 2592000 allows the token to stay around for 30 days. This isn't exactly ideal, but it's an OK band-aid for now.

After this change, you will need to restart the origin container on your host.

[fedora@fedreg01 ~]$ sudo docker ps
CONTAINER ID        IMAGE                                     COMMAND                  CREATED             STATUS              PORTS               NAMES
7616b236bf57        openshift/origin-docker-registry:latest   "/bin/sh -c 'DOCKER_R"   3 hours ago         Up 3 hours                              k8s_registry.b92852e8_docker-registry-1-cbmhy_default_11d3f204-1130-11e6-bae0-fa163e84d6fd_8868d77d
875b649329e2        cockpit/kubernetes                        "/usr/libexec/cockpit"   3 hours ago         Up 3 hours                              k8s_registry-console.86b68540_registry-console-1-v8pvz_default_1237740c-1130-11e6-bae0-fa163e84d6fd_ff093d7c
f6f21c0c33d7        openshift/origin-pod:v1.3.0-alpha.0       "/pod"                   3 hours ago         Up 3 hours                              k8s_POD.39600065_registry-console-1-v8pvz_default_1237740c-1130-11e6-bae0-fa163e84d6fd_62458f63
91934ba84d1f        openshift/origin-pod:v1.3.0-alpha.0       "/pod"                   4 days ago          Up 3 hours                              k8s_POD.20aa0058_docker-registry-1-cbmhy_default_11d3f204-1130-11e6-bae0-fa163e84d6fd_a5825d4b
90cf6518578d        openshift/origin                          "/usr/bin/openshift s"   4 days ago          Up 3 hours                              origin
[fedora@fedreg01 ~]$ sudo docker restart origin
90cf6518578d
[fedora@fedreg01 ~]$ 

Now you can use your Atomic Private Registry. Any PAM account should be able to log into the registry (this is why we added an account earlier), and once you're logged in you will see the token login command that you'll need to run on your build and target Kolla hosts. Keep track of this login command!

TROUBLESHOOTING: If you run into issues with the registry (I have recently), most issues can be resolved by restarting the containers on the private registry server. Because OpenShift Origin and Kubernetes are being used, the containers are particular about the order in which they are started. I have found that this order works (though I still need to verify it with the OpenShift team):

  1. openshift/origin
  2. openshift/origin-pods
  3. cockpit
  4. openshift/origin-docker-registry
  5. kubernetes

Here is an example:

[root@fedreg01 fedora]# docker ps
CONTAINER ID        IMAGE                                     COMMAND                  CREATED              STATUS              PORTS               NAMES
875b649329e2        cockpit/kubernetes                        "/usr/libexec/cockpit"   About a minute ago   Up 59 seconds                           k8s_registry-console.86b68540_registry-console-1-v8pvz_default_1237740c-1130-11e6-bae0-fa163e84d6fd_ff093d7c
f6f21c0c33d7        openshift/origin-pod:v1.3.0-alpha.0       "/pod"                   About a minute ago   Up About a minute                       k8s_POD.39600065_registry-console-1-v8pvz_default_1237740c-1130-11e6-bae0-fa163e84d6fd_62458f63
596bb5e8f4ff        openshift/origin-docker-registry:latest   "/bin/sh -c 'DOCKER_R"   About a minute ago   Up About a minute                       k8s_registry.b92852e8_docker-registry-1-cbmhy_default_11d3f204-1130-11e6-bae0-fa163e84d6fd_7b5e3877
7eaa8e7d33b5        9e75b0e141bc                              "/usr/libexec/cockpit"   4 days ago           Up 16 seconds                           k8s_registry-console.86b68540_registry-console-1-v8pvz_default_1237740c-1130-11e6-bae0-fa163e84d6fd_247f0b82
9801a6db845d        openshift/origin-pod:v1.3.0-alpha.0       "/pod"                   4 days ago           Up 17 seconds                           k8s_POD.39600065_registry-console-1-v8pvz_default_1237740c-1130-11e6-bae0-fa163e84d6fd_a421731b
91934ba84d1f        openshift/origin-pod:v1.3.0-alpha.0       "/pod"                   4 days ago           Up 18 seconds                           k8s_POD.20aa0058_docker-registry-1-cbmhy_default_11d3f204-1130-11e6-bae0-fa163e84d6fd_a5825d4b
90cf6518578d        openshift/origin                          "/usr/bin/openshift s"   4 days ago           Up 19 seconds                           origin
[root@fedreg01 fedora]# docker stop 90cf6518578d 91934ba84d1f 9801a6db845d 7eaa8e7d33b5 596bb5e8f4ff f6f21c0c33d7 875b649329e2
90cf6518578d
91934ba84d1f
9801a6db845d
7eaa8e7d33b5
596bb5e8f4ff
f6f21c0c33d7
875b649329e2
[root@fedreg01 fedora]# docker start 90cf6518578d 91934ba84d1f 9801a6db845d 7eaa8e7d33b5 596bb5e8f4ff f6f21c0c33d7 875b649329e2
90cf6518578d
91934ba84d1f
9801a6db845d
7eaa8e7d33b5
596bb5e8f4ff
f6f21c0c33d7
875b649329e2
[root@fedreg01 fedora]#

And the cluster will come up cleanly.

NOTE: Each time you restart the Atomic Private Registry cluster, you will have a new token issued. Make a note of the new login command.

Build the Kolla Images

Now you are ready to start with Kolla! Here are some details about how we're going to build this out: we are going to build Ubuntu containers from source, and we are going to build version 2.0.0 (which corresponds to the Mitaka release of OpenStack in Kolla's versioning).

So, you are going to build a base system using Ubuntu 14.04 LTS. It doesn't matter whether you choose to build this in OpenStack or on bare metal (see the note about nested virtualization if you choose OpenStack).

As I do with all of my systems, let's prepare the host first.

ubuntu@kolla-build:~$ sudo apt-get update -y && sudo apt-get upgrade -y && sudo apt-get install -y ntp python-pip python-dev libffi-dev libssl-dev gcc git
ubuntu@kolla-build:~$ sudo service ntp reload && sudo service ntp restart  
ubuntu@kolla-build:~$ echo "America/New_York" | sudo tee /etc/timezone  
ubuntu@kolla-build:~$ sudo dpkg-reconfigure --frontend noninteractive tzdata  

Next, we're going to need to update the kernel on our Ubuntu 14.04 LTS server.

ubuntu@kolla-build:~$ sudo apt-get install -y linux-image-generic-lts-wily
ubuntu@kolla-build:~$ sudo reboot 

Next, we want to install and prepare Docker to use with our new private registry server.

ubuntu@kolla-build:~$ curl -sSL https://get.docker.io | sudo bash
ubuntu@kolla-build:~$ sudo usermod -aG docker {{username}}

Make sure that you edit /etc/default/docker (for example, sudo vi /etc/default/docker) and ensure that the following DOCKER_OPTS line is declared:

DOCKER_OPTS="--insecure-registry {{registry-ip}}:5000"

Now restart Docker, and enter your token provided from the Atomic Registry.

ubuntu@kolla-build:~$ sudo service docker restart  
ubuntu@kolla-build:~$ sudo mount --make-shared /run  
ubuntu@kolla-build:~$ sudo docker login -p {{token}} -e unused -u unused {{registry-ip}}:5000
Warning: '-e' is deprecated, it will be removed soon. See usage.
Login Succeeded
ubuntu@kolla-build:~$ 

Ensure that you receive a Login Succeeded response from the build server. If you don't, verify your authentication token. If authentication still fails, restart the registry as outlined in the troubleshooting section above, or ask in the IRC channel #cockpit for further assistance. Those folks really rock and are extremely helpful!

Now you're ready to clone the project, install some of the pip prerequisites and test.

ubuntu@kolla-build:~$ sudo git clone https://github.com/openstack/kolla.git /root/kolla
ubuntu@kolla-build:~$ sudo su -
root@kolla-build:~# pip install kolla/
root@kolla-build:~# cd kolla/
root@kolla-build:~/kolla# pip install tox && pip install -U python-openstackclient && tox -e genconfig

Finally, build and push your packages to your private registry!

root@kolla-build:~/kolla# kolla-build --base ubuntu --type source --registry {{registry-ip}}:5000 --push

Once your images have been pushed to the Atomic Private Registry, you will see them all when you log into the registry console.

Atomic Private Registry Complete
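
Before (or instead of) logging into the registry console, a quick sanity check on the build host is to list the images that were just built and tagged for the registry. This is only a rough check; the kollaglue namespace matches the image names you'll see in the docker ps output later in this article:

root@kolla-build:~/kolla# docker images | grep kollaglue | head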

Prepare your AIO Host

We'll need to prepare our host. I'll cover an all-in-one (AIO) deployment first and follow up later with a multi-node deployment.

Prepare your host by updating it, installing the necessary packages for our installation, and enabling NTP on your host.

ubuntu@kolla-os-test:~$ sudo apt-get update -y && sudo apt-get upgrade -y && sudo apt-get install -y ansible ntp python-pip python-dev libffi-dev libssl-dev gcc git
ubuntu@kolla-os-test:~$ sudo service ntp reload && sudo service ntp restart
ubuntu@kolla-os-test:~$ echo "America/New_York" | sudo tee /etc/timezone
ubuntu@kolla-os-test:~$ sudo dpkg-reconfigure --frontend noninteractive tzdata

If you're doing this deployment within an OpenStack environment, make sure you enable nested virtualization.

echo 'options kvm_intel nested=1' | sudo tee -a /etc/modprobe.d/qemu-system-x86.conf
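
The option only takes effect once the kvm_intel module is reloaded (or after a reboot). If you want to confirm that nested virtualization is actually on, this quick check works on Intel hosts with the kvm_intel module loaded; a Y (or 1 on some kernels) means it's enabled:

cat /sys/module/kvm_intel/parameters/nested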

Next, we're going to need to update the kernel on our Ubuntu 14.04 LTS server.

ubuntu@kolla-os-test:~$ sudo apt-get install -y linux-image-generic-lts-wily
ubuntu@kolla-os-test:~$ sudo reboot

Next, prepare Docker on your target host(s) by installing it, adding any desired users to the docker group, and restarting the service.

ubuntu@kolla-os-test:~$ curl -sSL https://get.docker.io | sudo bash
ubuntu@kolla-os-test:~$ sudo usermod -aG docker {{user}}
ubuntu@kolla-os-test:~$ sudo initctl restart docker

Now, we need to set up Docker to talk to our registry. To do this, first edit /etc/default/docker and make sure the following DOCKER_OPTS line is declared:

DOCKER_OPTS="--insecure-registry {{registry-ip}}:5000"

You will run into errors if you don't have --insecure-registry declared in your config. This is because our Atomic registry offers a self-signed certificate by default (it's untrusted, not necessarily "insecure").

Restart Docker to commit your configuration changes.

ubuntu@kolla-os-test:~$ sudo service docker restart
ubuntu@kolla-os-test:~$ sudo mount --make-shared /run
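
If you want to confirm the daemon actually picked up the --insecure-registry flag after the restart, checking the running daemon's command line is a quick way to do it (a rough check; on Ubuntu 14.04 the upstart script passes DOCKER_OPTS straight to the daemon):

ubuntu@kolla-os-test:~$ ps aux | grep "[d]ocker"

You should see --insecure-registry {{registry-ip}}:5000 among the daemon's arguments.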

Now we can test Docker communication to our newly created registry. Before this will work though, we need to ensure our target host can talk to our registry. When we created the registry, a login command was presented to you. Use this command on your target host, and test your connection.

ubuntu@kolla-os-test:~$ sudo docker login -p {{registry-token}} -e unused -u unused {{registry-ip}}:5000

To prevent any confusion: this token will expire after 24 hours. Extending the token lifetime is covered earlier in this article (see the accessTokenMaxAgeSeconds change in the registry section).

Next, clone the Kolla repository on your target AIO host, and set up any prerequisites (if not completed, your pre-tests and deployment will fail).

ubuntu@kolla-os-test:~$ sudo git clone https://github.com/openstack/kolla.git /root/kolla
ubuntu@kolla-os-test:~$ sudo su -
root@kolla-os-test:~# pip install kolla/
root@kolla-os-test:~# cd kolla
root@kolla-os-test:~/kolla# cp -r etc/kolla /etc/
root@kolla-os-test:~/kolla# pip install tox && pip install -U python-openstackclient && tox -e genconfig

PRO TIP: If you want some really cool eye candy while your system is building, I suggest using Weave Scope. Weave Scope is a great tool for visualizing and managing your containers, on either plain Docker hosts or more complex orchestrators like Kubernetes, DC/OS, or Docker Datacenter. And luckily for us, it solves two problems at once: it simplifies management of our OpenStack cloud, and we can learn about the Kolla build process by watching it visually.

Simply install Weave Scope by issuing the following commands:

root@kolla-os-test:~/kolla# sudo wget -O /usr/local/bin/scope https://git.io/scope
root@kolla-os-test:~/kolla# sudo chmod a+x /usr/local/bin/scope
root@kolla-os-test:~/kolla# sudo scope launch

Navigate to http://{{kolla-ip}}:4040, and then continue building out the platform!
Weave Scope Buildout 1
Weave Scope Buildout 2
Weave Scope Buildout 3

Now you're ready to edit your environment using the /etc/kolla/globals.yml file. Here is a sample of mine, which will give you a default installation.

kolla_base_distro: "ubuntu"
kolla_install_type: "source"
openstack_release: "2.0.0"
kolla_internal_vip_address: "169.254.0.25"
kolla_internal_fqdn: "{{ kolla_internal_vip_address }}" # DO NOT CHANGE
kolla_external_vip_address: "{{ kolla_internal_vip_address }}" # DO NOT CHANGE
kolla_external_fqdn: "{{ kolla_external_vip_address }}" # DO NOT CHANGE
docker_registry: "{{registry-ip}}:5000" # CHANGE THIS VALUE TO MATCH
network_interface: "{{eth0}}" # MAY NEED TO CHANGE THIS
neutron_external_interface: "{{eth1}}" # MAY NEED TO CHANGE THIS
kolla_enable_tls_external: "yes"
kolla_external_fqdn_cert: "{{ node_config_directory }}/certificates/haproxy.pem"

PRO TIP: The kolla_enable_tls_external and kolla_external_fqdn_cert flags enable TLS security, but I'm assuming the names alone gave that away. To make this work correctly, either place your trusted certificate in the appropriate directory and change the kolla_external_fqdn_cert declaration, or create your own self-signed certificate with the command below; Kolla will build it for you.

root@kolla-os-test:~/kolla# kolla-ansible certificates

Generate passwords used for your deployment, and run your pre-checks.

root@kolla-os-test:~/kolla# kolla-genpwd
root@kolla-os-test:~/kolla# kolla-ansible prechecks

Now, that should all work for you because we explicitly locked in our tagged Kolla build (2.0.0) via openstack_release. After the prechecks run and succeed, you're ready to deploy!

root@kolla-os-test:~/kolla# kolla-ansible deploy

And your results should look similar to this.

root@kolla-os-test:~/kolla# docker ps
CONTAINER ID        IMAGE                                                                         COMMAND                 CREATED             STATUS              PORTS               NAMES
edd7708bba86        172.29.248.111:5000/kollaglue/ubuntu-source-horizon:2.0.0                     "kolla_start"           2 hours ago         Up 2 hours                              horizon
35a5972aacb1        172.29.248.111:5000/kollaglue/ubuntu-source-heat-engine:2.0.0                 "kolla_start"           2 hours ago         Up 2 hours                              heat_engine
63e7f609bb7a        172.29.248.111:5000/kollaglue/ubuntu-source-heat-api-cfn:2.0.0                "kolla_start"           2 hours ago         Up 2 hours                              heat_api_cfn
cfa51974d69c        172.29.248.111:5000/kollaglue/ubuntu-source-heat-api:2.0.0                    "kolla_start"           2 hours ago         Up 2 hours                              heat_api
802271d08281        172.29.248.111:5000/kollaglue/ubuntu-source-neutron-metadata-agent:2.0.0      "kolla_start"           2 hours ago         Up 2 hours                              neutron_metadata_agent
6a078e3aeb03        172.29.248.111:5000/kollaglue/ubuntu-source-neutron-l3-agent:2.0.0            "kolla_start"           2 hours ago         Up 2 hours                              neutron_l3_agent
a860ea064391        172.29.248.111:5000/kollaglue/ubuntu-source-neutron-dhcp-agent:2.0.0          "kolla_start"           2 hours ago         Up 2 hours                              neutron_dhcp_agent
3110e58f5d5c        172.29.248.111:5000/kollaglue/ubuntu-source-neutron-openvswitch-agent:2.0.0   "kolla_start"           2 hours ago         Up 2 hours                              neutron_openvswitch_agent
d9abceb88c86        172.29.248.111:5000/kollaglue/ubuntu-source-neutron-server:2.0.0              "kolla_start"           2 hours ago         Up 2 hours                              neutron_server
9b0ace8beb41        172.29.248.111:5000/kollaglue/ubuntu-source-openvswitch-vswitchd:2.0.0        "kolla_start"           2 hours ago         Up 2 hours                              openvswitch_vswitchd
b31517600c7e        172.29.248.111:5000/kollaglue/ubuntu-source-openvswitch-db-server:2.0.0       "kolla_start"           2 hours ago         Up 2 hours                              openvswitch_db
1395d5fff110        172.29.248.111:5000/kollaglue/ubuntu-source-nova-ssh:2.0.0                    "kolla_start"           2 hours ago         Up 2 hours                              nova_ssh
e487ee143076        172.29.248.111:5000/kollaglue/ubuntu-source-nova-compute:2.0.0                "kolla_start"           2 hours ago         Up 2 hours                              nova_compute
f0efd44e8701        172.29.248.111:5000/kollaglue/ubuntu-source-nova-libvirt:2.0.0                "kolla_start"           2 hours ago         Up 2 hours                              nova_libvirt
8b5fca3bf641        172.29.248.111:5000/kollaglue/ubuntu-source-nova-conductor:2.0.0              "kolla_start"           2 hours ago         Up 2 hours                              nova_conductor
debf05d722a3        172.29.248.111:5000/kollaglue/ubuntu-source-nova-scheduler:2.0.0              "kolla_start"           2 hours ago         Up 2 hours                              nova_scheduler
d941ce1ab27d        172.29.248.111:5000/kollaglue/ubuntu-source-nova-novncproxy:2.0.0             "kolla_start"           2 hours ago         Up 2 hours                              nova_novncproxy
b0ad198e7259        172.29.248.111:5000/kollaglue/ubuntu-source-nova-consoleauth:2.0.0            "kolla_start"           2 hours ago         Up 2 hours                              nova_consoleauth
cdebfa5f1a96        172.29.248.111:5000/kollaglue/ubuntu-source-nova-api:2.0.0                    "kolla_start"           2 hours ago         Up 2 hours                              nova_api
435b6f1fb4ba        172.29.248.111:5000/kollaglue/ubuntu-source-glance-api:2.0.0                  "kolla_start"           3 hours ago         Up 3 hours                              glance_api
4d730026768e        172.29.248.111:5000/kollaglue/ubuntu-source-glance-registry:2.0.0             "kolla_start"           3 hours ago         Up 3 hours                              glance_registry
e2b24e529be8        172.29.248.111:5000/kollaglue/ubuntu-source-keystone:2.0.0                    "kolla_start"           3 hours ago         Up 3 hours                              keystone
f7d321e9747f        172.29.248.111:5000/kollaglue/ubuntu-source-rabbitmq:2.0.0                    "kolla_start"           3 hours ago         Up 3 hours                              rabbitmq
96197a008138        172.29.248.111:5000/kollaglue/ubuntu-source-mariadb:2.0.0                     "kolla_start"           3 hours ago         Up 3 hours                              mariadb
3dd590a33709        172.29.248.111:5000/kollaglue/ubuntu-source-memcached:2.0.0                   "kolla_start"           3 hours ago         Up 3 hours                              memcached
6e7a12fd808d        172.29.248.111:5000/kollaglue/ubuntu-source-keepalived:2.0.0                  "kolla_start"           3 hours ago         Up 3 hours                              keepalived
8606c4e75d01        172.29.248.111:5000/kollaglue/ubuntu-source-haproxy:2.0.0                     "kolla_start"           3 hours ago         Up 3 hours                              haproxy
0c249f29d006        172.29.248.111:5000/kollaglue/ubuntu-source-cron:2.0.0                        "kolla_start"           3 hours ago         Up 3 hours                              cron
b2eea4042e1f        172.29.248.111:5000/kollaglue/ubuntu-source-kolla-toolbox:2.0.0               "/bin/sleep infinity"   3 hours ago         Up 3 hours                              kolla_toolbox
4a8a358b6bb4        172.29.248.111:5000/kollaglue/ubuntu-source-heka:2.0.0                        "kolla_start"           3 hours ago         Up 3 hours                              heka
root@kolla-os-test:~/kolla#

PRO TIP: I'll warn you now that you're going to run into an issue where you can't create projects. You'll receive a warning that there's no _member_ role. This is an easy fix, and we can create the role ourselves. As admin, navigate to the Identity / Roles section, select Create Role, and add the role _member_. It's really that straightforward; it just isn't created for us by default.
Add member Role
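
If you prefer the command line, the same role can be created with the OpenStack client. This is just a sketch, and it assumes you have admin credentials sourced on the host (for example, from an openrc file built with the passwords in /etc/kolla/passwords.yml):

root@kolla-os-test:~# openstack role create _member_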

Here are a couple of screenshots.
Dockerized Openstack Deployment

Openstack Deployed

Details About Your Deployment

One of the cool things about OpenStack projects in general is that each one is unique and has a different level of maturity. I highly suggest that you poke around once your deployment is ready. One thing you'll eventually find is that some management tools are baked right in (similar to OpenStack-Ansible). Below are some examples of this.

Configuring Services and Viewing Logs

Services are really easy to configure in Kolla. If you go to /etc/kolla/ you'll find the configuration directories for each of the services required by OpenStack. This is also true of the logs.
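
For example, here's a quick way to poke around (a minimal sketch; nova-api is just one directory I'm assuming will be present, so substitute whatever the listing actually shows on your host):

root@kolla-os-test:~# ls /etc/kolla/
root@kolla-os-test:~# cat /etc/kolla/nova-api/nova.conf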

Creating a Flat Public Network

At some point, I'm sure you're going to want to use your OpenStack deployment, right? Well, you're going to need a public network interface so that your instances can communicate with the outside world. So let's get started.

Look at the file /etc/kolla/neutron-server/ml2_conf.ini. By default, Kolla enables the following type drivers: flat, vlan, and vxlan. If you're using this as an AIO, you'll want to configure your public network type as flat. In ml2_conf.ini, you'll see that the physical network associated with flat networks is physnet1. This is what you're going to use for your public network. Below is an example of how you're going to configure this in Horizon.
Horizon Configure Public Network
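
If you'd rather create the network from the command line, the Mitaka-era neutron client can do the same thing. This is just a sketch, assuming you have admin credentials sourced; the network name is my own choice:

root@kolla-os-test:~# neutron net-create public --router:external --provider:network_type flat --provider:physical_network physnet1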

Once you save your network, we need to create a subnet. Create your subnet like I have below, and it won't overlap with the default public subnet from the OpenStack-Ansible AIO deployment I walked you through in a previous article.
Horizon Configure Public Subnet

Horizon Configure Public DHCP
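
The subnet can also be created with the neutron client. Again, a sketch only: the CIDR, gateway, and allocation pool below are placeholders, so substitute values that fit your own lab network:

root@kolla-os-test:~# neutron subnet-create public 192.168.100.0/24 --name public-subnet --gateway 192.168.100.1 --allocation-pool start=192.168.100.100,end=192.168.100.200 --disable-dhcp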

HAProxy Statistics

By default, the HAProxy statistics page is enabled. You can find the stats configuration in /etc/haproxy/haproxy.cfg inside the container, either by using Weave Scope to access the container or by entering it directly with docker exec -it haproxy bash on your OpenStack AIO target host. Look for the HAProxy configuration lines for the stats page.

root@kolla-os-test:/kolla# docker exec -it haproxy bash
(haproxy)[root@kolla-os-test /]# cat /etc/haproxy/haproxy.cfg
listen stats
   bind {{eth0-ip}}:1984
   mode http
   stats enable
   stats uri /
   stats refresh 15s
   stats realm Haproxy\ Stats
   stats auth openstack:{{haproxy-os-password}}

Navigate to your host: http://kolla-os-test:1984. Enter the credentials you found in the haproxy config file.
HAProxy Stats
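
If you'd rather check from the shell, the stats page uses HTTP basic auth, so a quick curl works too (using the same credentials you found in haproxy.cfg):

root@kolla-os-test:~# curl -u openstack:{{haproxy-os-password}} http://kolla-os-test:1984/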

Achievement Unlocked: Docker Options

Now that we've deployed our OpenStack environment with Docker, we have some interesting options: Kubernetes, DC/OS, or Docker Datacenter could be used as the control plane, security tools like Twistlock or StackRox may be leveraged, and tools like Weaveworks' Scope can be used to manage our cluster. Would you like to see this in action? I did.

Weave Scope:

Now we can use Weave Scope to monitor our OpenStack cluster! Simply install it like so:

root@kolla-os-test:~/kolla# sudo wget -O /usr/local/bin/scope https://git.io/scope
root@kolla-os-test:~/kolla# sudo chmod a+x /usr/local/bin/scope
root@kolla-os-test:~/kolla# sudo scope launch

Wait for the containers to come up, and log into the interface at: http://{{kolla-os-test-ip}}:4040.
Weave Scope

Now, you can access and control each individual container within the cluster.
Weave Scope Manage

So, what do you think about a Dockerized version of OpenStack now? If you didn't think that containers would change things before, then as Alex said "free your mind," and think again!