It's been a really long time since I've written an article, so I feel like I need to make this a good one. Sometimes I am just awestruck by the amount of awesomeness on GitHub! Without projects like Openstack, Docker, CoreOS, Atomic, Kubernetes, and more than I can even count or keep up with, my career would be completely different and nowhere near as enjoyable as it is today. I owe a great deal to these open source projects: because of them, I can truly say that what I do is my passion, and I really enjoy telling others about these amazing projects. I just can't keep all of this good information to myself. With all that said, I have another great project to share with you today.

Openstack + Ansible + Containers: One Incredible Project

As you can tell from my blog posts, I'm a huge fan of cloud automation and containers. Containers are the perfect unit of transport! Now add some Ansible orchestration and the result is...wait, Openstack...and it works as advertised?! I thought to myself this has to be too good to be true, right? Well, it is true. So let me share some information about it, and give you some background on why I think this project is so incredible.

Project Details

Project Repository

Project Documentation

Background

In 2015, I saw two videos that sent my curiosity into overdrive! The first video was a presentation at the Openstack Summit held in Vancouver, and the talk was presented by Rackspace. It covered a project originally started in Stackforge, but later moved to GitHub.

Deploying Openstack with Ansible

In the video, Rackspace developers explained why they were using Ansible for their Openstack Deployments. The result [for me] was a few profound takeaways:

  1. The Ansible deployment uses all upstream GitHub Openstack native repositories; this is a clean, unmodified Openstack native deployment. This means no management wrapper APIs, no "mysterious" wrapper for starting/stopping Linux services, and I can build in my own upstream projects (no plugins that force me to rebuild my environment from scratch)
  2. The deployment was production ready (unlike DevStack)
  3. The deployment was production scale-able (unlike DevStack)
  4. The deployment uses LXC containers as the deployment unit of management (so developers are presented with a more "true" Openstack development framework)
  5. Users can easily modify their Git sources (public or private)

Dude, seriously?! So yeah, these guys got my attention. My thoughts, "I'll check out Ansible when I get a chance," because at the time, my plate was pretty full like everyone else's.

Ansible for Deploying and Orchestrating Openstack

A few months passed before I could actually start looking into this "magical deployment" of an extremely complicated cloud environment, and then I got hit with another reminder of what I was missing. At the very next Openstack Summit, in Tokyo, a few folks compared all of the popular orchestration tools, including the new "cool kids" on the scene, Ansible and Salt.

[Comparing Ansible, Puppet, Salt and Chef. What's best for deploying and managing Openstack](https://www.openstack.org/summit/tokyo-2015/videos/presentation/chef-vs-puppet-vs-ansible-vs-salt-whats-best-for-deploying-and-managing-openstack)

Again, I had some very impactful take-aways from watching another Openstack Summit video! (I'm feeling great at this point).

If the solution is too complex, your company could ultimately lose some of its intended audience. Sysops teams simply won't use it, and if it's approached incorrectly (without formal training or operational buy-in), you'll quickly realize that you've wasted resources: both money and people's time. Sure, you have six or seven really strong users of your awesome, programmatically-driven IaC "product X". But if your team has twenty engineers, and those six have become dedicated to writing your deployment plans, what are you gaining? I feel like this is wasteful, and it can even impact team morale. Additionally, you're now tasked with finding talent who can orchestrate with "product X".

Ansible is very operations-driven, and its learning curve is gentle. If you're hiring sysops personnel, then Ansible will feel the most natural to them. Here is a list of considerations when looking at the learning curve for some IaC solutions:

  1. Ansible: Which takes very little time to learn, and is very operations-focused (operations-driven; YAML- or direct-CLI-based tasks; playbooks run tasks in sequence)
  2. Salt: Which pushes into client-less territory (very fast, efficient code that uses the concepts of grains, pillars, etc)
  3. Puppet: Which starts to introduce YAML concepts (client/server based model, with modules to perform tasks)
  4. Chef: Requires Ruby knowledge for many functions, and uses cooking references (cookbooks, recipes, knife to perform certain functions, etc)
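To make the Ansible point concrete, a playbook is just YAML read top to bottom. Here is a minimal, purely illustrative sketch (the host group and package are hypothetical, not from any deployment in this article):

```yaml
# Hypothetical playbook: install and start NTP on a group of hosts.
---
- hosts: compute_nodes        # illustrative inventory group name
  become: true
  tasks:
    - name: Install NTP
      apt:
        name: ntp
        state: present
    - name: Ensure NTP is running
      service:
        name: ntp
        state: started
```

That readability, tasks as a plain ordered list, is exactly why sysops teams pick it up so quickly.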

Then there's the question of Openstack support; that is, which tool has the most modules for supporting Openstack deployments, and for creating Openstack clusters with the tool itself. The order is as follows:

  1. Ansible: Which is the most supported by the Openstack governance, and has two massive deployment projects built with it
  2. Puppet: Which is supported by RDO and integrates well with ForeMan to provide Openstack Deployments
  3. Chef: For its Openstack modules/support
  4. Salt: Which doesn't have great Openstack module support, and doesn't have many projects to deploy "vanilla" Openstack Deployments.

Is Ansible the perfect "be-all, end-all"? Of course not. But the Openstack community does seem to treat Ansible as a "first-class citizen" (along with Puppet), and Ansible seems to beat everyone in terms of general ease of adoption.

NOTE: One way to see for yourself which tools are most used by the Openstack project (it's all publicly viewable on GitHub): go to https://github.com/openstack and filter the repositories for "Ansible", "Puppet", "Chef", and "Salt" to see what is actually being built with these automation tools. This spoke volumes to me when I was trying to find the right tool.

Openstack Deployment Options

There are a bunch of options to describe here. Pick yours below.

Manual: Openstack AIO in Less Than 20 Commands

So you're tired of the background and you just want the deployment? Fair enough. Let's get to it.

Install a base Ubuntu 14.04 LTS environment (VM or bare-metal) on your server. Make sure you at least have OpenSSH installed.

Steps 1 and 2 Make some basic host preparations. Update the hosts file with this machine's entry and any others that may need to be defined (minimal is fine), and make sure that your host has a correct DNS configuration.

    ubuntu@megatron:~$ sudo vi /etc/hosts
    ubuntu@megatron:~$ sudo vi /etc/resolv.conf
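For example, the hosts file might end up with entries like these (the addresses match my "megatron" host from later in this article; the domain is illustrative, so substitute your own values):

```
192.168.1.25    megatron megatron.example.com
172.29.236.100  megatron-mgmt
```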

Steps 3 - 7 Next, update the host; install, start, and configure NTP for your correct timezone; and then install the prerequisite packages that the Openstack-Ansible project will require. NOTE: If IPv6 is timing out for your host, then you will need to add '-o Acquire::ForceIPv4=true' at the end of every single apt-get command (this means just before each '&&').

    ubuntu@megatron:~$ sudo apt-get update -y && sudo apt-get upgrade -y && sudo apt-get install -y ntp
    ubuntu@megatron:~$ sudo service ntp reload && sudo service ntp restart
    ubuntu@megatron:~$ echo "America/New_York" | sudo tee /etc/timezone
    ubuntu@megatron:~$ sudo dpkg-reconfigure --frontend noninteractive tzdata
    ubuntu@megatron:~$ sudo apt-get install -y python-pip git bridge-utils debootstrap ifenslave ifenslave-2.6 lsof lvm2 ntp ntpdate openssh-server sudo tcpdump vlan
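For instance, with the IPv4 workaround applied, the first command in the block above would become:

```shell
# Same update/upgrade/install step, forcing apt to use IPv4
sudo apt-get update -y -o Acquire::ForceIPv4=true && \
  sudo apt-get upgrade -y -o Acquire::ForceIPv4=true && \
  sudo apt-get install -y ntp -o Acquire::ForceIPv4=true
```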

Steps 8 - 11 THIS NEXT SECTION IS FOR ADDING VLAN INTERFACES TO YOUR HOST. IF YOU DON'T NEED THIS SUPPORT, OR IF YOU'RE UNSURE, SKIP IT! Next, you will NEED to be root to make the following changes (simply prefixing sudo will not work, because the shell redirection below runs as your unprivileged user)! Add the following lines to /etc/modules, and then you must reboot.

    ubuntu@megatron:~$ sudo su -
    ubuntu@megatron:~$ echo 'bonding' >> /etc/modules
    ubuntu@megatron:~$ echo '8021q' >> /etc/modules
    ubuntu@megatron:~$ sudo reboot
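Once the host is back up, you can optionally confirm that both modules actually loaded before moving on:

```shell
# Should list the bonding and 8021q modules if they loaded at boot
lsmod | grep -E '^(bonding|8021q)'
```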

Steps 12 - 17 Finally, run the Openstack-Ansible specific commands (once your machine is back online), and you'll be rockin' Openstack. NOTE: <TAG> means either icehouse, juno, kilo, liberty, etc.

    ubuntu@megatron:~$ sudo su -
    ubuntu@megatron:~$ git clone -b <TAG> https://github.com/openstack/openstack-ansible.git /opt/openstack-ansible
    ubuntu@megatron:~$ cd /opt/openstack-ansible
    ubuntu@megatron:/opt/openstack-ansible$ scripts/bootstrap-ansible.sh
    ubuntu@megatron:/opt/openstack-ansible$ scripts/bootstrap-aio.sh
    ubuntu@megatron:/opt/openstack-ansible$ scripts/run-playbooks.sh

Once you've run scripts/run-playbooks.sh, the entire process will take anywhere from 40-120 minutes to complete (depending on your machine, VM, etc.), so I would recommend going to get a coffee, or continuing to read below.

NOTE: The first thing you're probably going to do is log into Horizon. To view all of the randomly generated passwords, refer to the file /etc/openstack_deploy/user_secrets.yml
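For example, to pull just the admin password out of that file, something like the following works (the variable name below is from my liberty-era deployment, and may differ between releases):

```shell
# Show the randomly generated Keystone admin password
grep keystone_auth_admin_password /etc/openstack_deploy/user_secrets.yml
```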

Sidenote: Selecting Another Version to Deploy

Maybe you want to use a different version of Openstack? This project is perfect for that, and you can do it by selecting a different branch before you deploy (Step 13). Openstack-Ansible is intended to be completely customizable, even down to the upstream project repositories.

Make sure you're in the directory /opt/openstack-ansible/ before reviewing or checking out a new branch.

    ubuntu@megatron:/opt/openstack-ansible$ git branch -a
    * liberty
    remotes/origin/HEAD -> origin/master
    remotes/origin/icehouse
    remotes/origin/juno
    remotes/origin/kilo
    remotes/origin/liberty
    remotes/origin/master
    ubuntu@megatron:/opt/openstack-ansible$

You can also select a specific tag if you want a specific sub-version within the branch. Here is a list of usable/selectable tag options:

Tags and Branches
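As a hypothetical example, checking out a specific sub-version looks like this (the tag below is illustrative; pick one that actually appears in the list):

```shell
cd /opt/openstack-ansible
git tag -l            # list the available tags
git checkout 12.0.5   # hypothetical liberty-series tag from the list above
```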

If you want more information about your current branch, use the -v flag.

    root@megatron:/opt/openstack-ansible# git branch -v
    * liberty 3bfe3ec Merge "Provide logrotate config for rsyncd on Swift storage hosts" into liberty
    root@megatron:/opt/openstack-ansible#

Cloud-Init: Cloud in a Cloud (AWS, Azure, Openstack)

So you want the Openstack, AWS, or Azure version? Use a single cloud-init configuration.

    #cloud-config
    apt_mirror: http://mirror.rackspace.com/ubuntu/
    package_upgrade: true
    packages:
    - git-core
    runcmd:
    - export ANSIBLE_FORCE_COLOR=true
    - export PYTHONUNBUFFERED=1
    - export REPO=https://github.com/openstack/openstack-ansible
    - export BRANCH=liberty
    - git clone -b ${BRANCH} ${REPO} /opt/openstack-ansible
    - cd /opt/openstack-ansible && scripts/bootstrap-ansible.sh
    - cd /opt/openstack-ansible && scripts/bootstrap-aio.sh
    - cd /opt/openstack-ansible && scripts/run-playbooks.sh
    output: { all: '| tee -a /var/log/cloud-init-output.log' }

That's it!
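On an Openstack cloud, for instance, you could feed that configuration in as user data when booting the instance (the image, flavor, and file names here are illustrative):

```shell
# Boot a VM that self-deploys the AIO via the cloud-config above
nova boot --image ubuntu-14.04 --flavor m1.xlarge \
  --user-data aio-cloud-config.yml openstack-aio
```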

Ansible Tower: Fully Customizable and Manageable

This is a project that I'm currently working on, and Tyler Cross from Ansible has been an amazing resource in getting me started. When I first started using Ansible, I was curious about Ansible Tower, a management and orchestration platform for Ansible. The Openstack-Ansible project lends itself really well to Ansible Tower because of its deployment flow; variables are used nearly everywhere throughout. That allows us to build on top of the upstream projects, supports backwards compatibility with older repository deployments, and lets us completely customize [nearly] anything about our deployment. This will take some work, but it's all possible because of their amazing Ansible practices!
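To give a feel for that variable-driven customizability: deployment sources can be overridden with plain variables in /etc/openstack_deploy/user_variables.yml. The fork URL and branch below are hypothetical, and the exact variable names can vary between releases:

```yaml
# Hypothetical override: build Glance from your own fork and branch
glance_git_repo: https://github.com/myorg/glance
glance_git_install_branch: my-feature-branch
```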

If you would like to contribute to this effort, please send me an email! I am still learning my way through Ansible and Ansible Tower, but if there is something that you would like to be implemented as a customizable variable don't hesitate to ask me for contribution access.

Here is the link to the project: Openstack-Ansible Deployment using Ansible Tower

Heat: Openstack for Openstack Environments

There are Heat orchestration deployment options too, and I will come back to document them later.

Simple: Single Command Deployment

There is also a single command to deploy an AIO instance (without Cloud-Config) to get you started. It's really this simple; are you convinced yet?

    curl https://raw.githubusercontent.com/openstack/openstack-ansible/liberty/scripts/run-aio-build.sh | sudo bash

Mind blown, right? So let's start talking about how we support this environment, and get into the guts of the post-deployment tasks.

Exploring the Environment

You've deployed the beast successfully, but now it's time to understand what you deployed.

The Containers

It's time for us to look around and see if we like this deployment. The first thing you're going to notice is that there are...containers? That's right: this whole deployment is scalable at a production level and designed to use containers as the unit of scale. You can choose not to use containers, but there's really no reason to deploy via native services. The container framework not only works, it works well!

JINKITOSXLT01:~ bjozsa$ ssh megatron
bjozsa@megatron's password:
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.19.0-25-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Wed Feb  3 10:48:05 EST 2016

  System load:     0.29               IP address for p4p1:       192.168.1.25
  Usage of /:      62.6% of 78.28GB   IP address for br-mgmt:    172.29.236.100
  Memory usage:    29%                IP address for br-storage: 172.29.244.100
  Swap usage:      0%                 IP address for br-vlan:    172.29.248.100
  Processes:       876                IP address for br-vxlan:   172.29.240.100
  Users logged in: 0                  IP address for lxcbr0:     10.255.255.1

  => There are 2 zombie processes.

  Graph this data and manage this system at:
    https://landscape.canonical.com/

7 packages can be updated.
7 updates are security updates.

Last login: Wed Feb  3 10:48:06 2016 from 192.168.1.180
bjozsa@megatron:~$ sudo su -
[sudo] password for bjozsa:
root@megatron:~# lxc-ls -f
NAME                                          STATE    IPV4                                           IPV6  AUTOSTART
-----------------------------------------------------------------------------------------------------------------------------------
aio1_aodh_container-72d3f185                  RUNNING  10.255.255.240, 172.29.238.111                 -     YES (onboot, openstack)
aio1_ceilometer_api_container-328e928e        RUNNING  10.255.255.154, 172.29.239.6                   -     YES (onboot, openstack)
aio1_ceilometer_collector_container-7007c54c  RUNNING  10.255.255.252, 172.29.237.136                 -     YES (onboot, openstack)
aio1_cinder_api_container-501ec49f            RUNNING  10.255.255.215, 172.29.236.192, 172.29.246.87  -     YES (onboot, openstack)
aio1_cinder_scheduler_container-e3abc1c0      RUNNING  10.255.255.248, 172.29.239.68                  -     YES (onboot, openstack)
aio1_galera_container-34abdcf1                RUNNING  10.255.255.112, 172.29.239.130                 -     YES (onboot, openstack)
aio1_galera_container-6cdcf3b0                RUNNING  10.255.255.121, 172.29.236.212                 -     YES (onboot, openstack)
aio1_galera_container-c5482364                RUNNING  10.255.255.181, 172.29.237.242                 -     YES (onboot, openstack)
aio1_glance_container-b038e088                RUNNING  10.255.255.15, 172.29.236.79, 172.29.245.107   -     YES (onboot, openstack)
aio1_heat_apis_container-b2ae0207             RUNNING  10.255.255.245, 172.29.238.154                 -     YES (onboot, openstack)
aio1_heat_engine_container-66b8dcd0           RUNNING  10.255.255.178, 172.29.237.64                  -     YES (onboot, openstack)
aio1_horizon_container-41a63229               RUNNING  10.255.255.172, 172.29.237.139                 -     YES (onboot, openstack)
aio1_horizon_container-84e57665               RUNNING  10.255.255.134, 172.29.237.102                 -     YES (onboot, openstack)
aio1_keystone_container-3343a7c4              RUNNING  10.255.255.200, 172.29.237.65                  -     YES (onboot, openstack)
aio1_keystone_container-f6d0fe97              RUNNING  10.255.255.142, 172.29.238.230                 -     YES (onboot, openstack)
aio1_memcached_container-354ea762             RUNNING  10.255.255.177, 172.29.236.213                 -     YES (onboot, openstack)
aio1_neutron_agents_container-9200183f        RUNNING  10.255.255.73, 172.29.237.208, 172.29.242.179  -     YES (onboot, openstack)
aio1_neutron_server_container-b217eee3        RUNNING  10.255.255.30, 172.29.237.222                  -     YES (onboot, openstack)
aio1_nova_api_metadata_container-5344e63a     RUNNING  10.255.255.161, 172.29.236.178                 -     YES (onboot, openstack)
aio1_nova_api_os_compute_container-8b471ec2   RUNNING  10.255.255.80, 172.29.239.238                  -     YES (onboot, openstack)
aio1_nova_cert_container-7a3b2fdc             RUNNING  10.255.255.126, 172.29.236.54                  -     YES (onboot, openstack)
aio1_nova_conductor_container-6acd6a76        RUNNING  10.255.255.65, 172.29.239.80                   -     YES (onboot, openstack)
aio1_nova_console_container-a8b545e4          RUNNING  10.255.255.251, 172.29.238.13                  -     YES (onboot, openstack)
aio1_nova_scheduler_container-402c7f54        RUNNING  10.255.255.253, 172.29.237.74                  -     YES (onboot, openstack)
aio1_rabbit_mq_container-80f2ac43             RUNNING  10.255.255.159, 172.29.239.200                 -     YES (onboot, openstack)
aio1_rabbit_mq_container-8194fb70             RUNNING  10.255.255.4, 172.29.238.146                   -     YES (onboot, openstack)
aio1_rabbit_mq_container-f749998a             RUNNING  10.255.255.36, 172.29.238.131                  -     YES (onboot, openstack)
aio1_repo_container-27d433aa                  RUNNING  10.255.255.89, 172.29.237.156                  -     YES (onboot, openstack)
aio1_repo_container-2d99ae62                  RUNNING  10.255.255.224, 172.29.238.71                  -     YES (onboot, openstack)
aio1_rsyslog_container-1fa56f87               RUNNING  10.255.255.52, 172.29.236.243                  -     YES (onboot, openstack)
aio1_swift_proxy_container-ff484b5c           RUNNING  10.255.255.11, 172.29.238.210, 172.29.247.147  -     YES (onboot, openstack)
aio1_utility_container-18aff51d               RUNNING  10.255.255.186, 172.29.237.14                  -     YES (onboot, openstack)
root@megatron:~# lxc-attach -n aio1_utility_container-18aff51d
root@aio1_utility_container-18aff51d:~# source openstack/rc/openrc
Please enter your OpenStack Password:
root@aio1_utility_container-18aff51d:~# openstack flavor list
+-----+----------------+-------+------+-----------+-------+-----------+
| ID  | Name           |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+-----+----------------+-------+------+-----------+-------+-----------+
| 01  | J1.MIC.5M.10XG |   512 |   10 |         0 |     1 | True      |
| 02  | J1.MAC.1G.20XG |  1024 |   20 |         0 |     1 | True      |
| 03  | J1.SML.2G.40XG |  2048 |   40 |         0 |     1 | True      |
| 04  | J1.MED.4G.100G |  4096 |  100 |         0 |     1 | True      |
| 05  | J1.LRG.4G.125G |  4096 |  125 |         0 |     1 | True      |
| 06  | J1.XLG.8G.150G |  8192 |  150 |         0 |     1 | True      |
| 07  | J2.MIC.2G.20XG |  2048 |   20 |         0 |     1 | True      |
| 08  | J2.MAC.4G.40XG |  4096 |   40 |         0 |     1 | True      |
| 09  | J2.SML.8G.80XG |  8192 |   80 |         0 |     1 | True      |
| 1   | m1.tiny        |   512 |    1 |         0 |     1 | True      |
| 10  | J2.MED.16.100G | 16384 |  100 |         0 |     1 | True      |
| 11  | J2.LRG.32.160G | 32768 |  160 |         0 |     2 | True      |
| 12  | J2.XLG.32.250G | 32768 |  250 |         0 |     2 | True      |
| 13  | J3.MIC.5M.40XG |   512 |   40 |         0 |     1 | True      |
| 14  | J3.MAC.1G.80XG |  1024 |   80 |         0 |     1 | True      |
| 15  | J3.SML.2G.100G |  2048 |  100 |         0 |     1 | True      |
| 16  | J3.MED.4G.150G |  4048 |  150 |         0 |     1 | True      |
| 17  | J3.LRG.8G.200G |  8192 |  200 |         0 |     1 | True      |
| 18  | J3.XLG.16.150G | 16384 |  150 |         0 |     1 | True      |
| 19  | J4.MIC.1G.10XG |  1024 |   10 |         0 |     1 | True      |
| 2   | m1.small       |  2048 |   20 |         0 |     1 | True      |
| 20  | J4.MAC.2G.20XG |  2048 |   20 |         0 |     1 | True      |
| 201 | tempest1       |   256 |    1 |         0 |     1 | True      |
| 202 | tempest2       |   512 |    1 |         0 |     1 | True      |
| 21  | J4.SML.4G.20XG |  4096 |   20 |         0 |     1 | True      |
| 22  | J4.MED.8G.40XG |  8192 |   40 |         0 |     2 | True      |
| 23  | J4.LRG.16.40XG | 16384 |   40 |         0 |     2 | True      |
| 24  | J4.XLG.32.40XG | 32768 |   40 |         0 |     4 | True      |
| 3   | m1.medium      |  4096 |   40 |         0 |     2 | True      |
| 4   | m1.large       |  8192 |   80 |         0 |     4 | True      |
| 5   | m1.xlarge      | 16384 |  160 |         0 |     8 | True      |
+-----+----------------+-------+------+-----------+-------+-----------+
root@aio1_utility_container-18aff51d:~#

As you can see, everything is running in containers!

The Ansible Groups and Container Names

Now we really want to look at the next most important thing for you to understand: the Ansible groups. These are really important when you want to run tasks against your environment using Ansible (rather than doing things manually). If you want to manage the containers manually, you can still do that! It's your time to waste, not mine. But if automating tasks appeals to you, then this is something you'll want to understand better! Luckily, this project makes things incredibly easy for you. Just navigate to the /opt/openstack-ansible/ directory, and run the script ./scripts/inventory-manage.py -G. An example of this is shown below.

    root@megatron:/opt/openstack-ansible# ./scripts/inventory-manage.py -G
     +--------------------------------+----------------------------------------------+
     | groups                         | container_name                               |
     +--------------------------------+----------------------------------------------+
     | aodh_container                 | aio1_aodh_container-38a780b7                 |
     | ceilometer_collector_container | aio1_ceilometer_collector_container-ed5bb27a |
     | utility_container              | aio1_utility_container-7b75ef4b              |
     | cinder_scheduler_container     | aio1_cinder_scheduler_container-69a98939     |
     | rsyslog                        | aio1_rsyslog_container-66ae2861              |
     | swift_proxy_container          | aio1_swift_proxy_container-c86ae522          |
     | nova_api_metadata              | aio1_nova_api_metadata_container-401c7599    |
     | neutron_server_container       | aio1_neutron_server_container-1ee5a4fd       |
     | nova_api_os_compute            | aio1_nova_api_os_compute_container-66728bd4  |
     | nova_cert                      | aio1_nova_cert_container-aabe52f6            |
     | pkg_repo                       | aio1_repo_container-4fc3fb96                 |
     |                                | aio1_repo_container-0ad31d6b                 |
     | neutron_agents_container       | aio1_neutron_agents_container-f16cc94c       |
     | nova_api_os_compute_container  | aio1_nova_api_os_compute_container-66728bd4  |
     | shared-infra_all               | aio1                                         |
     |                                | aio1_utility_container-7b75ef4b              |
     |                                | aio1_memcached_container-1fc8e6b0            |
     |                                | aio1_galera_container-382c8874               |
     |                                | aio1_galera_container-f93d66b1               |
     |                                | aio1_galera_container-397db625               |
     |                                | aio1_rabbit_mq_container-59b0ebdb            |
     |                                | aio1_rabbit_mq_container-a5ca3d38            |
     |                                | aio1_rabbit_mq_container-7a901a26            |
     | ceilometer_api_container       | aio1_ceilometer_api_container-658df495       |
     | nova_console_container         | aio1_nova_console_container-ffec93bd         |
     | aio1_containers                | aio1_nova_conductor_container-97d030c5       |
     |                                | aio1_aodh_container-38a780b7                 |
     |                                | aio1_ceilometer_collector_container-ed5bb27a |
     |                                | aio1_horizon_container-9472f844              |
     |                                | aio1_horizon_container-73488867              |
     |                                | aio1_utility_container-7b75ef4b              |
     |                                | aio1_keystone_container-a64a3cd3             |
     |                                | aio1_keystone_container-43955d1a             |
     |                                | aio1_cinder_scheduler_container-69a98939     |
     |                                | aio1_nova_cert_container-aabe52f6            |
     |                                | aio1_swift_proxy_container-c86ae522          |
     |                                | aio1_neutron_server_container-1ee5a4fd       |
     |                                | aio1_repo_container-4fc3fb96                 |
     |                                | aio1_repo_container-0ad31d6b                 |
     |                                | aio1_glance_container-da1bd1a8               |
     |                                | aio1_neutron_agents_container-f16cc94c       |
     |                                | aio1_nova_api_os_compute_container-66728bd4  |
     |                                | aio1_ceilometer_api_container-658df495       |
     |                                | aio1_nova_api_metadata_container-401c7599    |
     |                                | aio1_memcached_container-1fc8e6b0            |
     |                                | aio1_cinder_api_container-d397a5b0           |
     |                                | aio1_galera_container-382c8874               |
     |                                | aio1_galera_container-397db625               |
     |                                | aio1_galera_container-f93d66b1               |
     |                                | aio1_nova_scheduler_container-b9885d7e       |
     |                                | aio1_rsyslog_container-66ae2861              |
     |                                | aio1_rabbit_mq_container-59b0ebdb            |
     |                                | aio1_rabbit_mq_container-7a901a26            |
     |                                | aio1_rabbit_mq_container-a5ca3d38            |
     |                                | aio1_nova_console_container-ffec93bd         |
     |                                | aio1_heat_apis_container-cb9c8304            |
     |                                | aio1_heat_engine_container-4145b1be          |
     | neutron_server                 | aio1_neutron_server_container-1ee5a4fd       |
     | swift-proxy_all                | aio1                                         |
     |                                | aio1_swift_proxy_container-c86ae522          |
     | rabbitmq                       | aio1_rabbit_mq_container-59b0ebdb            |
     |                                | aio1_rabbit_mq_container-a5ca3d38            |
     |                                | aio1_rabbit_mq_container-7a901a26            |
     | heat_api_cfn                   | aio1_heat_apis_container-cb9c8304            |
     | nova_scheduler_container       | aio1_nova_scheduler_container-b9885d7e       |
     | cinder_api                     | aio1_cinder_api_container-d397a5b0           |
     | metering-alarm_all             | aio1_aodh_container-38a780b7                 |
     |                                | aio1                                         |
     | neutron_metadata_agent         | aio1_neutron_agents_container-f16cc94c       |
     | keystone                       | aio1_keystone_container-a64a3cd3             |
     |                                | aio1_keystone_container-43955d1a             |
     | nova_api_metadata_container    | aio1_nova_api_metadata_container-401c7599    |
     | ceilometer_agent_notification  | aio1_ceilometer_api_container-658df495       |
     | memcached                      | aio1_memcached_container-1fc8e6b0            |
     | nova_conductor_container       | aio1_nova_conductor_container-97d030c5       |
     | aodh_api                       | aio1_aodh_container-38a780b7                 |
     | nova_conductor                 | aio1_nova_conductor_container-97d030c5       |
     | neutron_metering_agent         | aio1_neutron_agents_container-f16cc94c       |
     | horizon                        | aio1_horizon_container-73488867              |
     |                                | aio1_horizon_container-9472f844              |
     | os-infra_all                   | aio1_nova_conductor_container-97d030c5       |
     |                                | aio1                                         |
     |                                | aio1_horizon_container-73488867              |
     |                                | aio1_horizon_container-9472f844              |
     |                                | aio1_nova_cert_container-aabe52f6            |
     |                                | aio1_glance_container-da1bd1a8               |
     |                                | aio1_nova_api_os_compute_container-66728bd4  |
     |                                | aio1_nova_api_metadata_container-401c7599    |
     |                                | aio1_nova_scheduler_container-b9885d7e       |
     |                                | aio1_nova_console_container-ffec93bd         |
     |                                | aio1_heat_apis_container-cb9c8304            |
     |                                | aio1_heat_engine_container-4145b1be          |
     | repo_container                 | aio1_repo_container-4fc3fb96                 |
     |                                | aio1_repo_container-0ad31d6b                 |
     | identity_all                   | aio1                                         |
     |                                | aio1_keystone_container-a64a3cd3             |
     |                                | aio1_keystone_container-43955d1a             |
     | keystone_container             | aio1_keystone_container-a64a3cd3             |
     |                                | aio1_keystone_container-43955d1a             |
     | swift_proxy                    | aio1_swift_proxy_container-c86ae522          |
     | nova_cert_container            | aio1_nova_cert_container-aabe52f6            |
     | nova_console                   | aio1_nova_console_container-ffec93bd         |
     | aodh_alarm_notifier            | aio1_aodh_container-38a780b7                 |
     | utility                        | aio1_utility_container-7b75ef4b              |
     | glance_container               | aio1_glance_container-da1bd1a8               |
     | log_all                        | aio1_rsyslog_container-66ae2861              |
     |                                | aio1                                         |
     | memcached_container            | aio1_memcached_container-1fc8e6b0            |
     | cinder_api_container           | aio1_cinder_api_container-d397a5b0           |
     | aodh_alarm_evaluator           | aio1_aodh_container-38a780b7                 |
     | neutron_l3_agent               | aio1_neutron_agents_container-f16cc94c       |
     | ceilometer_collector           | aio1_ceilometer_collector_container-ed5bb27a |
     | rabbit_mq_container            | aio1_rabbit_mq_container-59b0ebdb            |
     |                                | aio1_rabbit_mq_container-7a901a26            |
     |                                | aio1_rabbit_mq_container-a5ca3d38            |
     | heat_api_cloudwatch            | aio1_heat_apis_container-cb9c8304            |
     | aodh_listener                  | aio1_aodh_container-38a780b7                 |
     | metering-infra_all             | aio1_ceilometer_collector_container-ed5bb27a |
     |                                | aio1                                         |
     |                                | aio1_ceilometer_api_container-658df495       |
     | heat_engine_container          | aio1_heat_engine_container-4145b1be          |
     | storage-infra_all              | aio1                                         |
     |                                | aio1_cinder_scheduler_container-69a98939     |
     |                                | aio1_cinder_api_container-d397a5b0           |
     | galera                         | aio1_galera_container-382c8874               |
     |                                | aio1_galera_container-f93d66b1               |
     |                                | aio1_galera_container-397db625               |
     | horizon_container              | aio1_horizon_container-9472f844              |
     |                                | aio1_horizon_container-73488867              |
     | neutron_agent                  | aio1_neutron_agents_container-f16cc94c       |
     | neutron_lbaas_agent            | aio1_neutron_agents_container-f16cc94c       |
     | heat_api                       | aio1_heat_apis_container-cb9c8304            |
     | glance_registry                | aio1_glance_container-da1bd1a8               |
     | ceilometer_agent_central       | aio1_ceilometer_api_container-658df495       |
     | galera_container               | aio1_galera_container-382c8874               |
     |                                | aio1_galera_container-397db625               |
     |                                | aio1_galera_container-f93d66b1               |
     | network_all                    | aio1                                         |
     |                                | aio1_neutron_server_container-1ee5a4fd       |
     |                                | aio1_neutron_agents_container-f16cc94c       |
     | glance_api                     | aio1_glance_container-da1bd1a8               |
     | neutron_dhcp_agent             | aio1_neutron_agents_container-f16cc94c       |
     | repo-infra_all                 | aio1_repo_container-4fc3fb96                 |
     |                                | aio1                                         |
     |                                | aio1_repo_container-0ad31d6b                 |
     | neutron_linuxbridge_agent      | aio1_neutron_agents_container-f16cc94c       |
     |                                | aio1                                         |
     | heat_engine                    | aio1_heat_engine_container-4145b1be          |
     | cinder_scheduler               | aio1_cinder_scheduler_container-69a98939     |
     | nova_scheduler                 | aio1_nova_scheduler_container-b9885d7e       |
     | ceilometer_api                 | aio1_ceilometer_api_container-658df495       |
     | rsyslog_container              | aio1_rsyslog_container-66ae2861              |
     | heat_apis_container            | aio1_heat_apis_container-cb9c8304            |
     +--------------------------------+----------------------------------------------+
     root@megatron:/opt/openstack-ansible#

As you can see, there are a lot of groups! I create custom groups for my own uses, and I'll explain this in more detail in the Operations section below. For now, I want to tell you more about Openstack services.
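These groups are directly usable as Ansible ad-hoc targets. Here's a hedged sketch (assuming the project's dynamic inventory is wired up, as it is by default when you run commands from the playbooks directory) that pings every container in the galera group from the listing above:

```shell
# Hedged sketch: target any inventory group ad hoc with Ansible. Assumes the
# openstack-ansible dynamic inventory, which is configured by default when
# commands are run from the playbooks directory.
cd /opt/openstack-ansible/playbooks

# Ping all three galera containers shown in the group listing.
ansible galera -m ping
```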

Openstack Services

Openstack has a lot of services to keep up with, and adding a lot of containers, groups, and other management responsibilities may not seem to help a whole lot. I can assure you that this has been made easy too.

What we're going to do is cat and grep a file to figure out what services may be running on a particular Openstack node-type.

First, navigate to the same /opt/openstack-ansible directory we use all the time (you should start seeing a pattern here). Next, list the contents of /opt/openstack-ansible/playbooks/roles/ and grep for anything containing os_.

    root@megatron:/opt/openstack-ansible# ls -las playbooks/roles/ | grep os_
     4 drwxr-xr-x  7 root root 4096 Feb  6 20:34 os_aodh
     4 drwxr-xr-x  8 root root 4096 Feb  6 20:34 os_ceilometer
     4 drwxr-xr-x  8 root root 4096 Feb  6 20:34 os_cinder
     4 drwxr-xr-x  7 root root 4096 Feb  6 20:34 os_glance
     4 drwxr-xr-x  7 root root 4096 Feb  6 20:34 os_heat
     4 drwxr-xr-x  7 root root 4096 Feb  6 20:34 os_horizon
     4 drwxr-xr-x  9 root root 4096 Feb  6 20:34 os_keystone
     4 drwxr-xr-x  9 root root 4096 Feb  6 20:34 os_neutron
     4 drwxr-xr-x  8 root root 4096 Feb  6 20:34 os_nova
     4 drwxr-xr-x  7 root root 4096 Feb  6 20:34 os_swift
     4 drwxr-xr-x  6 root root 4096 Feb  6 20:34 os_swift_sync
     4 drwxr-xr-x  6 root root 4096 Feb  6 20:34 os_tempest
     root@megatron:/opt/openstack-ansible#

Great! Now we have our Openstack node-types. Next, we want to grep the contents of /opt/openstack-ansible/playbooks/roles/os_<nodetype>/defaults/main.yml, as shown below.

    root@megatron:/opt/openstack-ansible# cat playbooks/roles/os_nova/defaults/main.yml | grep ": nova-"
     nova_program_name: nova-api-os-compute
     nova_spice_program_name: nova-spicehtml5proxy
     nova_novncproxy_program_name: nova-novncproxy
     nova_metadata_program_name: nova-api-metadata
     nova_cert_program_name: nova-cert
     nova_compute_program_name: nova-compute
     nova_conductor_program_name: nova-conductor
     nova_consoleauth_program_name: nova-consoleauth
     nova_scheduler_program_name: nova-scheduler
     root@megatron:/opt/openstack-ansible#

Notice the format closely: cat playbooks/roles/os_<nodetype>/defaults/main.yml | grep ": <nodetype>-", even down to the hyphen, because that's really important. So what we're concerned with are the following nova services.

    nova-api-os-compute
    nova-spicehtml5proxy
    nova-novncproxy
    nova-api-metadata
    nova-cert
    nova-compute
    nova-conductor
    nova-consoleauth
    nova-scheduler

So if you ever need to restart <nodetype>-<service>, you can connect to the aio1_<nodetype>_<service> container and perform a service <nodetype>-<service> start|stop|restart. Better yet, we can do this the Ansible way.
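For example, here's a hedged sketch of the Ansible way, using Ansible's service module against an inventory group instead of attaching to each container by hand (group and service names come from the inventory and grep output above; this assumes the dynamic inventory is configured, as it is when running from the playbooks directory):

```shell
# Hedged sketch: restart nova-scheduler everywhere it runs by targeting the
# nova_scheduler inventory group with Ansible's service module.
cd /opt/openstack-ansible/playbooks
ansible nova_scheduler -m service -a "name=nova-scheduler state=restarted"
```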

Upgrading the Environment

When you're ready to upgrade the environment (to the latest minor versions), perform the following steps.

First, you will need to synchronize your local repository with any upstream changes. Make sure to do this in the /opt/openstack-ansible/ directory.

    root@megatron:/opt/openstack-ansible# git fetch --all
    Fetching origin
    remote: Counting objects: 421, done.
    remote: Total 421 (delta 287), reused 288 (delta 287), pack-reused 133
    Receiving objects: 100% (421/421), 57.43 KiB | 0 bytes/s, done.
    Resolving deltas: 100% (300/300), completed with 133 local objects.
    From https://github.com/openstack/openstack-ansible
     dca9d86..67ddf87 liberty -> origin/liberty
     4c8bba8..3770bb5 kilo -> origin/kilo
     cb007b0..191f4c3 master -> origin/master
     * [new tag] 11.2.9 -> 11.2.9
     * [new tag] 12.0.6 -> 12.0.6
    root@megatron:/opt/openstack-ansible#

After this has completed, you'll see that two branches were updated (in this case Kilo = 11.2.9 and Liberty = 12.0.6). What we need to do next is check out the updated tag. NOTE: Updates can still exist within the same tag, and you will know this when you see a * [new tag] indicator.

     root@megatron:/opt/openstack-ansible# git checkout 12.0.6
     Note: checking out '12.0.6'.
     You are in 'detached HEAD' state. You can look around, make experimental
     changes and commit them, and you can discard any commits you make in this
     state without impacting any branches by performing another checkout.
     If you want to create a new branch to retain commits you create, you may
     do so (now or later) by using -b with the checkout command again. Example:
     git checkout -b new_branch_name
     HEAD is now at 972b41a... Merge "Update Defcore test list function" into liberty
     root@megatron:/opt/openstack-ansible#
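If the detached HEAD message looks scary, don't worry; it's normal when checking out a tag. Here's a throwaway illustration (in a scratch repository, not the openstack-ansible tree) showing that a tag checkout is harmless and that git describe --tags confirms which tag you're on:

```shell
# Throwaway illustration (not the openstack-ansible repo): checking out a tag
# leaves you in detached HEAD, and 'git describe --tags' reports that tag.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
git tag 12.0.6
git checkout -q 12.0.6
git describe --tags
```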

Next, update RabbitMQ:

     root@megatron:/opt/openstack-ansible# openstack-ansible -e rabbitmq_upgrade=true \
       rabbitmq-install.yml

Next, update the Utility Container:

     root@megatron:/opt/openstack-ansible# openstack-ansible -e pip_install_options="--force-reinstall" \
       utility-install.yml

Finally, update all of the Openstack Services:

     root@megatron:/opt/openstack-ansible# openstack-ansible -e pip_install_options="--force-reinstall" \
        setup-openstack.yml

Make sure to check all of your services when you are done, to ensure that everything is running.
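One hedged way to spot-check services is from the utility container, which ships an openrc file with admin credentials (the container name below is from my AIO inventory shown earlier; substitute your own):

```shell
# Hypothetical post-upgrade check: attach to the utility container and ask
# nova which services are up and enabled. The container name comes from the
# group listing shown earlier; yours will differ.
lxc-attach -n aio1_utility_container-7b75ef4b -- bash -c \
    "source /root/openrc && nova service-list"
```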

Operations and Advanced Topics

Adding Additional Ansible Roles to Openstack-Ansible

Adding additional Ansible roles is one of the best features of the Openstack-Ansible project! This is how to add/integrate Contrail/Opencontrail, Ironic, and other useful projects into the deployment. This can be done either on the initial run, or by simply rerunning the Ansible build playbooks after your infrastructure is already up; either way is perfectly fine. To read more, follow the link to the documentation entitled: Extending Openstack-Ansible.
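A hedged sketch of the simplest approach (the role name and URL below are placeholders, not real projects): drop an extra role alongside the built-in ones and re-run the build playbooks.

```shell
# Placeholder example: 'example_role' and its GitHub URL are hypothetical.
# Clone the extra role next to the built-in os_* roles.
cd /opt/openstack-ansible/playbooks
git clone https://github.com/example/ansible-example-role roles/example_role

# Wire the role into a play (or your own playbook), then re-run the build:
openstack-ansible setup-openstack.yml
```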

Instance Evacuation

Instances need to be available at all times, but what if hardware issues start to arise? Moving instances off of a failing compute node is called "Instance Evacuation", and it is documented here: Openstack Instance Evacuation.
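In short, a hedged sketch with the nova client (the UUID and hostname are placeholders; --on-shared-storage applies only if your instances live on shared storage):

```shell
# Placeholder names: rebuild an instance from a failed compute node onto
# another host. The source compute node must be down for this to work.
nova evacuate --on-shared-storage <instance-uuid> <target-compute-host>
```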

Live Migration

Live Migration is a feature similar to VMware's vMotion. It allows you to actively transfer an instance from one compute node to another with virtually no downtime. Configuration changes are required, as I found that Mirantis disables these features in their Kilo release (NOTE: I need to verify this wasn't an installation error). See the Openstack Live Migration documentation for further details.

Mirantis:

    root@node-titanic-88:~# cat /etc/nova/nova.conf | grep live
    #live_migration_retry_count=30
    #live_migration_uri=qemu+tcp://%s/system
    #live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED
    #live_migration_bandwidth=0
    root@node-titanic-88:~#

Openstack-Ansible:

    root@aio1_nova_api_os_compute_container-66728bd4:~# cat /etc/nova/nova.conf | grep live
     live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED"
     root@aio1_nova_api_os_compute_container-66728bd4:~#

RDO-Openstack:

    [root@galvatron ~]# cat /etc/nova/nova.conf | grep live
    live_migration_uri=qemu+tcp://nova@%s/system
    live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_TUNNELLED
    #live_migration_bandwidth=0
    #disable_libvirt_livesnapshot=true
    [root@galvatron ~]#
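Once the flags above are in place, triggering a migration with the nova client looks roughly like this hedged sketch (the UUID and hostname are placeholders; use --block-migrate when there is no shared storage):

```shell
# Placeholder names: move a running instance to another compute node.
nova live-migration <instance-uuid> <target-compute-host>

# Without shared storage, block migration copies the disk as well:
nova live-migration --block-migrate <instance-uuid> <target-compute-host>
```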

I'll be back later to document more, but for now this is a pretty solid start! Please email me if you have any questions or comments!

v1k0d3n / Brandon