September 13, 2014
by Dmitri Zimine
This post originally appeared September 12, 2014 on OpenSource.com.
In the cloud world, the mantra is "automate everything." It's no surprise that as OpenStack expands its scope, automation projects are emerging within it. But the variety and the sheer number of these projects are still surprising: there are over twenty!
This is the first part in a series of three articles surveying automation projects within OpenStack, explaining what they do, how they do it, and where they stand in development readiness and field usage. Some of these projects, like Mistral for workflow as a service (full disclosure: I help drive this project as CTO of StackStorm) and Compass for provisioning (from Huawei), are intended to help with non-OpenStack environments as well.
My goal in this series is to give you a high-level map, trigger your curiosity, and point you to where you can dig for more details.
First, let's clarify what it means to be "within OpenStack." A project typically moves from "related" to "incubated" to "integrated." Irrespective of which stage of acceptance they have reached, OpenStack projects are managed in a similar way. For every project I review in this survey, the code is written in Python, hosted on StackForge, and follows OpenStack structure and conventions; commit review runs through the common Gerrit/Jenkins/Zuul pipeline; and each project includes Tempest and DevStack integration. Project management is done on Launchpad, the docs live on a wiki, and open communication happens on the openstack-dev mailing list, among other channels. The bottom line is that you know an OpenStack project when you see it.
Here are the projects that I consider OpenStack automation projects. I split them into three categories and review each in turn. Today, in part one, I cover cloud deployment tools that let you install or update an OpenStack cloud on bare metal. In future articles, I will examine the automation of workload deployment (provisioning virtual machines, groups of VMs, and applications) and the automation of "day 2 management": the tools that keep the cloud and its workloads up and running.
Cloud deployment tools
With no further ado, let's look at cloud deployment tools. These tools deal with provisioning the components of OpenStack, that is, building an OpenStack cloud. Not surprisingly, they tend to be relatively mature and broadly used, since the first thing that needs to be automated is often the deployment of OpenStack itself.
Fuel
“The control plane for installing and managing OpenStack.”
Originally Mirantis' proprietary solution, Fuel is now open source and contributed to OpenStack. An orchestration layer on top of Puppet, MCollective, and Cobbler, Fuel codifies Mirantis' best practices for OpenStack deployment. Like other tools in this category, it handles hardware discovery, network verification, OS provisioning, and deployment of the OpenStack components. Fuel's distinguishing feature is a polished, easy-to-use web UI that makes OpenStack installation seem simple.
First released in 2013, it is now an OpenStack "related" project. We have seen Fuel in the field a lot. OpenStack newbies often choose Fuel for their proofs of concept, attracted by how easily it gets a cloud up and running. Mirantis' consultants have also brought Fuel into some large production deployments, and it is now part of the Mirantis OpenStack distribution, one of the leading distributions available. However, because Fuel is only "related," it is not fully upstream the way an integrated project would be, so you will likely not find it in non-Mirantis distros or in the OpenStack source itself.
Compass
"Compass is an open source project designed to provide 'deployment as a service' to a set of bare metal machines."
Yet another OpenStack deployment tool, Compass was developed by Huawei for its own needs and open sourced as an OpenStack-related project in January 2014. Its developers position Compass as a simple, extensible, data-driven deployment platform that is not limited to OpenStack. Through its plugin layer, it leverages other tools for hardware discovery, OS and hypervisor deployment, and configuration management.
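To illustrate the plugin idea, here is a purely hypothetical sketch of what a pluggable deployment backend might look like in Python. It is not Compass's actual API; the class and method names are invented solely to show how discovery, OS installation, and configuration management can be delegated to interchangeable backends.

    # Hypothetical plugin interface for a deployment service; NOT Compass's real API.
    # Each concern is delegated to a pluggable backend selected by configuration data.
    from abc import ABC, abstractmethod


    class DeploymentPlugin(ABC):
        """Illustrative base class for a pluggable deployment backend."""

        @abstractmethod
        def discover(self, network_cidr: str) -> list[str]:
            """Return addresses of bare-metal machines found on the given network."""

        @abstractmethod
        def install_os(self, host: str, image: str) -> None:
            """Install an operating system image on a discovered host."""

        @abstractmethod
        def configure(self, host: str, role: str) -> None:
            """Hand the host to a configuration-management tool for its role."""


    class CobblerChefBackend(DeploymentPlugin):
        """Hypothetical backend pairing Cobbler (OS install) with Chef (configuration)."""

        def discover(self, network_cidr: str) -> list[str]:
            return []  # e.g. scan the PXE network for newly booted machines

        def install_os(self, host: str, image: str) -> None:
            pass  # e.g. register the host with Cobbler and kick off provisioning

        def configure(self, host: str, role: str) -> None:
            pass  # e.g. assign a Chef run-list matching the host's OpenStack role

The point of such a layer is that the platform itself stays small and data-driven, while the tools doing the actual work can be swapped out per environment.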
Compass is a “related” project. While it is apparently mature enough for internal Huawei use, we have not seen it running outside of Huawei even though it is positioned as being useful beyond just OpenStack.
TripleO
TripleO installs, upgrades, and operates an OpenStack cloud using OpenStack's own cloud facilities. Yes, "it takes OpenStack to deploy OpenStack."
In essence, TripleO is a dedicated OpenStack installation, called the "undercloud," that is used to deploy other OpenStack clouds ("overclouds") on bare metal.
The desired overcloud configuration is described in a Heat template, and the deployment is orchestrated by Heat. The nodes are provisioned on bare metal through Nova's bare-metal driver (Ironic), which PXE-boots each machine and installs images containing the OpenStack components. The images are generated dynamically with diskimage-builder from image elements.
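For a sense of what the Heat-driven flow looks like from the undercloud side, here is a minimal Python sketch that asks Heat to create an overcloud stack via python-heatclient. The endpoint, token, template file, and parameter names are illustrative placeholders rather than anything TripleO itself ships; in practice TripleO wraps these calls in its own tooling.

    # Minimal sketch: asking the undercloud's Heat to create an "overcloud" stack.
    # HEAT_ENDPOINT, AUTH_TOKEN, the template file, and the parameter names are
    # illustrative placeholders, not values or names defined by TripleO.
    from heatclient import client as heat_client

    HEAT_ENDPOINT = "http://undercloud.example:8004/v1/TENANT_ID"  # placeholder Heat endpoint
    AUTH_TOKEN = "..."  # Keystone token obtained separately

    heat = heat_client.Client("1", endpoint=HEAT_ENDPOINT, token=AUTH_TOKEN)

    # The Heat template describes the desired overcloud; Heat orchestrates the rollout
    # and drives the bare-metal provisioning of each node.
    with open("overcloud.yaml") as f:
        template_body = f.read()

    heat.stacks.create(
        stack_name="overcloud",
        template=template_body,
        parameters={"controller_count": 1, "compute_count": 2},  # hypothetical parameters
    )

Updating or deleting that stack then updates or tears down the overcloud the same way any other Heat stack is managed.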
Operators get to use familiar OpenStack tools: Keystone authentication, the Horizon dashboard, and the Nova CLI, deploying and operating an OpenStack cloud on hardware just as they would deploy and operate a virtual environment.
TripleO targets ultra-large-scale deployments (its developers say small deployments are already well served by other tools) and continuous integration and deployment of multiple evolving OpenStack clouds.
TripleO is an officially "integrated" project. With the most traction in the OpenStack community and the backing of HP, Red Hat, and other substantial players, it has established itself as the long-term way to go. The readiness status of TripleO is puzzling: on one hand, it is used by HP Helion; on the other, the wiki and documentation state that it is "functional, but still evolving." I haven't seen it deployed in production yet, but this will likely change in the Kilo cycle (spring 2015).
Other tools
In addition to the OpenStack tools described above, there are many deployment tools outside our defined "OpenStack umbrella," notably Crowbar, the first OpenStack-specific deployment tool. An excellent in-depth comparison of the tools is available here.
Summary
Automating OpenStack bare-metal provisioning is a fairly well-solved problem. The only remaining challenge is to pick the tool you like best from a set of apparently good ones.
If you go down the path of purchasing support from an OpenStack distribution vendor (and there are many of them), the distribution will very likely include such a tool, and the vendor will of course use it to deliver the deployment quickly and efficiently.
I don't want to play favorites, but the scope and rapid progress of TripleO are particularly impressive.
It is still evolving, but the OpenStack community is converging around it, with sometime competitors like Red Hat and HP collaborating effectively. TripleO solves an important set of problems for operators who are serious about larger-scale deployments. We expect to see it used broadly among our users, who tend to be larger private and public cloud operators, whether SaaS companies, enterprises, or service providers.
Coming next: in part two, I'll cover OpenStack projects for automating workload deployment. I welcome and really appreciate your feedback in the comments below or on our Twitter account, @Stack_Storm. We are also hosting a meetup at the StackStorm offices on October 14 to discuss OpenStack automation. Join us and register here.