In my first post, I looked into what OpenStack is and how, done right, it can be a powerful ally in our cloud deployment strategies. In this post, I want to start looking at how we can create an application to learn the basics and components of the system.
To do this, we first need a development environment that we can access and push to. This is where DevStack will come to help.
DevStack is a set of scripts that can "quickly" (15-20 minutes vs 2 hours) and easily deploy a new OpenStack cloud to your environment. The default settings create the core components so you can get started right away, but configuration options can be added to customize your setup to meet your requirements.
By default, the environment doesn't contain all components that OpenStack provides. Instead, you're offered:
- Keystone (Identity Service)
- Glance (Image Service)
- Nova (Compute Service)
- Placement (Placement / HTTP API Service)
- Cinder (Block Storage)
- Neutron (Networking)
- Horizon (Dashboard Service)
OpenStack also provides a wealth of other components that can be used to provide additional resources. Some simplify infrastructure by providing IaaS solutions like Trove (Database as a Service) and Zaqar (Messaging/AMQP Service). We'll get into how we can customize our environment later as this requires creating some configuration files.
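As a taste of what that customization looks like, extra components are typically enabled through `local.conf` with `enable_plugin` and `enable_service` lines, and unwanted defaults can be turned off with `disable_service`. This is only a sketch: the plugin URL and branch below follow the usual pattern, but check the component's own documentation before relying on them.

```ini
[[local|localrc]]
# Drop a default component you don't need
disable_service horizon

# Pull in an optional component via its DevStack plugin
# (URL/branch shown as an example of the pattern)
enable_plugin zaqar https://opendev.org/openstack/zaqar stable/wallaby
```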
Before we even start installing, it's best to figure out where you want to install it. I'm always a fan of setting up a Virtual Machine for these purposes: it's easy to remove the whole environment and start fresh if things get messy, or to revert to a snapshot of a previous step in my setup. That decision is up to you (but also not really... use a VM). If you are considering installing on your local machine, know about these 3 locations:
- Space for the DevStack scripts and configurations (I'm choosing `devstack` in the home directory)
- `/opt/stack` for deployment configurations
- `/` for the executables of the components that serve OpenStack
NOTE: If you're using Windows (especially a version that doesn't support WSL2), you will be required to use a Virtual Machine since most of the components are built and tested against Ubuntu and other major distributions.
To ensure all steps complete with the fewest possible failures, I will set up a QEMU Virtual Machine running Ubuntu 20.04. I won't go through the steps to do this as it's well documented everywhere. Some settings I'd recommend, though:
- Memory: DevStack recommends 4GB, but I would recommend more if you can spare it.
- CPUs: 4 vCPUs are recommended, but you can push that to half of the vCPUs you have on your system.
- Disk: At least 100GB. I originally started with 30GB and quickly ran out of space since you'll need to load disk images and VM volumes within this space.
- Network: Use defaults, 1 interface.
- Packages: OpenSSH Server (recommended). This will make interacting with the system more efficient, but you can still do all necessary steps via the VM's console window.
Access the VM's console or SSH into your new environment and run the following commands to make sure your environment is up to date:
```shell
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
sudo reboot
```
After the system has rebooted, log back in and from your user's home directory, run the commands below to download and install the DevStack tools.
NOTE: As DevStack (and all other components) uses a branch-release model, you'll need to specify which release you want to install. At the time of writing, "Wallaby" is the stable release ("Xena" was also recently released). For this reason, we'll check out the `stable/wallaby` branch for this exercise.
The first step to setting up DevStack will be to clone the repository locally on the VM.
```shell
sudo apt-get install git
git clone https://opendev.org/openstack/devstack -b stable/wallaby
cd devstack
```
Next we'll need to configure how we want DevStack to deploy the environment. By doing this, we can start customizing our environment. This isn't strictly necessary, but highly suggested; setting the passwords also turns the install into a non-interactive process. Since we are deviating slightly from the defaults by adding Swift, we also need to add some recommended settings for its use. The other configurations are convenience options for our use.
```shell
cat > local.conf << 'EOF'
[[local|localrc]]
ADMIN_PASSWORD=sUp3rSe(RE7
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
DEST=/opt/stack
API_RATE_LIMIT=False
LOGDAYS=1
LOGFILE=$DEST/logs/stack.sh.log
SWIFT_HASH=$(echo $RANDOM | md5sum | head -c 30; echo;)
SWIFT_REPLICAS=1
enable_service s-account s-container s-object s-proxy
EOF
```

(Note the quoted `'EOF'`: it keeps the `$` references from being expanded by your current shell, so they are written to the file literally and resolved when DevStack sources it.)
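As an aside, the `SWIFT_HASH` line uses a small shell pipeline to generate a random 30-character salt for Swift. You can run the pipeline on its own to see what it produces:

```shell
# Pipe a random number through md5sum and keep the first 30 hex characters
echo $RANDOM | md5sum | head -c 30
```

Any stable random string works here; this pipeline is just a convenient one-liner.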
If you are using a VM, this is a good point to create a Snapshot, as the next step installs the whole environment. If you want to quickly re-create a new environment next time, revert back to this Snapshot and run the install script again for a fresh environment.
Once you're ready to start installing, run DevStack's `stack.sh` script from the `devstack` directory (`./stack.sh`). It downloads all of the OpenStack components to your system and installs their services.
This step will start configuring your system with everything it requires for OpenStack to run.
Steps After a Reboot
Although there are steps to re-build your last environment, DevStack environments were not designed to persist across reboots. It's usually easier to create a new environment.
The remainder of the installation WILL take some time, so maybe go get a snack or a meal.
Once the installation is complete, you should have access to a working OpenStack cloud. Open a browser to your VM's IP address and you'll be presented with OpenStack Horizon's login page. Use the password you set in the `local.conf` file with the `admin` user to log in and open up the Dashboard interface to your new cloud.
From the Dashboard, we can review some of the most important aspects of our new stack: Compute/Server Nodes, Storage Volumes, and Virtual Networks. Let's briefly review the areas of the stack and their functions starting from the bottom.
This is the Object Store (Swift), the component we told DevStack to install. We can use this section to define the containers that will be used by our applications.
For this review, we won't do much as we'll configure this later.
While for our first example, we won't be making many changes here, it's still a very important aspect to cover. The Networking component handles the virtual networks and traffic management within your applications. This is very important in cases where you want to set up multiple server nodes with the same process and have them load-balanced, or if you want to isolate some traffic while exposing others via another network. You can get into some really fancy configurations and the OpenStack Project Components page has some great examples of these.
All you need to know for now is that the Networking component is further broken down into 5 sections:
- Network Topology - Provides a visual reference of the available networks and how they relate to each other
- Networks - Manage available networks and their definitions (via subnets)
- Routers - Manage logical routers and define rules for their operations, including port forwarding and the networks and subnets they act on.
- Security Groups - Can be thought of as a virtual firewall for servers and resources within a project.
- Floating IPs - Allow assigning a static IP address to a resource, providing port-forwarding so it can be reached from outside its network.
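The same objects can also be managed from the command line with the `openstack` client. A rough sketch follows; the resource names (`demo-net`, `demo-router`, etc.) are just examples, and in DevStack you load credentials by sourcing the `openrc` file from the devstack directory:

```shell
# Load credentials created by DevStack (run from the devstack directory)
source openrc admin admin

# Create a network with a subnet, and a router to connect it
openstack network create demo-net
openstack subnet create --network demo-net --subnet-range 10.0.10.0/24 demo-subnet
openstack router create demo-router
openstack router add subnet demo-router demo-subnet

# Reserve a floating IP from the public network
openstack floating ip create public
```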
We will be covering Networking in a bit more detail in the next post.
This is your Block Storage component of the cloud. Any time you want to hold onto a disk image of your VM or want to spin up a disk resource for deploying applications, this is where you go. Block Storage is much simpler to understand than the other components but nothing would work without it.
The parts of this area include:
- Volumes - These are the disk images that you can work with. They can be assigned to a server node or left unattached.
- Snapshots - These are what you think they are: snapshots of any given volume at the moment the snapshot is taken. This is great for critical environments where a corrupt disk image would cause problems; you could restore the snapshot and re-start your server to recover what you can, or prevent the issue from occurring at all.
- Groups - This expands on the idea of Snapshots: you can define groups of Volumes that, on demand or at scheduled intervals, are snapshotted together for that moment in time. This area only defines the group definitions themselves, while...
- Group Snapshots - Contains the snapshots collected from the Group definitions.
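These operations have command-line equivalents as well. A hedged sketch with the `openstack` client, where `demo-vol` is just an example name:

```shell
# Create a 1 GB volume, snapshot it, and list both
openstack volume create --size 1 demo-vol
openstack volume snapshot create --volume demo-vol demo-vol-snap
openstack volume list
openstack volume snapshot list
```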
Servers created in the Compute component can easily be thought of as Virtual Machines/Servers and are the backbone of the whole operation. Any time you want to deploy an instance of an application, it will often be in one of these nodes. Want a self-built Web API service? Deploy it here. Want a traffic proxy with authentication? Deployed here. Want a PostgreSQL/MongoDB/Redis/other database, or a commercial server-side backend service? They would all be deployed here. These are miniaturized servers that focus on one or a few tasks for the larger application as a whole, much in the same way that Docker or Kubernetes would deploy an instance.
This is slightly different from other cloud platforms (like Google Cloud, Amazon, Azure, etc.) where they will spin up a "compute" node of just the code you want to run. What's actually happening in the background is that they are still spinning up a generic VM instance (or Docker container) with dedicated memory and network presence for that bit of code.
As there's a bit more complexity to what can be done with these instances, there's also a bit more involved with the dashboard environment as well.
- Overview - Performance dashboard of all nodes within your project. This also includes metrics from other components like Network and Volumes.
- Instances - Heart and soul of this component. These are the configured server nodes created for your project.
- Images - Much like Docker Hub images or pre-built base ISO images, these are the base images that you can refer to when deploying new Instances.
- Key Pairs - The SSH Public keys or certificates you want to insert in your new images (as long as you don't create a custom configuration). Very helpful if you want to quickly start a new instance using an OS with a default user account and no password. This will still allow you to SSH into that instance.
- Server Groups - This section defines dynamic rules that group servers automatically while also defining policies these servers will abide by. At this point, I won't get into policies, as a brief look at the documentation had me staring down a rabbit hole I knew would take too long to get out of.
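A few of these areas map directly to `openstack` CLI commands, which can be handy for a quick look without the Dashboard. A sketch (the key name is an example, and assumes you have an SSH key at the usual path):

```shell
# See what flavors and images are available before launching anything
openstack flavor list
openstack image list

# Register your SSH public key so new instances will accept it
openstack keypair create --public-key ~/.ssh/id_rsa.pub demo-key
```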
To finish off this post, let's create a blank image server which we can log into and interact with its command line. In the process, we'll use an existing Image to create a new Volume. Luckily, OpenStack does a great job at making this easy through the dashboard.
- From the dashboard, we'll go to the Compute section and select Instances.
- Select the "Launch Instance" button
- From here, we need to fill in multiple sections to get this running. Starting in the Details tab, set an Instance Name of your choice.
- Next, in the Source tab, select the `cirros` image. Cirros is the base image used by default for OpenStack, but we'll look at loading our own images later. To select the image, click on the up arrow next to the item to move it into the Allocated group.
- In the Flavor tab you will find the instance configurations that will provision your instance. Notice how many of these are similar to what you see on other popular cloud platforms. Choose the one you want; a small flavor is plenty for this exercise.
- Lastly, in the Networks tab, we need to specify which network this server will be connected to. For this exercise, the default network will do.

After selecting the network, click on Launch Instance. This will start a scheduled task that creates the VM and builds the disk volume using the Cirros image as its base.
From here, you can connect to the new instance's console.
- Click on the new instance's name,
- In the instance's detail window, select the Console tab to open the instance's remote terminal console.
- You should see the instance console pop up along with the username and password you should use. Log in with those credentials.
You should now have a console to a fully operating server.
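For reference, the same launch can be done from the command line. This is a sketch under the assumption that the image, flavor, and network names match your deployment (the exact cirros image name varies by release, so check `openstack image list` first):

```shell
# Launch an instance from the cirros image on the private network
openstack server create \
  --image cirros-0.5.2-x86_64-disk \
  --flavor m1.tiny \
  --network private \
  demo-server

# Watch it build, then grab a console URL
openstack server list
openstack console url show demo-server
```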
We've covered a lot of topics here, but we now have a working VM hosted within our OpenStack deployment. In the process:
- Took a brief look at the development environment of OpenStack
- Reviewed some of the components that normally come with it
- Toured around the dashboard component, Horizon
- Reviewed the managed components from the Dashboard including Networking (Neutron), Block Storage (Cinder), and Compute Nodes (Nova)
- Created our first instance
As mentioned in my introductory post, this is as much a learning exercise for myself as it is an attempt to make what I've learned accessible to anyone else. If you're familiar with OpenStack and have some insights that could help this documentation, please feel free to reach out!