
DevGraph

Posted on • Originally published at devgraph.com

ARM-Based Cloud Computing: Inexpensive and Fast

By Ritu Chaturvedi

Many of you have heard about home automation with a Raspberry Pi, or that the latest smartphones are smarter than some desktops. You may have been wondering why such tiny computers are not used on an industrial scale outside of portable gear.

They are. The market for computers built on alternative architectures is not restricted to devices for private use. Nowadays, it is possible to equip cloud computing facilities with such machines.

We are talking about ARM technology. You can use it for CI/CD on a corporate level but need to prepare your development routines for this transition.

An Introduction to ARM Machines

A Brief History of ARM

ARM is an acronym that has kept its meaning but changed the words behind it. Originally, it stood for Acorn RISC Machine. Acorn Computers Ltd. was a British manufacturer of microcomputers founded back in 1978. In 1990, after a few years of cooperative experimental projects with Apple, its processor division was spun off as a new firm: Arm Ltd.

Today, ARM stands for Advanced RISC Machines. 

Arm Ltd. seldom appears in the news. The reason is that Arm Ltd. mainly focuses on the development of the RISC architecture — we will come to that in a moment — and not on the end-user products. It sells licenses to other companies that manufacture computer processors and sell them to third parties or incorporate them into their own products. Raspberry Pis have ARM cores, as do iPads and a wide range of their tablet competitors.

By the way, the term “ARM core” is applied not only to CPUs manufactured under an Arm Ltd. license; it also covers chips, such as some Qualcomm processors, that were designed independently but implement the ARM instruction set.

What Is RISC?

RISC stands for reduced instruction set computer. To help you understand RISC and its hidden potential for cloud computing, we will introduce its counterpart directly: the complex instruction set computer, or CISC. CISC machines include basically all personal computers and most data center hardware. In other words, the CISC architecture is what we know as a usual computer.

What Is the Difference?

As we know, a CPU is an electronic circuit driven by an oscillating clock signal, and the instructions specified in a software program are executed in step with that clock. When you choose hardware for end users, you usually look at the number accompanied by MHz, standing for megahertz. Hertz measures the number of clock cycles, that is, how often the clock signal ticks within one second. We are accustomed to taking the higher number for the better.

This is not completely wrong. But CISC computers made the race for higher clock speed their main direction of development. A CISC processor packs several low-level operations into a single complex instruction, which can take multiple clock cycles to complete. RISC processors, on the contrary, aim to execute one simple instruction per cycle, so complex operations are broken down into sequences of simple instructions. RISC processors impose restrictions on the size of a single instruction, but they do not hunt for the highest clock speed.

Here comes the main difference between the two architectures.

RISC architecture aims to use simple hardware and simple instructions that need fewer clock cycles to execute. “Fewer” and “simpler” mean that you need fewer transistors and, consequently, less electric power to run an ARM-based machine.

Unfortunately, an ARM-based processor needs more memory, since breaking work into simple instructions makes programs longer and adds more work for compilers. In the early days, manufacturing memory components was difficult and expensive. That’s why the evolution of end-user computers took a different path, and RISC architecture had to wait decades for a new rise.

RISC vs CISC: Further Advantages and Disadvantages

You can run complex applications on simple processors — ARM-based ones — but such a machine will require a larger memory cache. 

In addition to that, two disadvantages have evolved historically. They are not drawbacks of ARM technology per se, but rather the result of the computer industry following a non-RISC path for years.

First, RISC remained a less popular niche for programmers. As a result, it is now more difficult to find a good developer who can write applications for ARM-based computers. Second, creating an application for the RISC architecture means more effort, since you need to transform complex instructions into simpler ones, and most current software was built around the complex instruction sets of the CPUs it runs on.

But, in general, RISC architecture means cheaper and smaller hardware components that run faster and consume less energy. Once you’ve managed to adjust your legacy code to the new type of hardware, you can count on a significant cost reduction.

ARM and Cloud Computing

Arm Ltd. has done its homework in the field of cloud computing and offers two convincing arguments for ARM-based data centers:

  • Single threading
  • Linear scalability

Single threading means that each thread gets a CPU core of its own: one thread maps to one core, with no shared cores that slow down performance. Since many web applications open one thread per request, and since most of them run in the cloud, ARM-based machines have a good chance of replacing their CISC rivals.
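
As a quick illustration (a sketch that assumes a Linux shell on a Graviton-based instance), you can verify the one-thread-per-core layout yourself:

    # On a Graviton-based instance each vCPU is a full physical core,
    # so lscpu should report a single thread per core (no simultaneous multithreading).
    lscpu | grep -E "Thread\(s\) per core|Core\(s\) per socket"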

Scalability was previously a curve-shaped thing: in the beginning, performance climbs quickly, but then it slows down and grows in steps. An ARM-based server scales up in a linear manner: the pace stays constant as you add capacity, so you do not need to run complex predictions or prognoses.

Load/Store Architecture

Apart from this, ARM-based machines seem to have overcome the memory problem by integrating the load/store architecture. 

During any computing operation, the mainstream CISC architecture directly accesses the memory, takes the data from it, changes it, and stores the result.

RISC architecture separates memory access operations from computing operations, turning each calculation into a sequence of simple steps: load, compute, store.

Small pieces of a processor’s fast temporary memory are called registers. Store instructions take data from a register and write it to main memory; load operations do the opposite: they access main memory and move data into a register.

Because calculations operate only on fast registers, and main memory is touched only through explicit loads and stores, the load/store architecture reduces the time spent waiting on memory.

The Future of Computing

RISC architecture can change the future of software development. A boom of ARM-based processors would promote applications that run close to the metal and directly access RAM and hardware registers. Such applications are written in low-level languages, such as the C family, whose constructs map closely onto the instructions the hardware actually executes.

Besides, ARM processors offer around 40% better price-performance. The cost-saving factor makes them particularly attractive.

The Modern Market of ARM Hardware for Cloud Computers

We hope to have made you highly enthusiastic about ARM-based (cloud) computing. By now, you may be asking yourself, “How do I get this magic hardware?” For building your own ARM-based data center, you can consider one of the following options.

Arm Limited Neoverse Family

We will briefly describe the official offering of Arm Ltd. As we mentioned before, it basically sells the blueprints that you implement in your own hardware design and then manufacture.

Neoverse is the family of Arm Ltd.’s systems on a chip (SoC). Unlike a traditional computer, where the CPU sits on a motherboard together with other important components on a circuit board, an SoC is an integrated circuit that already contains all those components: a CPU, storage, input/output ports, and sometimes a GPU.

The ARMv8-A architecture added 64-bit support, solving the old problem of incompatibility between ARM-based machines and 64-bit applications and operating systems. It also enhances calculations with integer and floating-point numbers through the Scalable Vector Extension (SVE), allows efficient memory partitioning and monitoring, and includes various security features.
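
If you want to check whether a particular ARM machine exposes SVE, one quick way (a sketch that assumes a 64-bit ARM Linux host) is to inspect the CPU feature flags:

    # List the CPU feature flags and look for "sve";
    # no output means this CPU does not report the Scalable Vector Extension.
    grep -m1 Features /proc/cpuinfo | tr ' ' '\n' | grep -x sve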

So far, Neoverse-based chips are reported to outperform comparable Intel chips by about 40%. Combined with lower power consumption and smaller sizes that reduce the physical facilities you need to rent, an ARM data center seems to promise a bright future for distributed computing.

Apple M1: Maybe

In November 2020, Apple announced its divorce from Intel and the beginning of a new era: it would replace all traditional chips with SoCs in its Mac machines within the next two years. Less than a year later, only three Mac models had undergone the transformation.

But a lot of software developers started to hope for the emergence of M1-based data centers. That would definitely eliminate the second challenge around ARM-based clouds.

The challenge is that you would not only need to take care of the manufacturing and installation of your hardware, but also optimize your existing applications so they run in your new ARM-based data center. With the M1 transition finished, applications could be developed on ARM machines and then deployed directly into an ARM-based cloud.

The Realistic Option: AWS Graviton Processor

Those who use Amazon Elastic Compute Cloud (Amazon EC2) can see Neoverse cores in action: they are implemented as AWS Graviton2 processors.

With this option, you simply subscribe to Amazon EC2 and use it as usual, but configure your instances to run on AWS Graviton. It is highly recommended for applications that can benefit from smaller but faster cores, including but not restricted to web applications, containerized microservices, extract-transform-load (ETL) pipelines, online games, video encoding applications, and high-performance computing in general.
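
As a rough sketch of what that configuration looks like with the AWS CLI (the AMI ID and key pair name below are placeholders, not real values), you pick an ARM-based instance type, such as one from the M6g family, together with an arm64 AMI:

    # List the EC2 instance types that run on 64-bit ARM (Graviton) processors.
    aws ec2 describe-instance-types \
      --filters "Name=processor-info.supported-architecture,Values=arm64" \
      --query "InstanceTypes[].InstanceType" --output text

    # Launch a Graviton2-based instance; replace the placeholder AMI ID and
    # key pair name with your own arm64 AMI and key pair.
    aws ec2 run-instances \
      --image-id ami-0123456789abcdef0 \
      --instance-type m6g.large \
      --key-name my-key-pair \
      --count 1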

AWS Graviton-based EC2 instances open the door to further cost savings. Their power allows you to run open-source databases, such as MySQL, MariaDB, and PostgreSQL. These are known to be memory-intensive but, like any open-source software, they are free to use. You can deploy them on AWS EC2 and let them run on ARM-based capacity.

AWS Graviton is not limited to memory-hungry technologies. You can use it for burstable general-purpose workloads as well as compute-intensive workloads, including real-time analytics, since, like any ARM-based platform, it scales quickly.

Indeed, if you decide to subscribe to an Amazon pay-as-you-go service, you will need to fit into the Amazon architecture and Amazon pricing model. We are not discouraging you from doing this. But we have a better alternative — or a compromise, depending on your needs. Before we explain it in detail, let us walk you through an important preparation step that lies between you and your dream ARM-based cloud. 

No worries, our alternative solution will include this preparation as well. We still want to show you the standard way of doing things.

How to Prepare for Leveraging This Technology

As with any cloud migration, you will need to make some adjustments to your existing applications before you can move them into the cloud. A new ARM-based environment requires even more fundamental changes, since the very CPU architecture of ARM-based servers is different.

But we do not mean that these changes are beyond your reach. You need to ensure that your applications will be compatible with the new environment, and you do not need to rebuild them completely. Instead, use containerization technology.

Your First Container for ARM: A Short Overview of the Steps

Containers are virtual envelopes for software applications. A container includes the application source code and a minimal set of environmental elements, such as libraries and selected operating system components, that allow the application to run under any operating system and in the cloud, making it OS-agnostic.

Docker is the most popular tool for creating containers. In 2019, Docker and Arm established a partnership to help their users move containerized applications to ARM-enabled platforms. Since then, Docker has implemented a multi-architecture image feature that allows you to build container images deployable on any CPU architecture.

The whole procedure comes down to a single build command, provided that you have Docker installed. To generate an image that is compatible with the ARM architecture, use the Docker Buildx plug-in together with the build command and specify which platforms you need it for. Buildx must be set as the default builder; the procedure is described in more detail in the Docker documentation.
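
Here is a minimal sketch of that workflow (the image name myregistry/myapp is a placeholder, and the final check assumes QEMU emulation is available, as it is with Docker Desktop):

    # One-time setup: create a Buildx builder and make it the default.
    docker buildx create --name multiarch --use

    # Build the image for 64-bit ARM and x86_64 from the same Dockerfile
    # and push the resulting multi-architecture image to your registry.
    docker buildx build --platform linux/arm64,linux/amd64 \
      -t myregistry/myapp:latest --push .

    # Optional sanity check: run an ARM image under emulation on an x86 machine.
    docker run --rm --platform linux/arm64 alpine uname -m   # prints aarch64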

The application source and its Dockerfile typically live in a GitHub repository, which you first need to download to your local machine; in GitHub language, you clone the repository.
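
For completeness, the cloning step is a single command (the repository URL is a placeholder for your own project):

    # Fetch the application source and its Dockerfile to the local machine.
    git clone https://github.com/your-org/your-app.git
    cd your-app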

Indeed, when you have many applications or small microservices, this starts to look like a lot of manual work, prone to the kind of human error that you cannot allow in an enterprise environment.

Now, let us present the alternative we were talking about before. You can deploy your containerized applications on AWS with a Platform-as-a-Service solution: Engine Yard Kontainers.

Deploy and Manage Your AWS Applications

Engine Yard is more than just a platform. We are a team that can help you to perform a transition to the new ARM-based Amazon EC2 infrastructure. At Engine Yard, we assist you in migrating your applications to containers and then to the ARM-based cloud. We share with you predefined templates and detailed documentation. If there are any questions left, you can always approach our expert team.

Deployments happen with a simple Git push and take only minutes to complete. You can fine-tune the regions you deploy to and have full control over memory and CPU usage.
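
A deployment then looks roughly like this (a hypothetical sketch: the remote name and URL come from your own Engine Yard setup, so treat both as placeholders):

    # Register the deployment remote once, then push to deploy.
    git remote add engineyard <your-engine-yard-remote-url>
    git push engineyard main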

With Engine Yard’s log aggregation feature, you can monitor your failing applications from a single universal cockpit. We enable you to dig into failures and anomalies in resource usage and to set up notifications so you can act fast on any trouble.

We can also help you in moving your databases into AWS services. Engine Yard offers automated backup and recovery for any kind of database.

Once the migration is done, you can manage your AWS instances yourself or outsource this task to our team. You can create a clone or copy of your environments to separate production and development stages. 

Pricing Advantages: Transparency and Full Control

Although ARM-based cloud computing is generally cheaper, we understand that you still want to have a clear overview of your costs. With Engine Yard, you not only customize your working environments but can also set business rules that allow you to stop worrying about provisioning going out of control. You can scale up your applications without skyrocketing your costs. 

Conclusion

In this article, we tried to touch on the long story of two types of CPU architectures — RISC and CISC — and how the potential of the former remained dormant for years but is rapidly unfolding now. 

Reduced instruction set computers are back on stage. This is an inspiring trend in cloud computing that promises radical cost reduction and better computing power. 

However, the underlying differences between the two architecture types would require you to modify your applications before moving them into an ARM-based cloud. An easier way to prepare your applications is to containerize them. Containerization preserves the core functionalities and makes your apps OS- and CPU-architecture agnostic.

For a secure transition, you may need a deployment management platform that is simple to operate and allows you to monitor your cloud costs.

With Engine Yard, you can move your applications into the ARM-based Amazon cloud and deploy and manage newly built applications with a simple PaaS solution. 

To learn more about ARM support and performance benefits, reach out to Arm Developer support or the Arm Community.
