The AlmaLinux OS Foundation has announced support for AlmaLinux on the IBM Z (s390x) platform. Aside from ticking all the boxes and locking in parity with RHEL, why has the project made such a commitment in an era of managed services and cloud-native platforms?
There is a misconception that mainframes are relics of the past. Frequently, they are portrayed as monstrous systems, bristling with dusty old hardware residing in cabinets reminiscent of the black monolith from Kubrick’s 2001: A Space Odyssey. The reality is neither as nostalgic nor as dusty.
Modern mainframes, mainly from IBM, which dominates the market, run applications that we use every day. The best-known use case is processing the millions of ATM withdrawals and credit card transactions the world makes each day, but mainframes also handle other financial transactions, customer order processing, production and inventory control, and payroll.
The Rise of Linux
Without the large groundswell of support for Linux on IBM Z (as it came to be called) and IBM’s early hardware support for Linux, it’s unlikely AlmaLinux would be in a position to announce a new Linux distribution for mainframes. It’s also the level of Linux adoption among the majority of large mainframe enterprises that makes AlmaLinux’s support a good decision.
The history of Linux within the mainframe world is longer than you might expect. After an initial proof of concept, IBM officially began supporting Linux on its mainframes in 2000. IBM went on to develop the IFL (Integrated Facility for Linux) processor, which runs Linux workloads exclusively. This allows Linux workloads to run on the same systems as traditional mainframe operating systems without affecting the cost of those workloads.
While vendors launched commercial Linux distributions and support packages for the platform, such as Red Hat Enterprise Linux, SUSE Linux Enterprise, and, more recently, Ubuntu, the mainframe lacked an open source community edition. Unlike users of most other platforms, the mainframe community did not have a downloadable open source Linux operating system they could experiment with and build appliances on without any ties to a commercial offering.
One strong contributor and advocate for Linux and open source on the mainframe, Sine Nomine Associates, an IT research and custom development engineering firm, observed this gap and approached the Community ENTerprise Operating System (CentOS) project, which developed and supported a functionally compatible Linux distribution based on upstream Red Hat Enterprise Linux. CentOS had struggled to support mainframes itself because of the significant hardware investment required, so Sine Nomine forked and compiled its own distribution, called ClefOS. Other Linux distributions also became active in the IBM Z space, such as Debian, openSUSE, and Slackware.
Because of the mainframe’s core design values of compatibility, reliability, availability, scalability, and security, it isn’t simply legacy tech that’s reluctantly supported by enterprises. Mainframes are a crucial part of many digital transformation and modernization programs. This means that Linux on the mainframe, whether supported by commercial contracts or running community editions like AlmaLinux, often sits at the core of IT organizations. Beyond banking, industries such as finance, health care, insurance, retail, hospitality, utilities, and government, and essentially any industry hosting mission-critical applications, share the same demands: zero downtime and high throughput while running at close to peak utilization 24/7.
AlmaLinux came to the attention of Sine Nomine when Red Hat announced the end of support for CentOS 8 and the move to CentOS Stream. The firm has been instrumental in bringing other open source software, such as PostgreSQL, to the platform and wanted to continue supporting a community edition. It needed a CentOS alternative with 1:1 binary compatibility with RHEL, and AlmaLinux was the perfect fit. The project and the AlmaLinux OS Foundation matched Sine Nomine’s open source philosophy, and AlmaLinux has gone on to partner with Sine Nomine to provide resources, share knowledge, and ensure users have the support they might need.
To understand why mainframes remain relevant, it helps to understand their architecture. Mainframes pack processors (up to 190 independently configurable cores on an IBM z15 system), up to 40TB of Redundant Array of Independent Memory (RAIM), and plenty of input/output (I/O) adapters into a single box. They are designed to be as reliable as possible while handling high volumes of I/O. Their architecture maximizes throughput, predominantly for high volumes of transaction processing and batch workloads, in a very secure fashion.
To optimize performance, a mainframe is equipped with specialty cards for network, storage, compression, and cryptography. Each card has its own processor and memory.
Each mainframe central processing unit (CPU) or core can also be defined as an IBM z Integrated Information Processor (zIIP), designed to offload Java and database work, along with Linux workloads when running within z/OS Container Extensions (zCX).
Additionally, mainframe cores include System Assist Processors (SAPs), which act as highly efficient traffic controllers and health checkers, ensuring high availability. SAPs speed up data transfers between the operating system and I/O devices. If an SAP sees a failure, including a CPU failure, it will automatically swap over to one of the spare CPUs and call home, allowing IBM to work with the customer to make sure there is not a larger problem that needs to be addressed.
The primary benefits of this hardware approach are that a mainframe can tackle complex workloads entirely in memory, rather than spreading them across multiple x86 servers, and can run at 100% capacity without degradation. Pushed beyond 100%, a mainframe may slow down, but it won’t freeze when it encounters a heavily I/O-bound workload, such as database work, as would likely happen on x86 architecture. Part of the reason for this is that the central processors offload I/O work to the dedicated processors in the I/O subsystem.
The CPUs only run cycles associated with the parts of your application that require computing power. In contrast, as Kurt Acker, Principal IT Architect at Sine Nomine Associates, says: “You can’t control what each chip does on a UNIX or x86 box. A chip will perform work assigned to it by the OS, and, although hypervisors enable you to carve up those resources, it isn’t to the same extent as mainframes.”
IBM’s z15 introduced encryption everywhere by moving support for this functionality onto its chips. During the entire processing of a workload, the transaction and all data can automatically be fully encrypted.
For a platform that’s frequently labeled a ‘dinosaur,’ mainframes have not only avoided a mass extinction event but continued to thrive: IBM reported that, as of Q4 2020, 67 of the Fortune 100 use the mainframe.
Most notably in banking, mainframe adoption rests at a staggering 90% among the top 50 banks, which benefit from its reliability and built-in redundancies. IBM also says that four of the top five airlines rely on mainframes for ticketing, thanks to the IBM z/Transaction Processing Facility (z/TPF) operating system’s ability to handle high-volume transactions in real time.
Since the introduction of IBM’s System/360 in 1964, each new ‘Big Iron’ platform has advanced the mainframe’s technological capabilities. The new IBM mainframe that arrived in the first half of 2022 reflects this roadmap with its Telum processor, which packs in 22.5 billion transistors. The Telum chip not only enables organizations to run more transaction processing and batch processing workloads, reducing the unit cost of infrastructure and lowering the cost per transaction, but also adds scalable, very low latency AI inference to the mainframe’s capabilities.
Mainframes: playing to their strengths
When assessing which platform to use, whether cloud or mainframe, each needs to be viewed objectively. Each has unique strengths, and those strengths factored into the decision to support AlmaLinux OS on IBM Z.
Linux is Linux: One positive of AlmaLinux supporting Linux on mainframes is that it’s another opportunity to help extend Linux’s reach and solidify its position. It also means organizations can leverage that dominance: it is much easier to onboard a new mainframe operator when they can use familiar Linux commands. The key difference is understanding what resources are behind a system when it runs on shared architecture, but this is true for any hypervisor environment. While a new mainframe operator from a Linux background learns to adjust to the performance differences, they can still ssh in and find an environment that feels just like Linux on x86, as the sketch below illustrates.
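As a trivial illustration, here is a minimal Python sketch, using nothing beyond the standard library, of the only portable signal that you have landed on a mainframe guest rather than an x86 box: the architecture string.

```python
import platform

# The same standard library calls work unchanged on any Linux system;
# the only visible difference on a mainframe guest is the machine string.
system = platform.system()    # "Linux" on x86_64 and on IBM Z alike
machine = platform.machine()  # "x86_64" on a PC, "s390x" on IBM Z

print(f"Running {system} on {machine}")
if machine == "s390x":
    print("Mainframe guest: same commands, same APIs, different iron.")
```

Everything else, from package management to shell tooling, behaves exactly as it does on any other AlmaLinux install.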
Strong security: Because they are designed with built-in mechanisms for zero trust, users can extend secure services running on a Linux mainframe so that the cloud benefits too. For example, workloads that you can’t and wouldn’t want to run on a mainframe, such as office productivity services, can have their security controls managed and pushed out from a centralized mainframe.
Linux co-located with data works: Linux on the mainframe makes a lot of sense for specific use cases because it enables co-location with the data. A user can have their chosen database on a mainframe and have frontend services communicating with it. Underneath the covers, a mainframe uses its internal circuitry to pass those messages back and forth swiftly. For example, a mainframe’s OSA card, the equivalent of a network card but with a much larger cache, can control inbound data flow to the CPU and (as part of the I/O subsystem) allow the system to work efficiently. Essentially, once a trusted message comes into your system, you can completely process everything that’s within your control before you send it back out.
For example, when tasked with developing a reservation system, Sine Nomine used MongoDB to front-end the system and tested the service on both an x86 cloud setup and a mainframe. “The x86 cloud setup achieved 3,000 responses per second, but it was IO-bound, became overloaded, and died with 100% loss of information as it couldn’t process it,” says Kurt Acker. In contrast, Acker says, “we achieved almost 9,000 responses per second, on a 1Gb OSA card. And it didn’t die. It just slowed down.”
What this demonstrates is not the poor performance of x86 systems, but the limitations of different systems, their architectures, and how they operate under load.
Total costs and consolidation
Mainframes are often seen as expensive, and they can be, but it’s a question of scale and doing the math. Going back to the MongoDB example, users are charged not by the CPU but by the memory used. The x86 cloud setup required data sharding and at least three additional nodes to handle the volume of data being processed, plus at least two or three backup servers to ensure high availability. On the mainframe, only one live node was required: no data sharding was needed, as the system was able to process everything with a single server and two backup servers for high availability. Essentially, at least three times the amount of memory is needed to process the same amount of data reliably in an x86 environment, and that only accounts for production. A rough sketch of the arithmetic follows below.
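Here is a back-of-envelope sketch in Python of one plausible reading of the setup described above: three sharded x86 data nodes, each with two backup replicas, versus a single mainframe node with two backups. The per-node memory figure is a hypothetical placeholder; only the ratio matters.

```python
MEM_PER_NODE_GB = 64  # hypothetical working-set memory per node

# x86 cloud: three sharded data nodes, each paired with two backup
# replicas for high availability.
x86_nodes = 3 * (1 + 2)

# Mainframe: one live node plus two backups, no sharding required.
mainframe_nodes = 1 + 2

x86_mem = x86_nodes * MEM_PER_NODE_GB
mf_mem = mainframe_nodes * MEM_PER_NODE_GB

print(f"x86 memory:       {x86_mem} GB across {x86_nodes} nodes")
print(f"mainframe memory: {mf_mem} GB across {mainframe_nodes} nodes")
print(f"ratio: {x86_mem // mf_mem}x")  # 3x, matching the claim above
```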
Of course, the math doesn’t always turn in a mainframe’s favor; if you are running fewer than 100 servers, it would make more sense to run them in the cloud. However, once you reach hundreds or thousands of servers, consolidation becomes a key concern. Generally, a mainframe will run with 50% lower electricity costs than the cloud, and around 75% less floor space, for workloads at scale. With a zero-carbon footprint now a goal for many enterprises, continued server sprawl isn’t sustainable.
Core consolidation is often overlooked as well. For software that charges by the CPU, for instance, you may be able to achieve consolidation ratios as high as 20:1 compared to x86 systems. A quick sketch of what that does to licensing costs follows below.
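Illustrative licensing arithmetic in Python, assuming the 20:1 ratio above holds; the core count and per-core price are hypothetical placeholders.

```python
PRICE_PER_CORE = 10_000  # hypothetical annual license cost per core
X86_CORES = 200          # hypothetical core count needed on x86

ifl_cores = X86_CORES // 20  # same workload consolidated 20:1 onto IFLs

print(f"x86 licensing: ${X86_CORES * PRICE_PER_CORE:,}")  # $2,000,000
print(f"IFL licensing: ${ifl_cores * PRICE_PER_CORE:,}")  # $100,000
```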
Good code can run anywhere: There’s a misconception that mainframes require extensive rewriting work, but there is interchangeability between IBM Z architecture and open systems. The caveat is that code must adhere to coding standards. As long as code avoids platform-specific dependencies, it is extremely portable and can be recompiled and moved anywhere. One easily overlooked platform-specific item is byte order, illustrated below.
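IBM Z is big-endian where x86 is little-endian, so code that serializes data in native byte order is quietly platform-specific. A minimal Python sketch of the difference, and of the portable fix:

```python
import struct
import sys

value = 0x01020304

# Native byte order: the bytes differ between x86_64 and s390x.
native = struct.pack("I", value)

# Explicit big-endian order: identical bytes on every platform.
portable = struct.pack(">I", value)

print(f"byte order here: {sys.byteorder}")
print(f"native bytes:    {native.hex()}")   # "04030201" on x86_64
print(f"portable bytes:  {portable.hex()}") # always "01020304"
```

The same discipline, fixing byte order, sizes, and alignment explicitly rather than inheriting them from the host, is what keeps code movable between architectures.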
When trying to modernize legacy applications, such as COBOL programs, you may want to build a new frontend that is cloud compatible. However, building it on the mainframe platform, where the app stays close to the data, makes sense: it’s in your control, it’s secure, and you can understand the environment and how well it works. With this approach, you can still decide to move the app to the cloud later, as long as you are careful about how the frontend services and database access are developed. Don’t let the speed of light interfere with your performance expectations; a quick probe of that round-trip cost appears below.
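To make the speed-of-light point concrete, here is a minimal Python probe that times a TCP connect to a service endpoint; the hostname and port are hypothetical placeholders. Every kilometer between a frontend and its database adds irreducible microseconds to each round trip, which co-location eliminates.

```python
import socket
import time

HOST, PORT = "db.example.internal", 5432  # hypothetical endpoint

# Time the TCP handshake: one full network round trip to the service.
start = time.perf_counter()
with socket.create_connection((HOST, PORT), timeout=5):
    elapsed_ms = (time.perf_counter() - start) * 1000

print(f"TCP connect to {HOST}:{PORT} took {elapsed_ms:.2f} ms")
```

Run the same probe from a co-located guest and from a distant cloud region, and the gap in connect times is a fair preview of the per-query latency your frontend will inherit.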
While some sections of the tech industry would have you believe that mainframes have been relegated to a conversation piece whenever Netflix decides to option films like Sneakers, there are many credible reasons to use mainframe architecture and support Linux on the mainframe. As Acker says, it is a question of your list of requirements: “Do you need high availability and security? Do you understand the workloads that you have, and where they can and should run?” If you need high availability for an application running on x86, you need huge amounts of clustering; on a mainframe, you may find you can achieve high availability from the hardware itself, so it becomes a question of selecting the right architecture for your workloads. “In the case of mainframes, workloads that consolidate well are one example. Java workloads themselves can benefit from pause-less garbage collection, which works in real time with your system,” says Acker.
Ultimately, why we at AlmaLinux have invested in supporting the architecture boils down to a goal set out for the project from the outset. AlmaLinux wanted to make an open source Linux distribution that is accessible to as many people as possible, and that benefits many different communities. This has led the project to establish initiatives, such as providing live media for individual users, making HPC more accessible to the science community, and, in partnership with Sine Nomine Associates, enabling operators to run Linux on their mainframes.