OpenShift Installation Using UPI on VMware vSphere Machines

Table of Contents

  1. Introduction
  2. Prerequisites
  3. Required Hardware Specifications
  4. User-Provisioned DNS Requirements
  5. Load Balancing Requirements for User-Provisioned Infrastructure
  6. Validating DNS Resolution
  7. Generating a Key Pair for SSH Access
  8. Obtaining the Installation Program
  9. Installing the OpenShift CLI
  10. Downloading the RHCOS Images
  11. Creating the Installation Configuration File
  12. Creating the Kubernetes Manifest and Ignition Config Files
  13. Creating a Web Server on the Installation VM or a Separate VM
  14. Copying the Ignition Files to the Web Server
  15. Boot ISO Image Preparation Using Custom ISO and Ansible Automation
  16. Logging in to the Cluster
  17. Conclusion

Introduction

Red Hat OpenShift is a powerful Kubernetes platform for automating the deployment, scaling, and management of containerized applications. This guide provides a step-by-step process to install OpenShift v4.14 on a vSphere environment. By following these instructions, you can set up a robust and scalable infrastructure for your containerized applications.

Prerequisites

Before you begin the installation, ensure you have the following:

  • Necessary hardware specifications met.
  • Access to the vSphere environment to upload the ISO file.
  • Red Hat OpenShift Cluster Manager account to download the pull secret.
  • Necessary DNS and load balancing configurations.
  • SSH access to the cluster nodes.
  • HAProxy installed and configured on the load balancer node.

Required Hardware Specifications

For an OpenShift installation on VMware using user-provisioned infrastructure (UPI), the typical node/VM requirements are as follows:

Bootstrap Node:

Count: 1
Purpose: Used for the initial bootstrap process of the OpenShift cluster. It is removed after the installation is complete.
Requirements:
CPU: 4 vCPUs
Memory: 16 GB RAM
Storage: 120 GB

Control Plane (Master) Nodes:

Count: 3 (Recommended for high availability)
Purpose: Manage the OpenShift cluster and its components.
Requirements:
CPU: 4 vCPUs
Memory: 16 GB RAM
Storage: 120 GB

Compute (Worker) Nodes:

Count: At least 2 (Can be scaled based on workload requirements)
Purpose: Run the user workloads (applications).
Requirements:
CPU: 2-4 vCPUs (Depending on the workload)
Memory: 8-16 GB RAM (Depending on the workload)
Storage: 120 GB

Infrastructure Nodes (Optional):

Count: As needed (Typically 2-3 for large clusters)
Purpose: Dedicated to running infrastructure-related workloads like registry, monitoring, logging, etc.
Requirements:
CPU: 4 vCPUs
Memory: 16 GB RAM
Storage: 120 GB

Example for a Basic Setup:

Bootstrap Node: 1
Control Plane Nodes: 3
Compute Nodes: 2
Summary of VMs:
Total Nodes: 6
Total vCPUs: 20-24
Total Memory: 80-96 GB
Total Storage: 720 GB

Example of a Production Setup:

For a production-grade installation, hardware along the following lines is recommended:

| Machine | Memory | CPU | Storage | Notes | OS |
| --- | --- | --- | --- | --- | --- |
| Bootstrap | 16 GB | 4 | 50 GB | Used only during installation | RHCOS |
| Helper Machine | 8 GB | 4 | 50 GB | Web server for deployment purposes | Ubuntu |
| HAProxy Node #1 | 16 GB | 4 | 50 GB | Load balancer node | Ubuntu |
| HAProxy Node #2 | 16 GB | 4 | 50 GB | Load balancer node | Ubuntu |
| Controller Node #1 | 32 GB | 16 | 120 GB | | RHCOS |
| Controller Node #2 | 32 GB | 16 | 120 GB | | RHCOS |
| Controller Node #3 | 32 GB | 16 | 120 GB | | RHCOS |
| Worker Node #1 | 64 GB | 16 | 120 GB | | RHCOS |
| Worker Node #2 | 64 GB | 16 | 120 GB | | RHCOS |
| Worker Node #3 | 64 GB | 16 | 120 GB | | RHCOS |
| Worker Node #4 | 64 GB | 16 | 120 GB | | RHCOS |
| Worker Node #5 | 64 GB | 16 | 120 GB | | RHCOS |
| Infra Node #1 | 32 GB | 10 | 120 GB + 2 TB (HDD/SSD) | | RHCOS |
| Infra Node #2 | 32 GB | 10 | 120 GB + 2 TB (HDD/SSD) | | RHCOS |
| Infra Node #3 | 32 GB | 10 | 120 GB + 2 TB (HDD/SSD) | | RHCOS |

User-Provisioned DNS Requirements

Ensure that your DNS and hostnames are configured as follows:

| DNS/Hostname | IP | Description |
| --- | --- | --- |
| api.openshift.onlinecluster.com | 174.168.68.57 | F5 VIP pointing to the internal HAProxy load balancers |
| api-int.openshift.onlinecluster.com | 174.168.68.57 | F5 VIP pointing to the internal HAProxy load balancers |
| *.apps.openshift.onlinecluster.com | 174.168.68.57 | F5 VIP pointing to the internal HAProxy load balancers |
| oocbs01.openshift.onlinecluster.com | 174.168.95.70 | Bootstrap VM IP |
| ooccn01.openshift.onlinecluster.com | 174.168.95.74 | Master node VM IP |
| ooccn02.openshift.onlinecluster.com | 174.168.95.75 | Master node VM IP |
| ooccn03.openshift.onlinecluster.com | 174.168.95.76 | Master node VM IP |
| oocinfra01.openshift.onlinecluster.com | 174.168.95.77 | Worker node (infra role) IP |
| oocinfra02.openshift.onlinecluster.com | 174.168.95.78 | Worker node (infra role) IP |
| oocinfra03.openshift.onlinecluster.com | 174.168.95.79 | Worker node (infra role) IP |
| oocwn01.openshift.onlinecluster.com | 174.168.95.80 | Worker node IP |
| oocwn02.openshift.onlinecluster.com | 174.168.95.81 | Worker node IP |
| oocwn03.openshift.onlinecluster.com | 174.168.95.82 | Worker node IP |
| oocwn04.openshift.onlinecluster.com | 174.168.95.83 | Worker node IP |
| oocwn05.openshift.onlinecluster.com | 174.168.95.84 | Worker node IP |

(The per-node IPs here match the inventory.yaml used for ISO generation later in this guide.)

Load Balancing Requirements for User-Provisioned Infrastructure

Proper load balancing is essential for high availability and scalability. Here are the load balancer configurations:

API Load Balancer

Install HAProxy on your HAProxy node using the following commands:

sudo apt update && sudo apt upgrade
sudo apt install haproxy

Edit the HAProxy configuration file:

sudo nano /etc/haproxy/haproxy.cfg

Add the following configuration:

global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend api
    bind 174.168.95.72:6443
    default_backend api
    mode tcp

frontend machine-api
    bind 174.168.95.72:22623
    default_backend machine-api
    mode tcp

backend api
    balance roundrobin
    mode tcp
    server OOCCN01 ooccn01.openshift.onlinecluster.com:6443 check
    server OOCCN02 ooccn02.openshift.onlinecluster.com:6443 check
    server OOCCN03 ooccn03.openshift.onlinecluster.com:6443 check
    server OOCBS01 oocbs01.openshift.onlinecluster.com:6443 check

backend machine-api
    balance roundrobin
    mode tcp
    server OOCCN01 ooccn01.openshift.onlinecluster.com:22623 check
    server OOCCN02 ooccn02.openshift.onlinecluster.com:22623 check
    server OOCCN03 ooccn03.openshift.onlinecluster.com:22623 check
    server OOCBS01 oocbs01.openshift.onlinecluster.com:22623 check

Ingress Load Balancer

Edit the HAProxy configuration file:

sudo nano /etc/haproxy/haproxy.cfg

Add the following configuration:

frontend ingress-http
    bind 174.168.95.72:80
    default_backend ingress-http
    mode tcp

frontend ingress-https
    bind 174.168.95.72:443
    default_backend ingress-https
    mode tcp

backend ingress-http
    balance roundrobin
    mode tcp
    server OOCWN01 oocwn01.openshift.onlinecluster.com:80 check
    server OOCWN02 oocwn02.openshift.onlinecluster.com:80 check
    server OOCWN03 oocwn03.openshift.onlinecluster.com:80 check
    server OOCWN04 oocwn04.openshift.onlinecluster.com:80 check
    server OOCWN05 oocwn05.openshift.onlinecluster.com:80 check

backend ingress-https
    balance roundrobin
    mode tcp
    server OOCWN01 oocwn01.openshift.onlinecluster.com:443 check
    server OOCWN02 oocwn02.openshift.onlinecluster.com:443 check
    server OOCWN03 oocwn03.openshift.onlinecluster.com:443 check
    server OOCWN04 oocwn04.openshift.onlinecluster.com:443 check
    server OOCWN05 oocwn05.openshift.onlinecluster.com:443 check
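Before relying on the load balancer, validate the configuration syntax and restart HAProxy. A minimal check:

# Validate the configuration file syntax
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
# Enable HAProxy on boot and apply the new configuration
sudo systemctl enable haproxy
sudo systemctl restart haproxy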

Validating DNS Resolution

Ensure that DNS is properly resolving. Use the following commands to validate:

dig +noall +answer @174.168.1.1 api.openshift.onlinecluster.com
api.openshift.onlinecluster.com.   3600    IN      A       174.168.68.57

dig +noall +answer @174.168.1.1 test.apps.openshift.onlinecluster.com
test.apps.openshift.onlinecluster.com. 3600 IN A 174.168.68.57

dig +noall +answer @174.168.1.1 oocbs01.openshift.onlinecluster.com
oocbs01.openshift.onlinecluster.com. 3600 IN A 174.168.95.70

dig +noall +answer @174.168.1.1 -x 174.168.68.57
57.68.168.174.in-addr.arpa. 3600 IN     PTR     api-int.openshift.onlinecluster.com.
57.68.168.174.in-addr.arpa. 3600 IN     PTR     api.openshift.onlinecluster.com.
57.68.168.174.in-addr.arpa. 3600 IN     PTR     *.apps.openshift.onlinecluster.com.

The test.apps query exercises the wildcard record; any name under *.apps should resolve to the VIP.
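To check every node record in one pass, you can loop over the hostnames from the DNS table above (assuming 174.168.1.1 is your DNS server):

# Query each cluster node's A record against the DNS server
for host in oocbs01 ooccn01 ooccn02 ooccn03 oocinfra01 oocinfra02 oocinfra03 oocwn01 oocwn02 oocwn03 oocwn04 oocwn05; do
  dig +noall +answer @174.168.1.1 "${host}.openshift.onlinecluster.com"
done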

Generating a Key Pair for SSH Access

Generate an SSH key pair for access to the nodes:

ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa

This command generates a public and private key pair. The public key (~/.ssh/id_rsa.pub) is distributed to the nodes through the sshKey field of install-config.yaml, created later in this guide.
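If you want passwordless SSH to the nodes for debugging, you can also load the key into your local ssh-agent:

# Start the agent and load the private key
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa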

Obtaining the Installation Program

To download the OpenShift installer:

export OCP_RELEASE=4.14.0
export RELEASE_IMAGE=quay.io/openshift-release-dev/ocp-release:$OCP_RELEASE-x86_64
curl -o openshift-install-linux.tar.gz https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_RELEASE/openshift-install-linux-$OCP_RELEASE.tar.gz
sudo tar -zxvf openshift-install-linux.tar.gz -C /usr/local/bin


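A quick sanity check that the installer extracted correctly and is on your PATH:

openshift-install version
# Should print the release, e.g. openshift-install 4.14.0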

Installing the OpenShift CLI

To install the oc command-line tool:

curl -o openshift-client-linux.tar.gz https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_RELEASE/openshift-client-linux-$OCP_RELEASE.tar.gz
sudo tar -zxvf openshift-client-linux.tar.gz -C /usr/local/bin
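Verify the client is available:

oc version --client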

Download your installation pull secret from the Red Hat OpenShift Cluster Manager (https://console.redhat.com/openshift/install/pull-secret); you will paste it into the pullSecret field of install-config.yaml.

Downloading the RHCOS Images

curl -o rhcos-4.14.0-x86_64-live.x86_64.iso  https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.14/4.14.0/rhcos-4.14.0-x86_64-live.x86_64.iso    
curl -o rhcos-4.14.0-x86_64-metal.x86_64.raw.gz https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.14/4.14.0/rhcos-4.14.0-x86_64-metal.x86_64.raw.gz
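Optionally, verify the ISO download against its SHA-256 checksum (the same value that appears in group_vars/all.yml later in this guide):

# Two spaces between checksum and filename are required by sha256sum
echo "d15bd7ae942573eece34ba9c59e110e360f15608f36e9b83ab9f2372d235bef2  rhcos-4.14.0-x86_64-live.x86_64.iso" | sha256sum -c -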

Creating the Installation Configuration File

Create a working directory and generate the installation configuration file:

mkdir -p /home/user/ocp-install/
cd /home/user/ocp-install
mv /<download-dir>/openshift-install .
./openshift-install create install-config --dir=.

Edit the generated install-config.yaml to match your environment settings, including platform, base domain, and cluster name.

apiVersion: v1
baseDomain: onlinecluster.com 
compute:
- hyperthreading: Enabled   
  name: worker
  replicas: 0 
controlPlane:
  hyperthreading: Enabled   
  name: master
  replicas: 3 
metadata:
  name: ooc 
platform:
  vsphere:
    vcenter: "https://*****" 
    username: "********"
    password: "******** "
    datacenter: "<datacenter name>"
    defaultDatastore: "default data store name"
    folder: "/folder structure path, where vm's will be residing" 
fips: false 
pullSecret: '....' 
sshKey: '..........' 
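The installer consumes install-config.yaml when it generates manifests, so keep a backup in case you need to rerun the installation:

cp install-config.yaml install-config.yaml.bak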

Creating the Kubernetes Manifest and Ignition Config Files

Generate the necessary Kubernetes manifest and Ignition config files:

openshift-install create manifests --dir=.

Open the manifests/cluster-scheduler-02-config.yml file, locate the mastersSchedulable parameter, and ensure that it is set to false. Save and exit.

rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml

openshift-install create ignition-configs --dir=.
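At this point the installation directory should contain the Ignition files and auth assets, roughly like this (actual listing may vary):

ls .
# auth/  bootstrap.ign  master.ign  metadata.json  worker.ign
# auth/ contains kubeconfig and kubeadmin-password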

Creating a Web Server on the Installation VM or a Separate VM

Update Your System
The first step is to update your package index. Open a terminal and run the following command:

sudo apt update

Install Apache:

sudo apt install apache2

After installation completes, Apache should start automatically. To verify:

sudo systemctl status apache2

Copying the Ignition Files to the Web Server

 sudo cp *.ign /var/www/html/
 ls -ltr /var/www/html/
 sudo chmod 775  /var/www/html/*.ign

Place the rhcos-4.14.0-x86_64-metal.x86_64.raw.gz image inside /var/www/html/ as well, as shown below.
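sudo cp rhcos-4.14.0-x86_64-metal.x86_64.raw.gz /var/www/html/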

From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node:

curl -k http://<HTTP_server>/bootstrap.ign 
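You can validate the metal image the same way; an HTTP 200 response with the file's content length indicates it is being served:

curl -I http://<HTTP_server>/rhcos-4.14.0-x86_64-metal.x86_64.raw.gz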

Boot ISO Image Preparation Using Custom ISO and Ansible Automation

Step 1: Boot ISO Image Using coreos-iso-maker

  1. Download coreos-iso-maker: Use the provided GitHub repository link to download and set up coreos-iso-maker for generating the custom ISO image.
git clone https://github.com/chuckersjp/coreos-iso-maker.git
cd coreos-iso-maker
  2. Modify inventory.yaml:

Navigate to the coreos-iso-maker directory and modify inventory.yaml as shown. Update IP addresses, network settings, and other configurations as needed.

# Example snippet from inventory.yaml
all:
  children:
    bootstrap:
      hosts:
        oocbs01.openshift.onlinecluster.com:
          ipv4: 174.168.95.70
    master:
      hosts:
        ooccn01.openshift.onlinecluster.com:
          ipv4: 174.168.95.74
        ooccn02.openshift.onlinecluster.com:
          ipv4: 174.168.95.75
        ooccn03.openshift.onlinecluster.com:
          ipv4: 174.168.95.76
    worker:
      hosts:
        oocwn01.openshift.onlinecluster.com:
          ipv4: 174.168.95.80
        oocwn02.openshift.onlinecluster.com:
          ipv4: 174.168.95.81
        oocwn03.openshift.onlinecluster.com:
          ipv4: 174.168.95.82
        oocwn04.openshift.onlinecluster.com:
          ipv4: 174.168.95.83
        oocwn05.openshift.onlinecluster.com:
          ipv4: 174.168.95.84
        oocinfra01.openshift.onlinecluster.com:
          ipv4: 174.168.95.77
        oocinfra02.openshift.onlinecluster.com:
          ipv4: 174.168.95.78
        oocinfra03.openshift.onlinecluster.com:
          ipv4: 174.168.95.79
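Before generating the ISOs, it can help to confirm Ansible parses the inventory the way you expect (assuming the file is named inventory.yaml):

ansible-inventory -i inventory.yaml --graph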
  3. Modify group_vars/all.yml: Update the network and other configurations in the group_vars/all.yml file.
# group_vars/all.yml
---
# If only one network interface
gateway: 174.168.95.1
netmask: 255.255.255.0
# VMWare default ens192
# KVM default ens3
# Libvirt default enp1s0
# Intel NUC default eno1
interface: ens192

dns:
  - 174.168.1.1
  - 174.168.7.7

webserver_url: <current installer VM IP>
webserver_port: 80
# Ignition subpath in http server (optional, defaults to nothing)
#webserver_ignition_path: http://192.168.66.12/master.ign
# Path to download master ignition file will be
# http://192.168.1.20:8080/ignition/master.ign

# Drive to install RHCOS
# Libvirt - can be vda
install_drive: sda

# Timeout for selection menu during first boot
# '-1' for infinite timeout. Default '10'
boot_timeout: 10

# Choose the binary architecture
# x86_64 or ppc64le
arch: "x86_64"

ocp_version: 4.14.0
iso_checksum: d15bd7ae942573eece34ba9c59e110e360f15608f36e9b83ab9f2372d235bef2
iso_checksum_ppc64: ff3ef20a0c4c29022f52ad932278b9040739dc48f4062411b5a3255af863c95e
iso_name: rhcos-{{ ocp_version }}-x86_64-live.x86_64.iso
iso_name_ppc64: rhcos-{{ ocp_version }}-ppc64le-installer.ppc64le.iso
rhcos_bios: rhcos-{{ ocp_version }}-x86_64-metal.x86_64.raw.gz

Step 2: Ansible Playbook Execution

  1. Run Ansible playbook: Execute the Ansible playbook (playbook-single.yml) to automate the provisioning of OpenShift nodes.
#Execute Ansible playbook
ansible-playbook playbook-single.yml -K

Step 3: Uploading and Provisioning VMs

  1. Upload ISO for bootstrap node: Upload the custom ISO to the vSphere environment and attach it to the bootstrap VM. Ensure the disk.EnableUUID attribute is set to TRUE in the VM's advanced settings.

  2. Provision bootstrap node: Boot the VM and wait for the bootstrap node to complete its installation, until the login screen appears.

  3. Continue with master and worker nodes: Follow the same process to upload and provision the master nodes first, then the worker nodes.

Step 4: Monitoring and Verification

Monitor Bootstrap Process:

Check the progress of the bootstrap process using the OpenShift installer command.

# Monitor bootstrap process
./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level=info
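Once bootstrap-complete reports success, the bootstrap node has served its purpose: remove (or comment out) the OOCBS01 entries from the api and machine-api backends in /etc/haproxy/haproxy.cfg, reload HAProxy, and wait for the installation itself to finish:

# On the HAProxy nodes, after removing the bootstrap backend entries
sudo systemctl reload haproxy

# Back on the installation host
./openshift-install --dir <installation_directory> wait-for install-complete --log-level=info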

Verify Node Status: Confirm that all nodes are ready by checking their status.

# Verify node status
oc get nodes

Logging in to the Cluster Using the CLI

  • Prerequisite: the oc CLI is installed.
  • Export the kubeadmin credentials:
  • export KUBECONFIG=<installation_directory>/auth/kubeconfig
  • For <installation_directory>, specify the path to the directory that you stored the installation files in.
  • Verify you can run oc commands with the exported configuration:
  • oc whoami
  • Sample output: system:admin


  • To approve all pending CSRs, run the following command:
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
  • After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:
oc get nodes
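CSRs are typically issued in two rounds (client certificates first, then serving certificates), so you may need to repeat the approval command. To keep an eye on pending requests:

watch -n5 oc get csr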

Conclusion

You now have an OpenShift 4.14 cluster running on vSphere user-provisioned infrastructure: DNS and HAProxy load balancing in place, RHCOS nodes provisioned from a custom ISO built with coreos-iso-maker and Ansible, and a cluster you can manage with the oc CLI. From here, typical next steps are configuring an identity provider, setting up persistent storage, and moving the registry, monitoring, and logging workloads onto the dedicated infra nodes.