Introduction
Ceph is a free and open-source software-defined storage platform that provides object storage, block storage, and file storage built on a common distributed cluster foundation.
For us, it is meant to serve as an alternative to AWS S3 and MinIO.
Node Setup (Server)
For each node (server) that will be added to the Ceph cluster, it is ideal to have two storage devices allocated to it.
The first device should be small and dedicated to the system root.
The second should be as large as you want for the purpose of object storage; it will be mapped as a raw device and used for an OSD in Ceph.
Make sure to create an LVM volume on the first device. Don't tamper with the second device, as Ceph will prepare it as necessary.
Once the OS is installed and configured, use the below command to extend the LVM logical volume accordingly.
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
Then use the below command to resize the filesystem accordingly. This ensures that your root partition has adequate space for the Ceph components downloaded later.
sudo resize2fs /dev/ubuntu-vg/ubuntu-lv
Use the below command to view the resulting device and partition sizes
lsblk
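The output should look roughly like the below (the device names and sizes are illustrative; yours will differ). The key things to check are that the root logical volume now spans the first device and that the second device (here sdb) is untouched, ready for the OSD.
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                         8:0    0   50G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    2G  0 part /boot
└─sda3                      8:3    0   48G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0   48G  0 lvm  /
sdb                         8:16   0  500G  0 disk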
Installing Cephadm
Install the cephadm command so we can use it to bootstrap our Ceph setup. Follow the below steps to achieve this.
Update the apt package index first to get the latest versions of the components.
sudo apt update
Install chrony for network time synchronization. This is required.
sudo apt install chrony
Check that chrony was successfully installed
chronyc -v
Install docker
sudo apt install docker.io
Confirm that docker was successfully installed
docker -v
Confirm that the LVM commands are available
pvcreate
Confirm that python3 is installed
python3 --version
If you wish, make the python command resolve to python3. This is not required.
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 2
Install cephadm. This is what you will use to bootstrap Ceph.
sudo apt install -y cephadm
Confirm that cephadm was successfully installed
which cephadm
Bootstrap Ceph
After installing cephadm, we will now use it to bootstrap our Ceph setup. You need an internet connection for the following steps, as Ceph will be downloaded as part of them.
Bootstrap Ceph using cephadm
Replace <host-ip-addr> with the IP address of the host where the monitor daemon is running. Since this is a single-host setup, all the Ceph daemons run on the same host, so I will just replace this with the IP address of the host.
You can also change the initial dashboard user from admin to whatever else you wish.
You can also change the initial dashboard password from exampleXX123 to whatever else you wish.
sudo cephadm bootstrap --mon-ip <host-ip-addr> --dashboard-password-noupdate --initial-dashboard-user admin --initial-dashboard-password exampleXX123
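e.g. Assuming the host's IP address is 10.10.0.125 (the address used for the service URLs below):
sudo cephadm bootstrap --mon-ip 10.10.0.125 --dashboard-password-noupdate --initial-dashboard-user admin --initial-dashboard-password exampleXX123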
Once Ceph has been successfully bootstrapped, you should see the details of the dashboard. Below are some relevant Ceph services you should note. You may need to wait a few minutes for some of these services to fully start. You can map the IP address to a domain as you wish.
Ceph Dashboard - https://10.10.0.125:8443
Ceph Exporter - http://10.10.0.125:9283/metrics
Prometheus - http://10.10.0.125:9095/metrics
Alert Manager - http://10.10.0.125:9093/
Grafana - https://10.10.0.125:3000/
API Documentation - https://10.10.0.125:8443/docs
You need to install additional packages in order to use the other Ceph commands; this is for convenience. Install the ceph-common package (which provides the ceph and radosgw-admin base commands) so you can execute Ceph commands directly on the host.
sudo cephadm add-repo --release octopus
sudo cephadm install ceph-common
Check the status and version of Ceph
sudo ceph -s
sudo ceph -v
Confirm Ceph configuration file
ls -l /etc/ceph
Check all Ceph processes running
sudo ceph orch ps
Check Ceph Health Status
sudo ceph health detail
In order to successfully set up the RGW gateway, you must eliminate all errors and warnings from the dashboard. A key reason the RGW gateway will fail to come up is that there are still errors in the dashboard.
Settings (for a single-node cluster, to help remove the health warnings)
Turn off the warning for pools without replication
sudo ceph config set mon mon_warn_on_pool_no_redundancy false
Allow pool deletion (this is turned off by default).
sudo ceph config set mon mon_allow_pool_delete true
Reconfigure the default OSD pool size (the number of replicas). On a single node there are no other hosts to replicate to, so set it to 1.
sudo ceph config set global osd_pool_default_size 1
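If you want to confirm that the three settings above took effect, you can dump the current cluster configuration and look for them in the output:
sudo ceph config dump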
Set up Ceph OSDs (this can be done from the dashboard or from the terminal as below)
View devices available to your Ceph cluster for OSD
sudo ceph orch device ls
Zap (format) the device that you want to map the OSD to.
sudo ceph orch device zap <hostname> <device-path> --force
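e.g. Assuming the host is named cephadmin and the spare device is /dev/sdb:
sudo ceph orch device zap cephadmin /dev/sdb --force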
To see all OSDs configured on your Ceph Cluster
sudo ceph osd tree
View all available pools
sudo ceph osd lspools
Add OSDs
sudo ceph orch daemon add osd <hostname>:<device-path>
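e.g. The below adds an OSD on the host cephadmin backed by the device /dev/sdb:
sudo ceph orch daemon add osd cephadmin:/dev/sdb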
Create Pools
Ensure that the values for pg-size and pgp-size are powers of 2
sudo ceph osd pool create <pool-name> <pg-size> <pgp-size>
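e.g. The below creates a pool named tutorial (the pool name here is just an example) with 32 placement groups:
sudo ceph osd pool create tutorial 32 32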
RGW Setup
sudo radosgw-admin realm create --rgw-realm=<realm-name> --default
sudo radosgw-admin zonegroup create --rgw-zonegroup=<zonegroup-name> --master --default
sudo radosgw-admin zone create --rgw-zonegroup=<zonegroup-name> --rgw-zone=<zone-name> --master --default
sudo radosgw-admin period update --rgw-realm=<realm-name> --commit
sudo ceph orch apply rgw <realm-name> <zone-name> --placement="<num-daemons> <host1>"
e.g. The below will create an RGW gateway with global as the realm and tutorial as the zone, with 1 placement (daemon) on the host 10.10.0.125.
sudo ceph orch apply rgw global tutorial --placement="1 10.10.0.125"
Create S3 users for RGW.
Take note of the access key and secret key of each user you create, especially the user created with the --system flag, as that is the one whose access key and secret key you will use to activate the object gateway view on the portal.
radosgw-admin user create --uid="<username>" --display-name="<user display name>" --system
e.g.
radosgw-admin user create --uid=devto --display-name="Dev To" --system
View all rgw users
sudo radosgw-admin user list
Retrieve information about a user
sudo radosgw-admin user info --uid=<user-id>
e.g. The below retrieves all the information about the user devto
sudo radosgw-admin user info --uid=devto
In order for the object gateway to be visible on the portal, you need to map the access key and secret key of the user that was created with the --system flag to the Ceph dashboard.
Use the below commands to do so.
Make sure you copy the access key of the user into one file and the secret key into another file, and specify the file names accordingly in the below commands.
ceph dashboard set-rgw-api-access-key -i <file-containing-access-key>
ceph dashboard set-rgw-api-secret-key -i <file-containing-secret-key>
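e.g. Assuming you saved the keys of the devto user into files named access_key.txt and secret_key.txt (the file names are just examples):
ceph dashboard set-rgw-api-access-key -i access_key.txt
ceph dashboard set-rgw-api-secret-key -i secret_key.txt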
You should be able to access the Object gateway view on the ceph dashboard now and create more users and buckets.
If you want information on how to access and push items to the buckets you have created, use the below link I found online.
How To Configure AWS S3 CLI for Ceph Object Gateway Storage
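As a quick preview, below is a minimal sketch using the AWS CLI. The endpoint port and bucket name are assumptions for illustration (cephadm deploys RGW on port 80 by default unless you specify otherwise), and the access and secret keys are those of the RGW user you created earlier.
aws configure set aws_access_key_id <access-key>
aws configure set aws_secret_access_key <secret-key>
# port 80 assumed; adjust to wherever your RGW daemon is listening
aws --endpoint-url http://10.10.0.125:80 s3 mb s3://test-bucket
aws --endpoint-url http://10.10.0.125:80 s3 cp ./hello.txt s3://test-bucket/
aws --endpoint-url http://10.10.0.125:80 s3 ls s3://test-bucket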