Ryan Tiffany
Migrating IBM Cloud Object Storage Data Between Accounts

Today I will show you how to migrate the contents of one IBM Cloud Object Storage (ICOS) bucket to a different ICOS instance. We will use the tool rclone to sync the contents between buckets. In this scenario the ICOS instances live on the same account, but the process works between distinct IBM Cloud accounts as well.

Pre-reqs

  • HMAC credentials generated for each instance of Cloud Object Storage. See this guide for generating ICOS credentials with HMAC, or the CLI sketch after this list.
  • rclone installed. See the official installation docs here.
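If you prefer the command line, one way to generate HMAC credentials is with the IBM Cloud CLI; a minimal sketch (the key name and instance name below are placeholders):

$ ibmcloud resource service-key-create my-hmac-key Writer \
    --instance-name "my-cos-instance" \
    --parameters '{"HMAC": true}'

The resulting credential JSON contains a cos_hmac_keys section; its access_key_id and secret_access_key values are what rclone will ask for below.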

Configuring rclone

Once you have rclone installed, you will need to generate a configuration file that defines the two ICOS instances. You can do this by running rclone config:

$ rclone config
2020/01/16 09:39:33 NOTICE: Config file "/Users/ryan/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> icos-instance-1
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / 1Fichier
   \ "fichier"
 2 / Alias for an existing remote
   \ "alias"
 3 / Amazon Drive
   \ "amazon cloud drive"
 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)
   \ "s3"
 5 / Backblaze B2
   \ "b2"
...

Choose option 4 to get the list of S3-compatible providers, then choose IBM COS S3:

Storage> 4
** See help for s3 backend at: https://rclone.org/s3/ **

Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
   \ "Alibaba"
 3 / Ceph Object Storage
   \ "Ceph"
 4 / Digital Ocean Spaces
   \ "DigitalOcean"
 5 / Dreamhost DreamObjects
   \ "Dreamhost"
 6 / IBM COS S3
   \ "IBMCOS"
 7 / Minio Object Storage
   \ "Minio"
 8 / Netease Object Storage (NOS)
   \ "Netease"
 9 / Wasabi Object Storage
   \ "Wasabi"
10 / Any other S3 compatible provider
   \ "Other"

When prompted for env_auth, choose 1 to enter credentials directly, then add your HMAC Access Key and Secret Key:

env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> xxxxxxxxxxxxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Use this if unsure. Will use v4 signatures and an empty region.
   \ ""
 2 / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
   \ "other-v2-signature"
region> 1

Next you will want to choose the IBM Cloud Object Storage endpoint and the storage tier for the bucket you will be using. In this instance I am targeting the US Cross Region endpoint and a standard tier bucket:

Endpoint for IBM COS S3 API.
Specify if using an IBM COS On Premise.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / US Cross Region Endpoint
   \ "s3-api.us-geo.objectstorage.softlayer.net"
 2 / US Cross Region Dallas Endpoint
   \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
 3 / US Cross Region Washington DC Endpoint
   \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
...
endpoint> 1

Location constraint - must match endpoint when using IBM Cloud Public.
For on-prem COS, do not make a selection from this list, hit enter
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / US Cross Region Standard
   \ "us-standard"
 2 / US Cross Region Vault
   \ "us-vault"
 3 / US Cross Region Cold
   \ "us-cold"
 4 / US Cross Region Flex
   \ "us-flex"
...

At the next prompt you will need to specify an ACL policy. I am choosing private:

Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
   \ "public-read"
 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
   \ "authenticated-read"
acl> 1

Skip the advanced config and rclone will present your new configuration details. Double-check that everything is correct, select y to save the remote, and then choose n at the main menu to add your second ICOS instance.

Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
--------------------
[icos-instance-1]
type = s3
provider = IBMCOS
env_auth = false
access_key_id = xxxxxx
secret_access_key = xxxxxxxxx
endpoint = s3-api.us-geo.objectstorage.softlayer.net
location_constraint = us-standard
acl = private
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
icos-instance-1      s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q>

Follow the same steps to add your second ICOS instance, and when you've verified that everything looks correct, choose q to quit the configuration process.
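For reference, once both remotes are added, your rclone config file (by default ~/.config/rclone/rclone.conf) will look roughly like this, with the keys redacted and the second remote reflecting whatever endpoint and location constraint you chose for it:

[icos-instance-1]
type = s3
provider = IBMCOS
env_auth = false
access_key_id = xxxxxx
secret_access_key = xxxxxxxxx
endpoint = s3-api.us-geo.objectstorage.softlayer.net
location_constraint = us-standard
acl = private

[icos-instance-2]
type = s3
provider = IBMCOS
env_auth = false
access_key_id = xxxxxx
secret_access_key = xxxxxxxxx
endpoint = s3-api.us-geo.objectstorage.softlayer.net
location_constraint = us-standard
acl = private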

Inspecting our ICOS buckets

With rclone configured we can start the sync between our buckets, but first let's list the contents of the source and destination buckets:

Source

$ rclone ls icos-instance-1:source-bucket-on-instance-1
    45338 AddNFSAccess.png
    48559 AddingNFSAccess.png
    66750 ChooseGroup.png
     2550 CloudPakApplications.png
     4643 CloudPakAutomation.png
     4553 CloudPakData.png
     5123 CloudPakIntegration.png
     4612 CloudPakMultiCloud.png
    23755 CompletedAddingNFSAccess.png
   174525 CreateNetworkShare1.png
    69836 CreateNetworkShare2.png
    76863 CreateStoragePool.png
    50489 CreateStoragePool1.png
    56297 CreateStoragePool2.png
     2340 applications-icon.svg
     6979 automation-icon.svg
   120584 cloud-paks-leadspace.png
     9255 data-icon.svg

Destination

$ rclone ls icos-instance-2:destination-bucket-on-instance-2
$
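If you aren't sure of the exact bucket names on either instance, rclone lsd lists the buckets at the top level of a remote:

$ rclone lsd icos-instance-1:
$ rclone lsd icos-instance-2: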

Syncing Bucket Objects

In this example I am going to sync the contents of the bucket source-bucket-on-instance-1 on my first ICOS instance to the bucket destination-bucket-on-instance-2 on my second ICOS instance. The -P flag lets us see the progress of the sync operation.
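If you want to preview what would change before touching the destination, the same command accepts the --dry-run flag:

$ rclone sync --dry-run icos-instance-1:source-bucket-on-instance-1 icos-instance-2:destination-bucket-on-instance-2

Once the output looks right, run the real sync: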

$ rclone sync -P icos-instance-1:source-bucket-on-instance-1 icos-instance-2:destination-bucket-on-instance-2
Transferred:      754.933k / 754.933 kBytes, 100%, 151.979 kBytes/s, ETA 0s
Errors:                 0
Checks:                 0 / 0, -
Transferred:           18 / 18, 100%
Elapsed time:        4.9s

Now if we look at the destination-bucket-on-instance-2 bucket again we'll see our files have synced over:

$ rclone ls icos-instance-2:destination-bucket-on-instance-2
    45338 AddNFSAccess.png
    48559 AddingNFSAccess.png
    66750 ChooseGroup.png
     2550 CloudPakApplications.png
     4643 CloudPakAutomation.png
     4553 CloudPakData.png
     5123 CloudPakIntegration.png
     4612 CloudPakMultiCloud.png
    23755 CompletedAddingNFSAccess.png
   174525 CreateNetworkShare1.png
    69836 CreateNetworkShare2.png
    76863 CreateStoragePool.png
    50489 CreateStoragePool1.png
    56297 CreateStoragePool2.png
     2340 applications-icon.svg
     6979 automation-icon.svg
   120584 cloud-paks-leadspace.png
     9255 data-icon.svg
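You can also ask rclone to verify that the two buckets match by comparing file sizes and hashes with rclone check:

$ rclone check icos-instance-1:source-bucket-on-instance-1 icos-instance-2:destination-bucket-on-instance-2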

Taking it Further

  • Sync Options - The sync operation makes the source and destination identical, modifying the destination only: the destination is updated to match the source, including deleting files if necessary. If you need to change this default behavior, take a look at these additional configuration options for the sync command.

  • Automated Sync - If you need to set up an automatic sync between buckets, use a scheduling tool like Task Scheduler on Windows or crontab on Linux/macOS (see the sketch after this list).

  • Supported rclone commands - The full list of rclone subcommands for interacting with Cloud Object Storage.
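As a sketch of the crontab approach, the entry below re-runs the sync every night at 2 AM and writes a log; the rclone path and log location are examples, so adjust them for your system:

$ crontab -e
# Run the bucket sync nightly at 2 AM and log the results
0 2 * * * /usr/local/bin/rclone sync icos-instance-1:source-bucket-on-instance-1 icos-instance-2:destination-bucket-on-instance-2 --log-file /var/log/rclone-sync.log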
