How do I use IBM Cloud Satellite with Wasabi?

Wasabi has been validated for use with IBM Cloud Satellite. The IBM Cloud Satellite managed distributed cloud solution delivers cloud services, APIs, access policies, security controls, and compliance. To use Wasabi Object Storage with workloads running in IBM Cloud Satellite Locations, follow the instructions below.


Table of Contents

1. Reference Architecture

2. Prerequisites

3. Integration scenarios


1. Reference Architecture



2. Prerequisites


3. Integration scenarios

There are multiple use cases for the IBM Cloud Satellite - Wasabi integration. This Knowledge Base article outlines a few use cases.

To learn more about Satellite use cases in general, please click here.

Base integration

Satellite Locations are mini cloud regions and, as such, do not need a direct integration with Wasabi cloud object storage in most cases. You can run any application workload that uses object storage through the S3 API unchanged, as long as the application does not hardcode AWS (or IBM, MinIO, etc.) endpoints.

Wasabi provides an S3-compatible object storage solution with defined API endpoints. Existing applications that use 'boto3' or any other S3-compatible API library can simply supply the Wasabi endpoint and user credentials.


Below are examples of some products within IBM's data portfolio that work out of the box (OOTB) with Wasabi.

Running IBM Cloud Pak for Data and Watson Studio with Wasabi




Running Red Hat Open Data Service on Satellite with a Wasabi data connection.






Satellite Storage Templates

For workloads that depend on mounted filesystems backed by object storage, you can use the 'IBM Object Storage plugin' to define the Wasabi endpoints and bucket(s) to be mounted. Within the context of Satellite, there is a Satellite Storage Template to ease the use of the IBM Object Storage plugin.

With Satellite storage templates, you can create a storage configuration that can be deployed across your clusters without the need to re-create the configuration for each cluster.

To learn more about the Satellite Storage Templates, click here.

Follow the steps below to create a Satellite Storage Template.

3.1. Log in to your IBM Cloud account and click the Dashboard icon.


3.2. Click on "Satellite --> Overview".



3.3. On the Satellite Overview page, click on "Locations".



3.4. On the Locations page, click on the desired location.


3.5. On the location configuration page, click "Storage" and "Create storage configuration". 



3.6. In the Basics section, provide a name for the configuration. Select "IBM Object Storage Plugin" as the 'Storage type' in the drop-down. Click "Next".



3.7. In the Parameters section, change the COS plugin license field to "true" to accept the Apache open source license terms. Select Wasabi as the Object Storage provider.



Provide the desired region in the Object Storage region field and click "Next".


Note: This configuration example discusses the use of Wasabi's us-east-2 storage region. Please provide the appropriate region for your configuration.

3.8. Click through to the Assignment tab and assign the storage template to the clusters desired.

The result is the creation of StorageClasses within the assigned OpenShift clusters (which, in this example, are running in our location).

3.9. In the cluster(s), you also need to create a Kubernetes Secret with the associated Wasabi access credentials. The secret must contain two key-value pairs, `access-key` and `secret-key`, holding the respective Wasabi credentials for the account to use.
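Steps 3.9 through 3.14 create this secret through the OpenShift web console. As an alternative sketch, an equivalent manifest applied with `kubectl apply -f` could look like the following; the secret name `wasabi-cos` and namespace `wasabi-test` are assumptions chosen to match the PVC example later in this article.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: wasabi-cos        # must match the secret name referenced by the PVC
  namespace: wasabi-test  # namespace of the workload that mounts the bucket
type: Opaque
stringData:               # stringData accepts plain values; no base64 encoding needed
  access-key: <WASABI_ACCESS_KEY>   # placeholder: your Wasabi access key
  secret-key: <WASABI_SECRET_KEY>   # placeholder: your Wasabi secret key
```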

Log in to the OpenShift cluster running in the location by clicking "Services --> <cluster name>".



3.10. To log in to the OpenShift web UI, click "OpenShift web console".



3.11. In the OpenShift console, navigate to "Storage --> StorageClasses" in the left-hand pane.

Note: These StorageClasses were created by Satellite because we created a storage template for this location.



3.12. Navigate to "Workloads --> Secrets". Expand the "Create" dropdown and select "Key/value secret".



3.13. In the Create key/value secret page, provide a name for the secret in the "Secret name" field.

In the "Key" field, enter the term "access-key" and in the "Value" field, enter the access key of your Wasabi account.

Click on "Add key/value" to add the secret key.



3.14. In the "Key" field, enter the term "secret-key", in the "Value" field, enter the secret key of your Wasabi account and click "Create".


We now have a secret created for our OpenShift cluster. 



Here is a prototypical PVC for a workload that wants to mount an S3 bucket into the filesystem:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: s3fs-test-pvc
  namespace: wasabi-test
  annotations:
    volume.beta.kubernetes.io/storage-class: "ibmc-s3fs-cos"
    ibm.io/auto-create-bucket: "true"
    ibm.io/auto-delete-bucket: "false"
    ibm.io/bucket: "davet-test-011"
    ibm.io/object-path: ""    # Bucket's sub-directory to be mounted (OPTIONAL)
    ibm.io/region: "us-east-1"
    ibm.io/secret-name: "wasabi-cos"
    ibm.io/stat-cache-expire-seconds: ""   # stat-cache-expire time in seconds; default is no expire
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi # fictitious value

Here is an example pod specification that would use that PVC:

apiVersion: v1
kind: Pod
metadata:
  name: s3fs-test-pod
  namespace: wasabi-test
spec:
  containers:
  - name: s3fs-test-container
    image: anaudiyal/infinite-loop
    volumeMounts:
    - mountPath: "/mnt/s3fs"
      name: s3fs-test-volume
  volumes:
  - name: s3fs-test-volume
    persistentVolumeClaim:
      claimName: s3fs-test-pvc

In this example, the bucket `davet-test-011` in Wasabi is mounted at `/mnt/s3fs` (that is, whatever path is specified in `mountPath`). You can verify the filesystem mount by inspecting the pod.

$ kubectl exec -it s3fs-test-pod -n <NAMESPACE_NAME> -- bash
root@s3fs-test-pod:/# df -Th | grep s3
s3fs fuse.s3fs 256T 0 256T 0% /mnt/s3fs

root@s3fs-test-pod:/# cd /mnt/s3fs/
root@s3fs-test-pod:/mnt/s3fs# ls

root@s3fs-test-pod:/mnt/s3fs# echo "Satellite and Wasabi integration" > sample.txt
root@s3fs-test-pod:/mnt/s3fs# ls
sample.txt
root@s3fs-test-pod:/mnt/s3fs# cat sample.txt
Satellite and Wasabi integration