How do I use Splunk with Wasabi?

Splunk is now certified for use with Wasabi. To use this product with Wasabi, please follow the instructions below. 

Minimum version:

  • Splunk Enterprise 7.2.6 is necessary for Clustered Indexer Deployments
  • Splunk Enterprise for Standalone Indexer Deployments

Documentation Reference:

1. First, a bucket must be created in Wasabi for Smart Store to connect to; in this case, we named it “smartstore”. Enable versioning on the bucket and choose the Wasabi region closest to the Splunk deployment.


2. The Cache Manager needs to be enabled on each Indexer on which Smart Store will be utilized. These settings should be verified with Splunk in order to have a functional deployment: too small a cache will result in premature eviction of buckets from the local cache, while too large a cache will take up excess space in local storage. Using a reference ingest rate of 100 GB per day, we arrived at a cache size of 1460 MB. We assumed 1 day of hot storage, 30-day cache retention, 36-month full retention, a replication factor of 2, and a compression factor of 50%.

Reference Configs:

For our testing, we added this stanza to server.conf in /opt/splunk/etc/apps/search/local/:

[cachemanager]

# max_cache_size is necessary to enable Smart Store

max_cache_size = 1460

# hotlist settings protect critical data from eviction; these settings will vary per deployment

hotlist_bloom_filter_recency_hours = 1

hotlist_recency_secs = 60

3. Next, we want to add our Wasabi volume information to indexes.conf in order to connect to the remote storage. This stanza usually goes at the top of the conf file.

4. Give the volume a name, such as wasabi; the name itself is arbitrary and serves only as a reference within Splunk to the remote storage. Under path, reference the bucket you created.


[volume:wasabi]

storageType = remote

path = s3://smartstore/

remote.s3.access_key = <access-key>

remote.s3.secret_key = <secret-key>

remote.s3.endpoint = https://s3.wasabisys.com

remote.s3.auth_region = us-east-1

# Versioning is preferred; if it is not being utilized, add:

# remote.s3.supports_versioning = false

Note that this config example uses Wasabi's us-east-1 storage region. To use other Wasabi storage regions, use the appropriate Wasabi service URL as described in this article.
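As an illustrative sketch only, a volume stanza pointing at a different region would differ in the endpoint and region values. The eu-central-1 values below follow Wasabi's regional service URL pattern, but you should verify the exact URL for your region against Wasabi's service URL documentation before using it:

```
[volume:wasabi]
storageType = remote
path = s3://smartstore/
remote.s3.access_key = <access-key>
remote.s3.secret_key = <secret-key>
# Illustrative regional endpoint; confirm against Wasabi's service URL list
remote.s3.endpoint = https://s3.eu-central-1.wasabisys.com
remote.s3.auth_region = eu-central-1
```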

5. Restart Splunk:

/opt/splunk/bin/splunk restart

Create a sample txt file by typing:

echo "hello world" > test01.txt

Let's use Splunk to push it into Wasabi using the volume we created:

/opt/splunk/bin/splunk cmd splunkd rfs -- putF test01.txt volume:wasabi

If no errors occur, we can also list from the CLI to verify:

/opt/splunk/bin/splunk cmd splunkd rfs -- ls --starts-with volume:wasabi

As a result, we should see:

Size Name
12B test01.txt

We should also see it listed in the Wasabi web console.


6. If something is not working, it will be logged in /opt/splunk/var/log/splunk/splunkd-utility.log under the S3Client heading. You can check from the CLI with:

grep S3Client /opt/splunk/var/log/splunk/splunkd-utility.log
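To illustrate the kind of filtering involved, here is a minimal sketch that simulates two S3Client log lines in a temporary file and greps out only the error. The sample file path and log lines are fabricated for the demonstration; in a real deployment you would grep splunkd-utility.log directly as shown above:

```shell
# Write two fabricated sample lines (not real Splunk output) to a temp file
printf '%s\n' \
  '01-02-2024 12:00:00.000 INFO  S3Client - connection established' \
  '01-02-2024 12:00:01.000 ERROR S3Client - SignatureDoesNotMatch: check access/secret keys' \
  > /tmp/splunkd-utility-sample.log

# Narrow the grep to error-level S3Client lines only
grep 'ERROR.*S3Client' /tmp/splunkd-utility-sample.log
```

Narrowing the pattern this way is useful on busy indexers, where INFO-level S3Client lines can drown out the actual failures.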

7. Now that we have verified connectivity, we can add this remote storage to a provisioned index. In this case, the index was also called wasabi. We need to mount the volume under the wasabi index stanza in indexes.conf.

Yet another disclaimer: these settings will vary per deployment, and you should check with Splunk before rolling them into production. The key part is remotePath, which references the volume we created as well as the index name.


[wasabi]

coldPath = $SPLUNK_DB/wasabi/colddb

enableDataIntegrityControl = 0

enableTsidxReduction = 0

homePath = $SPLUNK_DB/wasabi/db

maxTotalDataSizeMB = 512000

thawedPath = $SPLUNK_DB/wasabi/thaweddb

remotePath = volume:wasabi/wasabi

hotlist_bloom_filter_recency_hours = 48

hotlist_recency_secs = 86400

Once that is edited, restart Splunk:

/opt/splunk/bin/splunk restart

8. Now the wasabi volume is linked to the specific index, and we can begin ingesting data into it. Data is uploaded to remote storage once it rolls from hot to warm, which in this case happens after 1 day or 1460 MB. If we want to force a roll for test purposes, we can perform an internal REST call to make it happen.

The format is:

/opt/splunk/bin/splunk _internal call /data/indexes/<index_name>/roll-hot-buckets -auth (admin_username):(admin_password)

(if you do not use -auth, it will simply prompt you for credentials)

/opt/splunk/bin/splunk _internal call /data/indexes/wasabi/roll-hot-buckets

9. Once the bucket is rolled to warm, we should see it populate in its own folder within our Wasabi bucket.


Now Smart Store is fully enabled for the specific index.

Troubleshooting info: 

Note: Splunk Smart Store dashboards are available within the Splunk Monitoring Console to check status and errors related to the Smart Store deployment.
