Best Practices for Splunk on Pure Storage (2023)



This document covers best practices for Splunk on Pure Storage. This includes the Splunk Classic architecture with hot/warm buckets on Pure FlashArray and cold buckets on Pure FlashArray over FC/iSCSI or on Pure FlashBlade over NFS, and the Splunk SmartStore architecture with data on Pure FlashBlade over S3, including Splunk Multisite SmartStore on FlashBlade.


Splunk Classic architecture

  • Hot/Warm/Cold on Pure FlashArray over FC/iSCSI
  • Cold on Pure FlashBlade over NFS

Splunk SmartStore architecture

  • Hot/Warm cache on Pure FlashArray over FC/iSCSI (or DAS)
  • Remote store on Pure FlashBlade over S3

Splunk Classic architecture

Raw volumes on FlashArray (hot/warm, cold)

Configuring volumes for Splunk indexers couldn't be easier: thanks to the unique capabilities of flash memory and the design of the Purity Operating Environment, the following factors are irrelevant or unimportant on FlashArray.




Stripe width and depth


The Purity Operating Environment automatically distributes data across all drives in the array.

RAID level


Pure FlashArray uses RAID-HA, designed to protect against three failure modes unique to flash storage: device failure, bit errors, and performance variability.

Smart data placement


The Purity Operating Environment is designed from the ground up to take advantage of the unique capabilities of flash memory. Because it is no longer tied to the disk paradigm, "hot" and "cold" data placement on specific drives is irrelevant.

To simplify bucket management and enable hot/cold bucket backups, we recommend using separate volumes for each indexer's hot/warm, cold, and frozen buckets (if you choose to keep frozen buckets on FlashArray).

Use the following volume layout per bucket type:

  • Hot/Warm: 1 FlashArray volume per indexer. Separate volume stanza for hot/warm buckets in indexes.conf, e.g.
    [volume:hot]
    path = /hot/splunk

  • Cold: 1 FlashArray volume per indexer. Separate volume stanza for cold buckets, e.g.
    [volume:cold]

  • Frozen: 1 FlashArray volume per indexer. coldToFrozenDir or coldToFrozenScript in the index stanza.

Be sure to mount these FlashArray volumes on the relevant indexers at the same mount points (such as "/hot" or "/cold") so that indexes.conf takes effect across all indexers.

Because Pure FlashArray volumes are always thin provisioned, Splunk administrators can provision large volumes to avoid adding additional volumes to accommodate space growth.

Keep all FlashArray volumes the same size for all indexers in the cluster to avoid space imbalances.

Linux mount options

You can mount FlashArray volumes with EXT4 or XFS file systems on Splunk indexers. As buckets age and folders are deleted, the underlying block storage must use TRIM/unmap commands to reclaim space. To do this, you can use the mount discard option, which will send a TRIM command to the FlashArray to free up the space occupied by these directories.

These are the recommended mount options:

discard, noatime

If the discard option is not preferred per your standard practices, be sure to run the fstrim command periodically (once a day or once a week) on the mount point to free up space at the FlashArray level.
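For example, a cron entry for the periodic fstrim run might look like the following (a sketch only; the mount points /hot and /cold and the schedule are assumptions to adapt to your environment):

```
# /etc/cron.d/splunk-fstrim -- illustrative; adjust mount points and schedule.
# Run fstrim nightly at 02:30 on the Splunk hot/warm and cold mount points
# so that freed blocks are returned to the FlashArray.
30 2 * * * root /usr/sbin/fstrim -v /hot && /usr/sbin/fstrim -v /cold
```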

Logical Volume Manager

It is recommended to use Logical Volume Manager (LVM) at the indexer level to aggregate FlashArray volumes into volume groups and partition them into logical volumes for the hot/warm or cold tiers. This lets you add capacity when the indexer needs more storage for the hot/warm or cold tiers hosted on the Pure FlashArray.
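A minimal sketch of such an LVM layout is shown below. The device paths, volume group and logical volume names, and sizes are all assumptions for illustration, not values from this document:

```shell
# Illustrative only: device paths, VG/LV names, and sizes are assumptions.
# Aggregate a FlashArray volume into a volume group and carve hot/cold LVs.
pvcreate /dev/mapper/flasharray-vol1
vgcreate splunk_vg /dev/mapper/flasharray-vol1

lvcreate -L 2T -n hot_lv  splunk_vg
lvcreate -L 8T -n cold_lv splunk_vg

# Later, grow capacity by adding another FlashArray volume to the VG
# and extending the logical volume plus its filesystem online.
pvcreate /dev/mapper/flasharray-vol2
vgextend splunk_vg /dev/mapper/flasharray-vol2
lvextend -L +2T /dev/splunk_vg/hot_lv
xfs_growfs /hot          # for XFS; use resize2fs for EXT4
```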

Linux Best Practices

Recommended Linux configuration for FlashArray, including multipath queue configuration, is documented on the Pure Storage support site on the Solutions page.

Cold tier on Pure FlashBlade

FlashBlade file system

  • Always create a separate NFS file system per indexer to host the cold tier.

  • The FlashBlade file system supports the NFSv3 and NFSv4.1 protocols. Choose one of them according to your requirement.


  • FlashBlade file systems are always thin provisioned, so Splunk administrators can provision large file systems up front to accommodate space growth without resizing later.

  • Do not set hard quota parameters that limit the size of the file system, as this restricts the flexibility to add more space when necessary.

  • Keep all NFS file systems the same size for all indexers in the cluster to avoid space imbalances.

Linux mount options

Mount the NFS file system on the cold tier index nodes using the following mount options.

  • Select one of the NFS protocol versions, "3" or "4.1", for the vers option.
    • Alternatively, use the nfs4 mount type without the vers option:
      $ mount -t nfs4 -o rw,bg,hard,nointr,tcp,rsize=16384 10.21.214.200:/splunk-cold01 /cold
  • Always mount the file system with the "hard" mount option and do not use "soft" NFS mounts.
  • Do not disable attribute caching.
  • Do not specify the wsize option, as the host will use the FlashBlade default (512K).
  • To persist these mounts across reboots, add them to the /etc/fstab file as shown below. The IP address specified below points to the FlashBlade Data VIP.
    10.21.214.200:/splunk-cold01 /cold nfs rw,bg,nointr,hard,tcp,vers=3,rsize=16384
    or
    10.21.214.200:/splunk-cold01 /cold nfs4 rw,bg,nointr,hard,tcp,rsize=16384

Note: Changing the default rsize from 512K to 16K or 32K may improve read performance.

Splunk does not recommend putting hot/warm tiers on NFS. See the Splunk documentation for more detail.

Splunk SmartStore architecture

Remote storage tier on FlashBlade

The minimum version of Purity//FB to run Splunk SmartStore on FlashBlade is 2.3.0.

This includes all of the object-related functionality required to host Splunk SmartStore index data on FlashBlade using the S3 protocol.

Remote volume

The volume definition for the remote store in indexes.conf points to the remote object store where Splunk SmartStore keeps warm data. The external volume is defined as follows.

[volume:remote_store]
storageType = remote
path = s3://<bucket-name>
# The following S3 settings are required only if you use access and secret keys
remote.s3.access_key = <access_key>
remote.s3.secret_key = <secret_key>
remote.s3.endpoint = http://<FlashBlade-Data-VIP>
remote.s3.supports_versioning = false
remote.s3.list_objects_version = v2

[splunk_index]
remotePath = volume:remote_store/$_index_name
repFactor = auto
homePath = <hot/warm path>
  • Each remote volume definition can have only one path, which represents a single S3 bucket name

  • External volumes pointing to S3 buckets on the FlashBlade should be limited to indexer clusters or standalone indexers. The same S3 bucket cannot be shared between two separate clusters or indexers.

  • A cluster of indexers or a single indexer can have one or more remote volumes.

  • SmartStore indexes are limited to a single remote volume and cannot be split across multiple remote volumes.

  • All peer nodes in an indexing cluster must use the same SmartStore configuration.

See this article for the recommended indexes.conf settings for Splunk SmartStore on Pure Storage FlashBlade.

Splunk Related Settings

Bucket size

Splunk has a predefined size for buckets, which can be set with the maxDataSize parameter in indexes.conf:

maxDataSize = <positive integer>|auto|auto_high_volume

The default is "auto" (750 MB); "auto_high_volume" is 10 GB on 64-bit systems and 1 GB on 32-bit systems.


Splunk's general recommendation for high-volume environments is to set the bucket size to auto_high_volume, but for Splunk SmartStore indexes the specific recommendation is to use "auto" (750 MB) or lower. This avoids timeouts when downloading large buckets from external object storage into the cache.

Recommended configurations:

maxDataSize = auto
TSIDX reduction

SmartStore does not support TSIDX reduction. Do not set the enableTsidxReduction parameter to "true" for SmartStore indexes.

Recommended configurations:

enableTsidxReduction = false
Bloom filter

Bloom filters play a key role in SmartStore, reducing downloads of tsidx data from the external object store into the cache. Do not set the createBloomfilter parameter to "false".

Recommended configurations:

createBloomfilter = true
Versioning

FlashBlade supports versioning, which SmartStore recommends to protect against accidental deletion. Splunk data is normally deleted when the configured retention period expires. Set the remote.s3.supports_versioning parameter to false: on S3 storage such as FlashBlade that supports versioning, Splunk then places delete markers on objects instead of physically deleting them, protecting against accidental deletions. If this parameter is set to true (the default), Splunk SmartStore permanently deletes all versions of the data when it expires, and it cannot be recovered.

Recommended configurations:

remote.s3.supports_versioning = false

To protect against accidental deletion, you must enable versioning at the FlashBlade bucket level at creation time, as versioning is disabled by default. If protection against accidental deletion is not desired, bucket-level versioning on the FlashBlade can be left at its default (none). The following image shows how to enable versioning on a bucket via the FlashBlade GUI.

[Image: Enabling bucket versioning in the FlashBlade GUI]

If your Purity//FB version (below 3.0) does not support enabling versioning through the GUI, use the following AWS CLI command to enable bucket versioning.

aws s3api put-bucket-versioning --bucket <bucket-name> --versioning-configuration Status=Enabled
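You can confirm the bucket's versioning state afterwards with the corresponding get call (the endpoint URL and bucket name below are placeholders to substitute):

```shell
# Placeholders: substitute your FlashBlade Data VIP and bucket name.
aws s3api get-bucket-versioning \
    --endpoint-url http://<flashblade-data-vip> \
    --bucket <bucket-name>
# A bucket with versioning enabled reports Status: Enabled.
```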
Space reclamation

With the remote.s3.supports_versioning parameter set to false and versioning enabled at the FlashBlade bucket level, data is not physically deleted when it expires. It is therefore recommended to set a lifecycle policy at the FlashBlade S3 bucket level to physically remove deleted data and reclaim space.

Note that if FlashBlade bucket-level versioning is not enabled and remote.s3.supports_versioning is set to false, any deletion of an object physically deletes the object.

As of Purity//FB 3.1, the lifecycle policy can be configured via the GUI.

To set a policy, select the account under Object Storage and click Buckets. It should open the page with lifecycle rules options.

[Image: Lifecycle Rules page in the FlashBlade GUI]

Click the + sign to the right of Lifecycle Rules, provide a rule name, and enter the number of days to keep older versions before physically deleting them. In the following example, we create a rule called rule1 that keeps old versions for 3 days; after 3 days, the old version of the object is deleted. Choose a "Keep Previous Versions" period based on your requirements. The minimum you can set is 1 day.

Do not set the "keep current version" option in the lifecycle policy, as this will expire live objects that Splunk is still using. To reclaim space from deleted objects, set only "keep previous versions".

Purity//FB 3.1.x

[Image: Lifecycle rule settings in Purity//FB 3.1.x]

Purity//FB 3.2.4 and above

[Image: Lifecycle rule settings in Purity//FB 3.2.4 and above]

For any version of Purity//FB below 3.1, the lifecycle policy can only be set through Python code, not through the GUI.

Below is sample Python code that sets the lifecycle policy for a particular bucket on FlashBlade. This code removes all noncurrent (older) versions of an object (deleted or overwritten) after, in this example, 3 days. Update the NoncurrentDays value according to your requirements.


import boto3

s3 = boto3.resource(
    service_name='s3',
    use_ssl=False,
    aws_access_key_id='<access_key>',
    aws_secret_access_key='<secret_key>',
    endpoint_url='http://<FlashBlade-Data-VIP>'
)

s3.meta.client.put_bucket_lifecycle_configuration(
    Bucket='<bucket-name>',
    LifecycleConfiguration={
        'Rules': [
            {
                'ID': 'rule1',
                'Filter': {},
                'Status': 'Enabled',
                'NoncurrentVersionExpiration': {'NoncurrentDays': 3},
            }
        ]
    }
)
Multipart upload/download

FlashBlade supports multipart uploads and downloads. The default part size of 128 MB should be sufficient; changing it is not recommended unless the new value is proven to improve performance.

List objects version

FlashBlade supports listObjects version V2, which is preferred over V1. V2 is highly recommended to improve Splunk's performance when listing objects.

Recommended configurations:

remote.s3.list_objects_version = v2
URL version

The remote.s3.url_version parameter is used to resolve endpoints and communicate with remote storage. This parameter accepts the v1 or v2 option.

In v1, the bucket name is the first element of the path, e.g. http://<endpoint>/<bucket>.

In v2, the bucket name is the outermost element of the subdomain, e.g. http://<bucket>.<endpoint>.

While FlashBlade can support either version, we found that using v2 with Splunk can have unexpected effects, such as objects not being deleted or Splunk CLI commands using rfs not working. It is therefore recommended to leave this parameter at its default, v1.

Do not set the remote.s3.url_version parameter to v2.

Cache manager settings

The cache manager plays a vital role in maximizing search efficiency through intelligent management of the local cache. It tends to keep buckets that have a high probability of participating in future searches and, when the cache fills up, evicts buckets that are less likely to be searched. For more information on how the cache manager works, see the SmartStore cache manager documentation.

The cache manager configuration is generally global in scope and is configured in the [cachemanager] stanza in server.conf. In an indexer cluster environment, the settings are configured on each peer node.

Other than the hotlist recency settings, no other cache manager setting is applied at the index level.

Eviction policy

Splunk recommends not changing the default eviction policy, lru, which evicts the least recently used buckets.

max_cache_size

Specifies the maximum size, in megabytes, of the disk partition that holds the cache. This setting is applied per indexer, not as a cluster-wide maximum. When used cache space exceeds max_cache_size, or free space falls below the sum of minFreeSpace and eviction_padding, the cache manager starts evicting data.


The Splunk SmartStore eviction policy generally favors recently searched buckets: the cache manager keeps the most recently searched buckets and evicts the least recently searched ones, even if they were created recently.

If most of your searches target recently indexed data, use the hotlist_recency_secs setting. This parameter sets a cache retention period based on the recency (age) of buckets in the cache, and helps protect the most recent buckets over others. This setting overrides the eviction policy.

A bucket's recency, or age, is the interval between the bucket's latest time and the current time. As the name suggests, this setting is in seconds; the default is 86400 seconds, or 1 day. The cache manager does not evict buckets younger than this setting unless all other buckets have already been evicted.

The setting can be made at the index level or globally in indexes.conf, but it is recommended to set this parameter at the index level to protect data on critical indexes rather than non-critical ones.

(Video) Optimising Splunk Storage on Amazon Web Services

Set this parameter with the max_cache_size setting in mind so that cache eviction works optimally. Do not set hotlist_recency_secs so high that it requires more cache than max_cache_size allows, as that impairs cache eviction.

For example, if daily ingest adds 100 GB of new buckets each day, a 500 GB cache can only hold the last five days of data, so a hotlist_recency_secs of more than 5 days prevents cache eviction from working optimally. If your searches always fall within the last 30 days and are limited to data ingested in the last 30 days, set hotlist_recency_secs to 2592000 seconds (30 days) and make sure max_cache_size can hold 30 days or more of daily ingest.
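The sizing arithmetic above can be sketched in a few lines of Python (a hedged illustration; the helper names are ours, not Splunk's):

```python
# Sketch of the cache-sizing arithmetic above; helper names are ours, not Splunk's.

def cache_retention_days(max_cache_size_gb: int, daily_ingest_gb: int) -> int:
    """Whole days of ingest the cache can hold before eviction must kick in."""
    return max_cache_size_gb // daily_ingest_gb

def hotlist_recency_secs_for(days: int) -> int:
    """Convert a desired retention window in days to a hotlist_recency_secs value."""
    return days * 86400

# Example from the text: 100 GB/day of new buckets and a 500 GB cache hold
# only the last 5 days, so hotlist_recency_secs beyond 5 days would fight
# the eviction policy.
print(cache_retention_days(500, 100))   # days of ingest the cache can hold
print(hotlist_recency_secs_for(30))     # 30 days expressed in seconds
```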

Recommended configurations:

Set the hotlist_recency_secs parameter at the index level for key indexes in indexes.conf, with an age consistent with the max_cache_size setting, to prevent cached data from being evicted prematurely.


Like hotlist_recency_secs, the hotlist_bloom_filter_recency_hours parameter prevents eviction of small metadata files such as bloom filters. Using bloom filters during searches avoids downloading larger bucket objects, such as raw data journals or time-series index (tsidx) files, from the external object store.

The default is 360 hours, or 15 days. With this setting, the cache manager delays evicting smaller files (such as bloom filters) until the difference between the bucket's latest time and the current time exceeds this setting. If searches are limited to data ingested in the last n days, set this parameter to the number of hours corresponding to n days for all key indexes. If searches are limited to the last 30 days, set this parameter to 720.

Recommended configurations:

Set the hotlist_bloom_filter_recency_hours parameter at the index level for key indexes in indexes.conf to protect smaller cached metadata files from eviction, based on the desired age.
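Putting the two hotlist settings together, an indexes.conf stanza for a critical index might look like the following. This is an illustrative sketch: the index name is hypothetical, and the values follow the 30-day example above.

```
# indexes.conf -- illustrative stanza; index name is hypothetical,
# values follow the 30-day example above.
[critical_index]
remotePath = volume:remote_store/$_index_name
repFactor = auto
# Keep the last 30 days of buckets in the cache (30 * 86400 seconds)
hotlist_recency_secs = 2592000
# Keep bloom filters cached for 30 days (720 hours)
hotlist_bloom_filter_recency_hours = 720
```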

Splunk Multisite SmartStore architecture

Splunk Multisite SmartStore is typically deployed to meet disaster recovery requirements. An on-premises Splunk Multisite SmartStore deployment is limited to two sites, each hosted in an on-premises data center in active-active mode, meaning data can be ingested through both sites.


1) To host Splunk Multisite SmartStore data on FlashBlade, you need two FlashBlades, one in each on-premises data center, running a minimum Purity//FB version of 3.3.3 with the "multisite-writable" bucket option. This capability includes all object-related functionality required to host Splunk SmartStore index data on the FlashBlades using the S3 protocol.

The minimum version of Purity//FB to run Splunk Multisite SmartStore on FlashBlades is 3.3.3.

See this document to configure FlashBlades for Splunk Multisite SmartStore.

2) An external VIP or Global Server Load Balancing (GSLB) directs traffic from the peers to the object store (FlashBlade) hosted at the site location. In the event of a site outage or local FlashBlade outage, the VIP or GSLB should be able to redirect the peer's traffic to the FlashBlade at the other site as needed.

remote.s3.endpoint points to the FlashBlade Data VIP configured via GSLB, so it should point to a URI rather than a hardcoded IP address.

See Splunk's documentation for on-premises Splunk Multisite SmartStore requirements.

Remote volume

The volume definition for the remote store in indexes.conf points to the remote object store where Splunk SmartStore keeps warm data. The external volume is defined as follows.

[volume:remote_store]
storageType = remote
# The bucket name must be the same on both FlashBlades
path = s3://<bucket-name>
remote.s3.access_key = <access_key>
remote.s3.secret_key = <secret_key>
# Object store URI provided via a third-party VIP or GSLB
remote.s3.endpoint = http://<VIP-or-GSLB-URI>
remote.s3.supports_versioning = false
remote.s3.list_objects_version = v2

[splunk_index]
remotePath = volume:remote_store/$_index_name
repFactor = auto
homePath = <hot/warm path>
  • Since both sites deploy the same indexes.conf file, the following parameters in [volume:remote_store] must match on both FlashBlades:
    • The S3 bucket name referenced by the path parameter must be the same on both FlashBlades.
    • access_key and secret_key must be the same on both FlashBlades.
  • remote.s3.endpoint should point to the FlashBlade URI via a third-party VIP (Virtual IP) or GSLB (Global Server Load Balancing).
    • VIP or GSLB routes traffic from a Splunk indexing node to a FlashBlade hosted at the site location. If a FlashBlade fails, VIP or GSLB redirects traffic to the remaining active FlashBlade as needed.
    • Using a FlashBlade Data VIP IP address directly does not work with multisite.
  • See Splunk's documentation to learn more about deployment topologies.

Replication delay

Because the two FlashBlades replicate objects asynchronously, there is a replication delay between them that depends on the distance between the sites. To prevent peer indexers at the other site from re-uploading their copies before object store replication completes, the following parameter must be updated.

remote_storage_upload_timeout

The remote_storage_upload_timeout parameter, under the [clustering] stanza of server.conf on all indexers at both sites, should be set to a time (in seconds) greater than the maximum replication delay between the two FlashBlades.


Recommended configurations:

remote_storage_upload_timeout = 600

Note: If you notice a replication delay greater than 600 seconds, you should update this setting to a higher value. If the latency between two FlashBlades is greater than 200 ms, 600 seconds is probably not enough. Please check and update accordingly.




Article information

Author: Kareem Mueller DO

Last Updated: 08/23/2023
