The SAN Guy

tips and tricks from an IT veteran…

Dynamic allocation pool limit has been reached

We were having issues with our backup jobs failing on CIFS share backups using Symantec NetBackup.  The jobs died with a “status 24″, which means the job was losing communication with the source.  Our backup administrator provided me with the exact times and dates of the failures, and I noticed that immediately preceding each failure this error appeared in the server log on the control station:

2012-08-05 07:09:37: KERNEL: 4: 10: Dynamic allocation pool limit has been reached. Limit=0x30000 Current=0x50920 Max=0x0
 

A quick Google search came up with this description of the error: “The maximum amount of memory (number of 8K pages) allowed for dynamic memory allocation has almost been reached. This indicates that a possible memory leak is in progress and the Data Mover may soon panic. If Max=0(zero) then the system forced panic option is disabled. If Max is not zero then the system will force a panic if dynamic memory allocation reaches this level.”

Because the error shows up right before each backup failure, the correlation was clear.  To fix it, you’ll need to modify the heap limit from the default of 0x00030000 to a larger size.  Here are the commands to do that:

.server_config server_2 -v "param kernel mallocHeapLimit=0x40000" (to change the value)
.server_config server_2 -v "param kernel" (to list the kernel parameters)
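If you just want to confirm the new value after making the change, you can filter the full parameter listing.  This is a minimal sketch, assuming you run it from the Control Station shell and pipe the output as usual:

.server_config server_2 -v "param kernel" | grep mallocHeapLimit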
 

Below is a list of all the kernel parameters:

Name                                                 Location        Current       Default
----                                                 ----------      ----------    ----------
kernel.AutoconfigDriverFirst                         0x0003b52d30    0x00000000    0x00000000
kernel.BufferCacheHitRatio                           0x0002093108    0x00000050    0x00000050
kernel.MSIXdebug                                     0x0002094714    0x00000001    0x00000001
kernel.MSIXenable                                    0x000209471c    0x00000001    0x00000001
kernel.MSI_NoStop                                    0x0002094710    0x00000001    0x00000001
kernel.MSIenable                                     0x0002094718    0x00000001    0x00000001
kernel.MsiRouting                                    0x0002094724    0x00000001    0x00000001
kernel.WatchDog                                      0x0003aeb4e0    0x00000001    0x00000001
kernel.autoreboot                                    0x0003a0aefc    0x00000258    0x00000258
kernel.bcmTimeoutFix                                 0x0002179920    0x00000002    0x00000002
kernel.buffersWatermarkPercentage                    0x0003ae964c    0x00000021    0x00000021
kernel.bufreclaim                                    0x0003ae9640    0x00000001    0x00000001
kernel.canRunRT                                      0x000208f7a0    0xffffffff    0xffffffff
kernel.dumpcompress                                  0x000208f794    0x00000001    0x00000001
kernel.enableFCFastInit                              0x00022c29d4    0x00000001    0x00000001
kernel.enableWarmReboot                              0x000217ee68    0x00000001    0x00000001
kernel.forceWholeTLBflush                            0x00039d0900    0x00000000    0x00000000
kernel.heapHighWater                                 0x00020930c8    0x00004000    0x00004000
kernel.heapLowWater                                  0x00020930c4    0x00000080    0x00000080
kernel.heapReserve                                   0x00020930c0    0x00022e98    0x00022e98
kernel.highwatermakpercentdirty                      0x00020930e0    0x00000064    0x00000064
kernel.lockstats                                     0x0002093128    0x00000001    0x00000001
kernel.longLivedChunkSize                            0x0003a23ed0    0x00002710    0x00002710
kernel.lowwatermakpercentdirty                       0x0003ae9654    0x00000000    0x00000000
kernel.mallocHeapLimit                               0x0003b5558c    0x00040000    0x00030000  (This is the parameter I changed)
kernel.mallocHeapMaxSize                             0x0003b55588    0x00000000    0x00000000
kernel.maskFcProc                                    0x0002094728    0x00000004    0x00000004
kernel.maxSizeToTryEMM                               0x0003a23f50    0x00000008    0x00000008
kernel.maxStrToBeProc                                0x0003b00f14    0x00000080    0x00000080
kernel.memSearchUsecs                                0x000208fa28    0x000186a0    0x000186a0
kernel.memThrottleMonitor                            0x0002091340    0x00000001    0x00000001
kernel.outerLoop                                     0x0003a0b508    0x00000001    0x00000001
kernel.panicOnClockStall                             0x0003a0cf30    0x00000000    0x00000000
kernel.pciePollingDefault                            0x00020948a0    0x00000001    0x00000001
kernel.percentOfFreeBufsToFreePerIter                0x00020930cc    0x0000000a    0x0000000a
kernel.periodicSyncInterval                          0x00020930e4    0x00000005    0x00000005
kernel.phTimeQuantum                                 0x0003b86e18    0x000003e8    0x000003e8
kernel.priBufCache.ReclaimPolicy                     0x00020930f4    0x00000001    0x00000001
kernel.priBufCache.UsageThreshold                    0x00020930f0    0x00000032    0x00000032
kernel.protect_zero                                  0x0003aeb4e8    0x00000001    0x00000001
kernel.remapChunkSize                                0x0003a23fd0    0x00000080    0x00000080
kernel.remapConfig                                   0x000208fe40    0x00000002    0x00000002
kernel.retryTLBflushIPI                              0x00020885b0    0x00000001    0x00000001
kernel.roundRobbin                                   0x0003a0b504    0x00000001    0x00000001
kernel.setMSRs                                       0x0002088610    0x00000001    0x00000001
kernel.shutdownWdInterval                            0x0002093238    0x0000000f    0x0000000f
kernel.startAP                                       0x0003aeb4e4    0x00000001    0x00000001
kernel.startIdleTime                                 0x0003aeb570    0x00000001    0x00000001
kernel.stream.assert                                 0x0003b00060    0x00000000    0x00000000
kernel.switchStackOnPanic                            0x000208f8e0    0x00000001    0x00000001
kernel.threads.alertOptions                          0x0003a22bf4    0x00000000    0x00000000
kernel.threads.maxBlockedTime                        0x000208f948    0x00000168    0x00000168
kernel.threads.minimumAlertBlockedTime               0x000208f94c    0x000000b4    0x000000b4
kernel.threads.panicIfHung                           0x0003a22bf0    0x00000000    0x00000000
kernel.timerCallbackHistory                          0x000208f780    0x00000001    0x00000001
kernel.timerCallbackTimeLimitMSec                    0x000208f784    0x00000003    0x00000003
kernel.trackIntrStats                                0x000209021c    0x00000001    0x00000001
kernel.usePhyDevName                                 0x0002094720    0x00000001    0x00000001

Using the server_stats command on Celerra / VNX File

server_stats is a CLI-based real-time performance monitoring tool from EMC for the Celerra and VNX File.  This post is meant to give a quick overview of the server_stats command with some samples of using it in a scheduled cron job. If you’re looking to dive into the server_stats feature, I’d suggest using the online manual pages (man server_stats) to get a good idea of all the features and reviewing the “Managing Statistics for VNX” guide from EMC here:  http://corpusweb130.emc.com/upd_prod_VNX/UPDFinalPDF/en/Statistics.pdf.

I don’t personally use it so I can’t explain how to set it up, but there is an open source tool called vnx2graphite that you can use to push server_stats data to Graphite. You can get it here: http://www.findbestopensource.com/product/fatz-vnx2graphite.   You can download Graphite here: http://graphite.wikidot.com/.

Here is the command line syntax:

server_stats <movername>
     -list
   | -info [-all|<statpath_name>[,...]]
   | -service { -start [-port <port_number>]
              | -stop
              | -delete
              | -status }
   | -monitor -action {status|enable|disable}
   | [
       [{ -monitor {statpath_name|statgroup_name}[,...]
        | -monitor {statpath_name|statgroup_name}
          [-sort <field_name>]
          [-order {asc|desc}]
          [-lines <lines_of_output>]
       }...]
       [-count <count>]
       [-interval <seconds>]
       [-terminationsummary {no|yes|only}]
       [-format {text [-titles {never|once|<repeat_frequency>}]|csv}]
       [-type {rate|diff|accu}]
       [-file <output_filepath> [-overwrite]]
       [-noresolve]
     ]

Here’s an explanation of a few of the useful table options and what to look for:

Syntax:  server_stats server_2 -i <interval in sec> -c <# of counts> -table <stat>

table cifs 

-Look at uSec/call. The output is in microseconds; divide by 1,000 to convert to milliseconds (for example, 8,000 uSec/call is 8 ms per call). This tells you how long it takes the Celerra to perform specific CIFS operations.

table dvol 

-This is for disk stats.  It shows the write distribution across all volumes.  Look for IO balance across resources.

table fsvol 

-Use this to check filesystem IO.  You’ll be able to monitor which file systems are getting all of the IO with this table.

Start with an interval of 1 first to look for spikes or bursts, and then increase it incrementally (10 seconds, 30 seconds, 1 minute, 5 minutes, etc.). You can also use Celerra Monitor to get Clariion stats; look at queueing, cache flushes, etc.  Writes go through to cache on the Clariion, and unless your write cache is filling up they should be faster than reads.

Here are some sample commands and what they do:

server_stats server_2 -table fsvol -interval 1 -count 10

-This correlates the filesystem to the meta-volumes and shows the % contribution of write requests for each meta-volume (FS Write Reqs %).

server_stats server_2 -table net -interval 1 -count 10

-This shows Network In (KiB/s) and Network In (Pkts/s) so you can figure out the packet size.  Do this for both in and out to verify the standard MTU size.

server_stats server_2 -summary nfs,cifs -interval 1 -count 10

-This will give a summary of performance stats for nfs and cifs.

Here are some additional sample commands, and how you can add to your crontab to automatically collect performance data:

Collect CIFS and NFS data every 5 minutes:

*/5 * * * * /nas/bin/server_stats server_2 -monitor cifs.smb1,cifs.smb2,nfs.v2,nfs.v3,nfs.v4,cifs.global,nfs.basic -format csv -terminationsummary no -i 5 -c 60 -type accu -file "/nas/quota/slot_2/perfstats/data/server_2/server_2_`date '+\%FT\%T'|sed s/://g`" > /dev/null

In the command above, the -type accu option tells the command to accumulate statistics with each capture rather than starting back at a baseline of zero. You can also use ‘diff’ to capture the difference from interval to interval.

Collect diskVol performance stats every 5 minutes:

*/5 * * * * /nas/bin/server_stats server_2 -monitor diskVolumes-std -i 5 -c 60 -file "/nas/quota/slot_2/perfstats/data/server_2/server_2_`date '+\%FT\%T'|sed s/://g`" > /dev/null

Collect top_talkers data every 5 minutes:

*/5 * * * * /nas/bin/server_stats server_2 -monitor nfs.client -i 5 -c 60 -file "/nas/quota/slot_2/perfstats/data/server_2/server_2_`date '+\%FT\%T'|sed s/://g`" > /dev/null

Below are some useful nfs and cifs stats that you can monitor (pulled from DART 8.1.2-51).  For a full list, run the command server_stats server_2 -i.

cifs.global.basic.totalCalls,cifs.global.basic.reads,cifs.global.basic.readBytes,cifs.global.basic.readAvgSize,cifs.global.basic.writes,cifs.global.basic.writeBytes,
cifs.global.basic.writeAvgSize,cifs.global.usage.currentConnections,cifs.global.usage.currentOpenFiles

nfs.basic,nfs.client,nfs.currentThreads,nfs.export,nfs.filesystem,nfs.group,nfs.maxUsedThreads,nfs.totalCalls,nfs.user,nfs.v2,nfs.v3,nfs.v4,nfs.vdm
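As a quick interactive example using a couple of the stat groups above (a sketch; adjust the interval and count to suit what you’re looking for):

server_stats server_2 -monitor nfs.basic,cifs.global -interval 10 -count 6 -terminationsummary yes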

 

Celerra Disk Provisioning Wizard incorrectly believes there are not enough drives available for provisioning

I recently had an issue attempting to extend our production NAS file pool on our NS-960.  We had just added six new 2TB SATA disks to the array, and when I launched the Disk Provisioning Wizard it gave me this error:

“The number of drives available for provisioning additional storage are insufficient”.

That of course wasn’t true, as a 4+2 RAID6 config is indeed supported on this platform and I had just added six drives. I did come up with a workaround to do it manually, thanks to some helpful advice from our local EMC technical rep.  I manually created a RAID6 raid group in a 4+2 config, and then created a single LUN using all of the available space in the raid group (about 7337GB).   Once the LUN is created, you can add it to the Celerra storage group; in my case it was named “Celerra_hostname”.

When adding the LUN to the storage group, there is a critical step that you must not skip: the HLU number must be modified.  After you click on a LUN and click Add, look for it in the list and notice that the far right column shows the HLU (Host LUN ID).  The LUN you just added will have a blank entry.  It doesn’t look like an editable field, but it is; simply click on the blank area where the number should be and you’ll get a drop-down box.  The number you choose must be greater than 15.  Once you’ve modified the HLU for the new LUN, click OK to complete the process.

Next, you’ll want to switch back over to the Celerra Management interface, click on the ‘Storage’ tab, then click on the ‘Rescan Storage Systems’ link.  You will get a warning message that states:

“Rescan detects newly available storage and storage systems. Do not rescan unless all primary Data Movers are operating normally. The operation might take a few minutes to complete.”  

Heed the warning and make sure your data movers are up and functional.  You can monitor the progress in the background tasks area.   On my first attempt the rescan failed with this error message:

“Storage API code=3593: SYMAPI_C_CLARIION_LOAD_ERROR.  An error occurred while data was being loaded from a Clariion.” | “No additional information is available” | “No recommended action is available”.

By the time I got that error it was the end of my work day, so I decided to get back to it the next morning and had planned on opening an SR.  When I re-ran the same scan the next day it worked fine and my production pool auto-extended.  Problem solved.

How to reserve a Celerra / VNX NAS share for a single file type or group of file types

Several years ago I posted on Celerra/VNX NAS file extension filtering (see here), but didn’t write about file system reservations for specific file types, which is also possible.  You can set up NAS shares so that only the file types you want stored there can be written to the share.

In order to do this, first navigate to the \\NAS_Server\C$ administrative share and open the “.filefilter” folder.  You’ll then want to create the following filter files to complete the configuration:

allfiles[@<sharename>][@NetBIOS_name] – this filter file prohibits all file types from being created on the share.  File types that you want excluded from this blanket deny are identified by regular filter files.

noext[@<sharename>][@NetBIOS_name] – this filter file prohibits files with no extensions from being created on the share.  It will prevent a user from saving a file with no filename extension.

<extension_name>[@<sharename>][@NetBIOS_name] – this filter file identifies the types of files you want allowed on the share.  You will need to configure the ACLs to identify which users and/or groups can create files on the share.  File types specified by regular filter files like this one are the exceptions to the allfiles restriction.  If you wanted to reserve a share for only outlook files and message files, you could create two filter files, pst[@<sharename>][@NetBIOS_name] and msg[@<sharename>][@NetBIOS_name], then set appropriate permissions.
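For example, to reserve a hypothetical share named “finance” for Outlook data and message files only, the .filefilter folder would end up containing something like this (the share name is just for illustration, and the optional NetBIOS name is omitted):

allfiles@finance   (blocks every file type on the share)
noext@finance      (blocks files with no extension)
pst@finance        (exception: allows .pst files)
msg@finance        (exception: allows .msg files)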

How to create a clone of a file system on a Celerra or VNX using nas_copy

Below are the steps used to create a clone of a file system on a Celerra or VNX.  Cloning can be done to the same data mover, a different data mover on the same array, or to a completely different Celerra or VNX data mover.

  • The source file system needs to be mounted read-only, or alternately you can use a checkpointed copy of the file system.  Using a checkpoint is the least disruptive method if the source file system is being used in production.
  • The destination file system must be the same size or larger than the source file system and must also be mounted read-only.
  • If you created the new file system for the clone copy on the same storage pool as the source file system, the copy performance will suffer as you’re reading/writing to the same disks.
  • The ReplicatorV2 license must be enabled for nas_copy to work.

To check the status of your installed licenses, use nas_license -list. The output looks like this:

[nasadmin@celerra01 ~]$ nas_license -list
key status value
site_key online 51 56 2e 69
cifs online
nfs online
iscsi online
snapsure online
replicatorV2 online

  • To enable the replicator license, run this command:

nas_license -create replicatorV2

  • Once the source file system (or checkpoint, depending on which one you chose) and target file system are mounted correctly, verify the correct interconnect you want to use for the copy. You can review the configured interconnects with this command:

[nasadmin@celerra01 ~]$ nas_cel -interconnect -list

  • To begin the clone, run the nas_copy command, which has the following syntax:

[nasadmin@celerra01 ~]$ nas_copy

-name <sessionName>
     -source
     {-fs {<name> | id=<fsId>} | -ckpt {<ckptName> | id=<ckptId>}
     -destination
     {-fs {id=<dstFsId>|<existing_dstFsName>}
     |-pool {id=<dstStoragePoolId>}|<dstStoragePool>}}
     [-from_base {<ckpt_name>|id=<ckptId>}]
     -interconnect {<name> | id=<interConnectId>}
     [-source_interface {<nameServiceInterfaceName> | ip=<ipaddr>}]
     [-destination_interface {<nameServiceInterfaceName> | ip=<ipaddr>}]
     [-overwrite_destination]
     [-refresh]
     [-full_copy]
     [-background]

  • Below is an example of a valid nas_copy command. This command will create a copy of the source file system on the same Data Mover.

[nasadmin@celerra01 ~]$ nas_copy -name copy_session -source -ckpt checkpoint_of_source_filesystem -destination -fs clone_of_Source_filesystem -interconnect loopback -background

  • You can monitor the progress of the clone copy using nas_replicate -list.  The output of the command looks like this:
Name               Type        Local Mover  Interconnect      Celerra        Status
Site1_VDM01        vdm         server_2     <--Site2_VNX5700  Site1_NS960    Stopped
Site2_Filesystem2  filesystem  server_2     <--Site1_NS960    Site2_VNX5700  OK
Site2_Filesystem3  filesystem  server_2     <--Site1_NS960    Site2_VNX5700  OK
Site2_Filesystem4  filesystem  server_2     <--Site1_NS960    Site2_VNX5700  OK
Site2_Filesystem5  filesystem  server_2     <--Site1_NS960    Site2_VNX5700  OK
.
.
.

Gathering performance data on a virtual Windows server

When troubleshooting a potential storage related performance problem on a virtual Windows server, it’s a bit more difficult to analyze because many virtual hosts share the same LUN for a datastore in ESX.  Using EMC’s Analyzer or ControlCenter Performance Manager only gives me statistics on specific disks or LUNs; I have no visibility into a specific virtual server with those tools.  When this situation arises, I use a Windows batch script to gather data with the typeperf command line utility for a specific time period and run it directly on the server.  Typically I’ll let it run for 24 hours and then analyze the data in Excel, where it’s easy to make charts and graphs to get a visual view of what’s going on.

Sometimes the most difficult thing to figure out is the correct syntax for the command and which parameters to use.  For reference, here is the command and its parameters:

Syntax:

Typeperf [Path [path ...]] [-cf FileName] [-f {csv|tsv|bin}] [-si interval] [-o FileName] [-q [object]] [-qx [object]] [-sc samples] [-config FileName] [-s computer_name]

Parameters:

-c { Path [ path ... ] | -cf FileName } : Specifies the performance counter path to log. To list multiple counter paths, separate each counter path with a space.
-cf FileName : Specifies the file name of the file that contains the counter paths that you want to monitor, one per line.
-f { csv | tsv | bin } : Specifies the output file format. File formats are csv (comma-delimited), tsv (tab-delimited), and bin (binary). Default format is csv.
-si interval [ mm: ] ss   : Specifies the time between samples, in the [mm:] ss format. Default is one second.
-o FileName   : Specifies the pathname of the output file. Defaults to stdout.
-q [ object ] : Displays and queries available counters without instances. To display counters for one object, include the object name.
-qx [ object ] : Displays and queries all available counters with instances. To display counters for one object, include the object name.
-sc samples : Specifies the number of samples to collect. Default is to sample until you press CTRL+C.
-config FileName : Specifies the pathname of the settings file that contains command line parameters.
-s computer_name : Specifies the system to monitor if no server is specified in the counter path.
/? : Displays help at the command prompt.
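As a quick sanity check before building the full collection script, you can list the available disk counters and then do a short test capture.  A minimal example (the counter names are the same ones used later in this post; the output file name is arbitrary):

typeperf -qx LogicalDisk
typeperf "\LogicalDisk(*)\Avg. Disk sec/Transfer" "\LogicalDisk(*)\Disk Transfers/sec" -si 60 -sc 30 -o disktest.csv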

EMC’s Analyzer vs. Windows Perfmon Metrics

I tend to look at Response time, disk queue length, Total/Read/Write IO, and Service time first.   I dive into how to interpret many of the SAN performance metrics in my older post here:

http://emcsan.wordpress.com/2012/09/25/san-performance-metrics/

The counters you’ll choose in Windows Performance Monitor don’t line up precisely, name for name, with what we commonly look at using EMC’s tools.  In addition, you can choose both ‘LogicalDisk’ and ‘PhysicalDisk’ objects when selecting the counters.

What is the difference between the Physical Disk vs. Logical Disk performance objects in Perfmon, and why monitor both? Their counters are calculated the same way but their scope is different. I generally use both “\LogicalDisk(*)\” and “\PhysicalDisk(*)\” when I run my perfmon script.

The Physical Disk performance object monitors disk drives on the computer. It identifies the instances representing the physical hardware, and the counters are the sum of the access to all partitions on the physical instance.

The Logical Disk Performance object monitors logical partitions. Performance monitor identifies logical disks by their drive letter or mount point. If a physical disk contains multiple partitions, this counter will report the values just for the partition selected and not for the entire disk. On the other hand, when using Dynamic Disks the logical volumes may span more than one physical disk, in this scenario the counter values will include the access to the logical disk in all the physical disks it spans.

Here are the performance monitor counters that I frequently use, and how they compare to EMC’s navisphere analyzer (or ECC):

“\LogicalDisk(*)\Avg. Disk Queue Length” – (Named the same as EMC) The average number of outstanding requests when the disk was busy
“\LogicalDisk(*)\%% Disk Time” – (No direct EMC equivalent) The “% Disk Time” counter is the “Avg. Disk Queue Length” counter multiplied by 100. It is the same value displayed in a different scale.
“\LogicalDisk(*)\Disk Transfers/sec” – Total Throughput (IO/sec) – the total number of individual disk IO requests completed over a period of one second.  We’ll use this value to help determine Disk Service Time.
“\LogicalDisk(*)\Disk Reads/sec” – Read Throughput (IO/sec)
“\LogicalDisk(*)\Disk Writes/sec” – Write Throughput (IO/sec)
“\LogicalDisk(*)\%% Idle Time” –  (No direct EMC equivalent) This counter provides a very precise measurement of how much time the disk remained in idle state, meaning all the requests from the operating system to the disk have been completed and there are zero pending requests. We’ll also use this to calculate disk service time.
“\LogicalDisk(*)\Avg. Disk sec/Transfer” – Response time (sec) – EMC uses milliseconds, windows uses seconds, so you’ll see 8ms represented as .008 in the results.
“\LogicalDisk(*)\Avg. Disk sec/Read” – Response times for read IO
“\LogicalDisk(*)\Avg. Disk sec/Write” – Response times for write IO

Disk Service Time is calculated with this formula:  Disk Utilization = 100 - % Idle Time, then Disk Service Time = Disk Utilization / Disk Transfers/sec.
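As a quick worked example (treating utilization as a fraction so the result comes out in seconds): if % Idle Time averages 75, Disk Utilization is 100 - 75 = 25%, or 0.25.  With 100 Disk Transfers/sec, Disk Service Time is roughly 0.25 / 100 = 0.0025 seconds, or about 2.5 ms per IO.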

Configuring the Script

This batch script collects all of the relevant data for disk activity.  After 24 hours, it will dump the data into a csv file.  The length of time is controlled by the combination of the “-sc” and “-si” parameters.  To collect data in one-minute intervals for 24 hours, you’d set -si to 60 (collect data every 60 seconds) and -sc to 1440 (1,440 one-minute samples = 24 hours). To collect data every minute for 30 minutes, you’d enter “-si 60 -sc 30”.  This script assumes you have a local directory on the C: drive named ‘Collection’.

@Echo Off
cd c:\Collection
@For /F "tokens=1,2,3,4 delims=/ " %%A in ('Date /t') do @(
Set All=%%A%%B%%C%%D
)
@For /F "tokens=1,2,3 delims=: " %%A in ('Time /t') do @(
Set Allm=%%A%%B%%C
)
typeperf "\LogicalDisk(*)\Avg. Disk Queue Length" "\LogicalDisk(*)\%% Disk Time" "\LogicalDisk(*)\Disk Transfers/sec" "\LogicalDisk(*)\Disk Reads/sec" "\LogicalDisk(*)\Disk Writes/sec" "\LogicalDisk(*)\%% Idle Time" "\LogicalDisk(*)\Avg. Disk sec/Transfer" "\LogicalDisk(*)\Avg. Disk sec/Read" "\LogicalDisk(*)\Avg. Disk sec/Write" "\PhysicalDisk(*)\Avg. Disk Queue Length" "\PhysicalDisk(*)\%% Disk Time" "\PhysicalDisk(*)\Disk Transfers/sec" "\PhysicalDisk(*)\Disk Reads/sec" "\PhysicalDisk(*)\Disk Writes/sec" "\PhysicalDisk(*)\%% Idle Time" "\PhysicalDisk(*)\Avg. Disk sec/Transfer" "\PhysicalDisk(*)\Avg. Disk sec/Read" "\PhysicalDisk(*)\Avg. Disk sec/Write" -si 60 -sc 1440 -o PerfCounters-%All%-%Allm%.csv

Celerra / VNX File replication job creation errors when selecting destination

I recently had an issue with setting up a new Celerra file system replication job. As soon as I selected the destination system I received the three errors below.

1) Query VDMs All. Cannot access any Data Mover on the remote system, hostname

Severity: Error
Brief Description: Cannot access any Data Mover on the remote system, hostname
Full Description: No Data Movers are available on the specified remote system to perform this operation
Recommended Action: 1) Check if the Data Movers on the specified remote system are accessible. 2) Ensure that the difference in system times between the local and remote Celerra systems or VNX systems does not exceed 10 minutes. Use NTP on the Control Stations to synchronize the system clocks. 3) Ensure that the passphrase on the local Control Station matches with the passphrase on the remote Control Station. 4) Ensure that the same local users that manage VNX for file systems exist on both the source and the destination Control Station. 5) Ensure that the global account is mapped to the same local account on both local and remote VNX Control Stations. Primus emc263860 provides more details.
Message ID: 13690601568

2) Query storage pools All. Execution failed: Segmentation fault: Operating system signal. [STRING.make_from_string]

Severity: Error
Brief Description: Execution failed: Segmentation fault: Operating system signal. [STRING.make_from_string]
Full Description: Operation failed for the reason described in the accompanying message.
Recommended Action: Correct the cause of the problem and repeat the operation.
Message ID: 13421840573

3) There are no destination pools available.

Severity: Info
Brief Description: There are no destination pools available.
Full Description: Destination side storage pools are not available.
Recommended Action: Check whether the storage pools have enough space.
Message ID: 26845970450

I was unable to determine the cause of the problem so I opened an SR with EMC.

It turns out there was a user discrepancy between the /etc/passwd file and the /nas/site/user_db file. This was causing the following error when checking the interconnect:

[nasadmin@celerra02 log]$ nas_cel -interconnect -l
Error 2237: Execution failed: Segmentation fault: Operating system signal. [STRING.make_from_string]

The output should look something like this:

[nasadmin@celerra02 log]$ nas_cel -interconnect -l
id    name          source_server destination_system destination_server
20001 loopback      server_2      DRSITE1            server_2
20003 SITE1VNX5500  server_2      DRSITE1            server_2
20004 SITE2NS960    server_2      DRSITE2            server_2
20007 SITE11NS960   server_2      DRSITE1            server_2
20005 SITE3VNX5500  server_2      DRSITE1            server_2
20006 SITE4VNX5500  server_2      DRSITE1            server_2
20008 SITE5NS120    server_2      DRSITE1            server_2
40001 loopback      server_4      DRSITE1            server_4
40003 SITE2NS40     server_4      DRSITE2            server_2

The problem was resolved by removing the entries from /nas/site/user_db that were not in the /etc/passwd file. The mismatch was caused by a manual modification of the passwd file by a sysadmin; some old entries had been removed and the matching changes were never made in the user_db file.
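If you run into the same symptom, a quick way to spot-check whether a particular account exists in both files is to grep them together.  The username below is just a placeholder, and this assumes user_db is a plain-text file like passwd:

grep nasadmin /etc/passwd /nas/site/user_db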

 

Scripting automatic reports for Isilon from the CLI

We recently set up a virtual demo of an Isilon system on our network as we are evaluating Isilon for a possible purchase.  You can obtain a virtual node that runs on ESX from your local EMC Isilon representative, along with temporary licenses to test everything out.  As part of the test I wanted to see if it was possible to create custom CLI scripts in the same way that I create them on the Celerra or VNX File.  On the Celerra, I run daily scripts that output file pool sizes, file systems & disk space, failover status, logs, health check info, checkpoint info, etc. to my web report page.  Can you do the same thing on an Isilon?

Well, to start with, Isilon’s commands are completely different.  The first step of course was to look at what was available to me.  All of the Isilon administration commands appear to begin with ‘isi’.  If you type isi by itself it will show you the basic list of commands and what they do:

isilon01-1% isi
Description:
OneFS cluster administration.
Usage:
isi  <subcommand>
[--timeout <integer>]
[{--help | -h}]
Subcommands:
Cluster Monitoring:
alert*           An alias for "isi events".
audit            Manage audit configuration.
events*          Manage cluster events.
perfstat*        View cluster performance statistics.
stat*            An alias for "isi status".
statistics*      View general cluster statistics.
status*          View cluster status.
Cluster Configuration:
config*          Manage general cluster settings.
email*           Configure email settings.
job              Isilon job management commands.
license*         Manage software licenses.
networks*        Manage network settings.
services*        Manage cluster services.
update*          Update OneFS system software.
pkg*             Manage OneFS system software patches.
version*         View system version information.
remotesupport    Manage remote support settings.
Hardware & Devices:
batterystatus*   View battery status.
devices*         Manage cluster hardware devices.
fc*              Manage Fibre Channel settings.
firmware*        Manage system firmware.
lun*             Manage iSCSI logical units (LUNs).
target*          Manage iSCSI targets.
readonly*        Toggle node read-write state.
servicelight*    Toggle node service light.
tape*            Manage tape and media changer devices.
File System Configuration:
get*             View file system object properties.
set*             Manage file system object settings.
quota            Manage SmartQuotas, notifications and reports.
smartlock*       Manage SmartLock settings.
domain*          Manage file system domains.
worm*            Manage SmartLock WORM settings.
dedupe           Manage Dedupe settings.
Access Management:
auth             Manage authentication, identities and role-based access.
zone             Manage access zones.
Data Protection:
avscan*          Manage antivirus service settings.
ndmp*            Manage backup (NDMP) settings.
snapshot         Manage SnapshotIQ file system snapshots and settings.
sync             SyncIQ management interface.
Protocols:
ftp*             Manage FTP settings.
hdfs*            Manage HDFS settings.
iscsi*           Manage iSCSI settings.
nfs              Manage NFS exports and protocol settings.
smb              Manage SMB shares and protocol settings.
snmp*            Manage SNMP settings.
Utilities:
exttools*        External tools.
Other Subcommands:
filepool         Manage filepools on the cluster.
storagepool      Configure and monitor storage pools
Options:
Display Options:
--timeout <integer>
Number of seconds for a command timeout.
--help | -h
Display help for this command.

Note that subcommands or actions marked with an asterisk (*) require root login.  If you log in with a normal admin account, you’ll get the following error message when you run the command:

isilon01-1% isi status
Commands not enabled for role-based administration require root user access.

Isilon’s OneFS operating system is based on FreeBSD as opposed to Linux for the Celerra/VNX DART OS.   Since it’s a unix based OS with console access, you can create shell scripts and cron job schedules just like the Celerra/VNX File.

Because all of the commands I want to run require root access, I had to create the scripts logged in as root.  Be careful doing this!  This is only a test for me; I would look for a workaround on a production system.    Knowing that I’m going to be FTPing the output files to my web report server, I started by creating the .netrc file in the /root folder.  This is where I store the default login and password for the FTP server. Permissions must be changed for it to work; use chmod 600 on the file after you create it.  It didn’t work for me at first, as the syntax is different on FreeBSD than on Linux, so looking at my Celerra notes didn’t help  (for Celerra/VNX File I used “machine <ftp_server_name> login <ftp_login_id> password <ftp_password>”).

For Isilon, the correct syntax is this:

default login <ftp_username> password <ftp_password>
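A filled-in /root/.netrc would look like this (the credentials here are placeholders); don’t forget to lock it down with chmod 600 /root/.netrc afterward:

default login reportftp password MyFtpPass123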
 

The script that FTP’s the files would then look like this for Isilon:

HOST="10.1.1.1"
ftp $HOST <<SCRIPT
put /root/isilon_stats.txt
put /root/isilon_repl.txt
put /root/isilon_df.txt
put /root/isilon_dedupe.txt
put /root/isilon_perf.txt
SCRIPT
 

For this demo, I created a script that generates reports for file system utilization, deduplication status, performance stats, replication stats, and array status.  For the filesystem utilization report, I used two different methods as I wasn’t sure which I’d like better.   Using ‘df -h -a /ifs’ will get you similar information to ‘isi storagepool list’, but the output format is different.   I used cron to schedule the job directly on the Isilon.
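The cron entries themselves (added with crontab -e as root) would look something like this; the script names and run times below are just placeholders:

0 6 * * * /root/isilon_reports.sh
15 6 * * * /root/isilon_ftp.sh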

Here is the reporting script:

TODAY=$(date)
HOST=$(hostname)
sleep 15
echo "---------------------------------------------------------------------------------" > /root/isilon_stats.txt
echo "Date: $TODAY  Host:  $HOST" >> /root/isilon_stats.txt
echo "---------------------------------------------------------------------------------" >> /root/isilon_stats.txt
/usr/bin/isi status >> /root/isilon_stats.txt
echo "---------------------------------------------------------------------------------" > /root/isilon_repl.txt
echo "Date: $TODAY  Host:  $HOST" >> /root/isilon_repl.txt
echo "---------------------------------------------------------------------------------" >> /root/isilon_repl.txt
/usr/bin/isi sync reports list >> /root/isilon_repl.txt
echo "---------------------------------------------------------------------------------" > /root/isilon_df.txt
echo "Date: $TODAY  Host:  $HOST" >> /root/isilon_df.txt
echo "---------------------------------------------------------------------------------" >> /root/isilon_df.txt
df -h -a /ifs >> /root/isilon_df.txt
echo " " >> /root/isilon_df.txt
/usr/bin/isi storagepool list >> /root/isilon_df.txt
echo "---------------------------------------------------------------------------------" > /root/isilon_dedupe.txt
echo "Date: $TODAY  Host:  $HOST" >> /root/isilon_dedupe.txt
echo "---------------------------------------------------------------------------------" >> /root/isilon_dedupe.txt
/usr/bin/isi dedupe stats  >> /root/isilon_dedupe.txt
echo "---------------------------------------------------------------------------------" > /root/isilon_perf.txt
echo "Date: $TODAY  Host:  $HOST" >> /root/isilon_perf.txt
echo "---------------------------------------------------------------------------------" >> /root/isilon_perf.txt
sleep 1
echo "  " >> /root/isilon_perf.txt
echo "--System Stats--" >> /root/isilon_perf.txt
echo "  " >> /root/isilon_perf.txt
/usr/bin/isi statistics system  >> /root/isilon_perf.txt
sleep 1
echo "  " >> /root/isilon_perf.txt
echo "--Client Stats--" >> /root/isilon_perf.txt
echo "  " >> /root/isilon_perf.txt
/usr/bin/isi statistics client  >> /root/isilon_perf.txt
sleep 1
echo "  " >> /root/isilon_perf.txt
echo "--Protocol Stats--" >> /root/isilon_perf.txt
echo "  " >> /root/isilon_perf.txt
/usr/bin/isi statistics protocol  >> /root/isilon_perf.txt
sleep 1
echo "  " >> /root/isilon_perf.txt
echo "--Protocol Data--" >> /root/isilon_perf.txt
echo "  " >> /root/isilon_perf.txt
/usr/bin/isi statistics pstat  >> /root/isilon_perf.txt
sleep 1
echo "  " >> /root/isilon_perf.txt
echo "--Drive Stats--" >> /root/isilon_perf.txt
echo "  " >> /root/isilon_perf.txt
/usr/bin/isi statistics drive  >> /root/isilon_perf.txt
 

Once the output files are FTP’d to the web server, I have a basic HTML page that uses iframes to show the text files.  The web page is then automatically updated as soon as the new text files are FTP’d.  Below is a screenshot of my demo report page.  It doesn’t show the entire page, but you’ll get the idea.

[Screenshot: IsilonReports demo report page]
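For reference, here is a minimal sketch of what an iframe-based page like this could look like (the file names match the script above; my actual page has more formatting):

<html>
  <body>
    <h2>Isilon Daily Report</h2>
    <iframe src="isilon_stats.txt" width="100%" height="300"></iframe>
    <iframe src="isilon_df.txt" width="100%" height="300"></iframe>
    <iframe src="isilon_dedupe.txt" width="100%" height="300"></iframe>
    <iframe src="isilon_repl.txt" width="100%" height="300"></iframe>
    <iframe src="isilon_perf.txt" width="100%" height="300"></iframe>
  </body>
</html>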

Easy reporting on Data Domain using the autosupport log

I was looking for an easy (and free) way to do some daily reporting on our Data Domain hardware.  I was most interested in reporting on disk space, but decided to gather some other data as well.  The easiest method I found is to use the info collected in the autosupport log.  First, I’ll explain how to automatically gather the autosupport log from your Data Domain, and then move on to how to pull only the information you want from it for reporting purposes.

First, you need to enable FTP on the Data Domain and add the IP address of the FTP server you’re going to use to pull the file.  Here are the commands you need to run on your Data Domain:

adminaccess enable ftp
adminaccess add ftp 10.1.1.1 
 

Next, you’ll want to pull the files from the Data Domain.  I am using a Windows FTP server to pull the autosupport files.  The Windows batch script I use is scheduled to run once a day and is listed below.  I use the ftp command and call a text file with the specific IP and login info for each Data Domain box.

ftp -s:10.1.1.2.txt
ftp -s:10.1.1.3.txt
ftp -s:10.2.1.2.txt
ftp -s:10.2.1.3.txt
 

The text files that the ftp script calls (named <ipaddress>.txt) look like this:

open 10.1.1.2
sysadmin
<password>
cd support
lcd e:\reports\dd\SITE1_DD_01
get autosupport
quit
 

Once you’ve downloaded the autosupport file, you can run scripts against it to pull out only the info you’re interested in.  I prefer to use Unix shell scripts for parsing files because of the extra functionality over standard Windows batch scripts.  I have Cygwin installed on my web server to run these shell scripts.

If you take a look at the autosupport log, you’ll notice that it’s very long and contains some duplicate characters in multiple areas, which makes using grep, awk, and sed a bit more challenging.  It took a bit of trial and error, but the scripts below will pull only the specific areas I wanted:  System Alerts, File system compression, Locked files, Replication info, File distribution, and Disk space information.

The first script below will gather disk space information using grep.  Spacing inside the quotes is very important in this case, as I was looking for specific unique strings within the autosupport log, and using grep for these unique strings will produce the correct output.  In this example, I am gathering info for four Data Domain boxes. For those unfamiliar with Cygwin, you can access Windows drive letters by navigating to /cygdrive/<drive letter>.  To view the root of the C drive, you would type ‘cd /cygdrive/c’ from the CLI.  In this case, my Windows batch script that pulls the autosupport files places them in the e:\reports\dd folder, which is /cygdrive/e/reports/dd when using the Cygwin shell.

Here is the script:

# Site 1 Data Domain 01
cat /cygdrive/e/reports/dd/SITE1_DD_01/autosupport | grep "=  SERVE" > /cygdrive/e/reports/dd/SITE1_DD_01/SITE1_DD_01_diskspace.txt
cat /cygdrive/e/reports/dd/SITE1_DD_01/autosupport | grep "Active Tier:" >> /cygdrive/e/reports/dd/SITE1_DD_01/SITE1_DD_01_diskspace.txt
cat /cygdrive/e/reports/dd/SITE1_DD_01/autosupport | grep "Resource           Size" >> /cygdrive/e/reports/dd/SITE1_DD_01/SITE1_DD_01_diskspace.txt
cat /cygdrive/e/reports/dd/SITE1_DD_01/autosupport | grep "/data:" >> /cygdrive/e/reports/dd/SITE1_DD_01/SITE1_DD_01_diskspace.txt
cat /cygdrive/e/reports/dd/SITE1_DD_01/autosupport | grep "/ddvar     " >> /cygdrive/e/reports/dd/SITE1_DD_01/SITE1_DD_01_diskspace.txt

# Site 1 Data Domain 02
cat /cygdrive/e/reports/dd/SITE1_DD_02/autosupport | grep "=  SERVE" > /cygdrive/e/reports/dd/SITE1_DD_02/SITE1_DD_02_diskspace.txt
cat /cygdrive/e/reports/dd/SITE1_DD_02/autosupport | grep "Active Tier:" >> /cygdrive/e/reports/dd/SITE1_DD_02/SITE1_DD_02_diskspace.txt
cat /cygdrive/e/reports/dd/SITE1_DD_02/autosupport | grep "Resource           Size" >> /cygdrive/e/reports/dd/SITE1_DD_02/SITE1_DD_02_diskspace.txt
cat /cygdrive/e/reports/dd/SITE1_DD_02/autosupport | grep "/data:" >> /cygdrive/e/reports/dd/SITE1_DD_02/SITE1_DD_02_diskspace.txt
cat /cygdrive/e/reports/dd/SITE1_DD_02/autosupport | grep "/ddvar     " >> /cygdrive/e/reports/dd/SITE1_DD_02/SITE1_DD_02_diskspace.txt

# Site 2 Data Domain 01
cat /cygdrive/e/reports/dd/Site2_DD_01/autosupport | grep "=  SERVE" > /cygdrive/e/reports/dd/Site2_DD_01/Site2_DD_01_diskspace.txt
cat /cygdrive/e/reports/dd/Site2_DD_01/autosupport | grep "Active Tier:" >> /cygdrive/e/reports/dd/Site2_DD_01/Site2_DD_01_diskspace.txt
cat /cygdrive/e/reports/dd/Site2_DD_01/autosupport | grep "Resource           Size" >> /cygdrive/e/reports/dd/Site2_DD_01/Site2_DD_01_diskspace.txt
cat /cygdrive/e/reports/dd/Site2_DD_01/autosupport | grep "/data:" >> /cygdrive/e/reports/dd/Site2_DD_01/Site2_DD_01_diskspace.txt
cat /cygdrive/e/reports/dd/Site2_DD_01/autosupport | grep "/ddvar     " >> /cygdrive/e/reports/dd/Site2_DD_01/Site2_DD_01_diskspace.txt

# Site 2 Data Domain 02
cat /cygdrive/e/reports/dd/Site2_DD_02/autosupport | grep "=  SERVE" > /cygdrive/e/reports/dd/Site2_DD_02/Site2_DD_02_diskspace.txt
cat /cygdrive/e/reports/dd/Site2_DD_02/autosupport | grep "Active Tier:" >> /cygdrive/e/reports/dd/Site2_DD_02/Site2_DD_02_diskspace.txt
cat /cygdrive/e/reports/dd/Site2_DD_02/autosupport | grep "Resource           Size" >> /cygdrive/e/reports/dd/Site2_DD_02/Site2_DD_02_diskspace.txt
cat /cygdrive/e/reports/dd/Site2_DD_02/autosupport | grep "/data:" >> /cygdrive/e/reports/dd/Site2_DD_02/Site2_DD_02_diskspace.txt
cat /cygdrive/e/reports/dd/Site2_DD_02/autosupport | grep "/ddvar     " >> /cygdrive/e/reports/dd/Site2_DD_02/Site2_DD_02_diskspace.txt

# Copy the reports to the web server directory.
cp /cygdrive/e/reports/dd/SITE1_DD_01/SITE1_DD_01_diskspace.txt /cygdrive/c/inetpub/wwwroot/
cp /cygdrive/e/reports/dd/SITE1_DD_02/SITE1_DD_02_diskspace.txt /cygdrive/c/inetpub/wwwroot/
cp /cygdrive/e/reports/dd/Site2_DD_01/Site2_DD_01_diskspace.txt /cygdrive/c/inetpub/wwwroot/
cp /cygdrive/e/reports/dd/Site2_DD_02/Site2_DD_02_diskspace.txt /cygdrive/c/inetpub/wwwroot/
 

For the remaining reports, I used the ‘sed’ command rather than ‘grep’.  As an example, I’ll explain how I stripped out the information for the System Alerts section.  In order to strip it out, I use sed to start cutting text when it reaches ‘Current Alerts’ and stop cutting text when it reaches ‘There are’.   This same process is repeated to pull the information for the other reports as well (alerts, compression, locked files, replication, distribution).

This is what the Current Alerts section looks like in the autosupport log:

Current Alerts
--------------
Alert Id   Time                       Severity   Class               Object          Message                                                             
--------   ------------------------   --------   -----------------   -------------   ---------------------------------------------------------------------
774        Wed Dec 19 11:32:19 2012   CRITICAL   SystemMaintenance                   Core dump capability is now disabled due to lack of space in /ddvar.
807        Sun Jan 20 09:07:18 2013   WARNING    Replication         context=3       repl ctx 3: Sync-as-of time is more than 461 hours ago.             
808        Sun Jan 20 09:07:18 2013   WARNING    Replication         context=4       repl ctx 4: Sync-as-of time is more than 461 hours ago.             
809        Sun Jan 20 09:07:19 2013   WARNING    Replication         context=5       repl ctx 5: Sync-as-of time is more than 461 hours ago.              
814        Mon Jan 28 23:20:45 2013   CRITICAL   Filesystem          FilesysType=2   Space usage in Data Collection has exceeded 95% threshold.          
820        Wed Jan 30 15:52:40 2013   WARNING    Replication         context=2       repl ctx 2: Sync-as-of time is more than 148 hours ago.             
824        Mon Feb  4 12:36:39 2013   WARNING    Replication         context=10      repl ctx 10: Sync-as-of time is more than 24 hours ago.             
825        Wed Feb  6 02:13:51 2013   WARNING    Replication         context=19      repl ctx 19: Sync-as-of time is more than 24 hours ago.             
--------   ------------------------   --------   -----------------   -------------   ---------------------------------------------------------------------
There are 8 active alerts.
 

In this case, sed searches for ‘Current Alerts’ and copies all of the text until it reaches the string ‘There are’, at which point it stops and outputs the file.  The output files are written directly to the wwwroot folder on my internal reporting web page.  Each page encapsulates the output files in an iframe, so the web page is automatically updated every morning when the script runs.

Here is the second script that captures the remaining data:

#For Alerts
sed -n '/Current Alerts/,/There are/p' /cygdrive/e/reports/dd/SITE1_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_01_alerts.txt
sed -n '/Current Alerts/,/There are/p' /cygdrive/e/reports/dd/SITE1_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_02_alerts.txt
sed -n '/Current Alerts/,/There are/p' /cygdrive/e/reports/dd/Site2_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_01_alerts.txt
sed -n '/Current Alerts/,/There are/p' /cygdrive/e/reports/dd/Site2_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_02_alerts.txt

#For Filesystem Compression
sed -n '/Filesys Compression/,/((Pre-/p' /cygdrive/e/reports/dd/SITE1_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_01_filecomp.txt
sed -n '/Filesys Compression/,/((Pre-/p' /cygdrive/e/reports/dd/SITE1_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_02_filecomp.txt
sed -n '/Filesys Compression/,/((Pre-/p' /cygdrive/e/reports/dd/Site2_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_01_filecomp.txt
sed -n '/Filesys Compression/,/((Pre-/p' /cygdrive/e/reports/dd/Site2_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_02_filecomp.txt

#For Locked Files:
sed -n '/Locked files/,/Active/p' /cygdrive/e/reports/dd/SITE1_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_01_lockedfiles.txt
sed -n '/Locked files/,/Active/p' /cygdrive/e/reports/dd/SITE1_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_02_lockedfiles.txt
sed -n '/Locked files/,/Active/p' /cygdrive/e/reports/dd/Site2_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_01_lockedfiles.txt
sed -n '/Locked files/,/Active/p' /cygdrive/e/reports/dd/Site2_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_02_lockedfiles.txt

#Repl Transferred over 24 hrs
sed -n '/Replication Data Transferred/,/(sum)/p' /cygdrive/e/reports/dd/SITE1_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_01_repl.txt
sed -n '/Replication Data Transferred/,/(sum)/p' /cygdrive/e/reports/dd/SITE1_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_02_repl.txt
sed -n '/Replication Data Transferred/,/(sum)/p' /cygdrive/e/reports/dd/Site2_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_01_repl.txt
sed -n '/Replication Data Transferred/,/(sum)/p' /cygdrive/e/reports/dd/Site2_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_02_repl.txt

#File Distribution
sed -n '/File Distribution/,/> 500 GiB/p' /cygdrive/e/reports/dd/SITE1_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_01_filedist.txt
sed -n '/File Distribution/,/> 500 GiB/p' /cygdrive/e/reports/dd/SITE1_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/SITE1_DD_02_filedist.txt
sed -n '/File Distribution/,/> 500 GiB/p' /cygdrive/e/reports/dd/Site2_DD_01/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_01_filedist.txt
sed -n '/File Distribution/,/> 500 GiB/p' /cygdrive/e/reports/dd/Site2_DD_02/autosupport > /cygdrive/c/inetpub/wwwroot/Site2_DD_02_filedist.txt
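Both the Windows batch script that pulls the autosupport files and these Cygwin parsing scripts need to run on a schedule for the report page to stay current.  Here is a hedged example using the Windows Task Scheduler CLI; the task names, script paths, and run times below are placeholders:

schtasks /create /tn "DD Pull Autosupport" /tr "e:\reports\dd\pull_autosupport.bat" /sc daily /st 05:30
schtasks /create /tn "DD Parse Autosupport" /tr "C:\cygwin\bin\bash.exe -l -c /cygdrive/e/reports/dd/parse_autosupport.sh" /sc daily /st 06:00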
 

When all is said and done, the report output looks like this:

Alerts:

Current Alerts
--------------
Alert Id   Time                       Severity   Class         Object          Message                                                   
--------   ------------------------   --------   -----------   -------------   ----------------------------------------------------------
1157       Wed Feb  5 12:23:19 2014   WARNING    Replication   context=3       Sync-as-of time is more than 24 hours ago.                
1161       Sun Feb  9 08:33:48 2014   CRITICAL   Filesystem    FilesysType=2   Space usage in Data Collection has exceeded 95% threshold.
--------   ------------------------   --------   -----------   -------------   ----------------------------------------------------------
There are 2 active alerts.

Filesystem Compression:

Filesys Compression
--------------                  
From: 2014-03-06 05:00 To: 2014-03-13 06:00

                  Pre-Comp   Post-Comp   Global-Comp   Local-Comp      Total-Comp
                     (GiB)       (GiB)        Factor       Factor          Factor
                                                                    (Reduction %)
---------------   --------   ---------   -----------   ----------   -------------
Currently Used:   495122.5    122948.5             -            -     4.0x (75.2)
Written:*                                                                        
  Last 7 days       9204.2      5656.6          1.1x         1.5x     1.6x (38.5)
  Last 24 hrs         74.1        36.4          1.0x         2.0x     2.0x (50.9)
---------------   --------   ---------   -----------   ----------   -------------
 * Does not include the effects of pre-comp file deletes/truncates
   since the last cleaning on 2014/03/11 02:07:50.

Replications:

Replication Data Transferred over 24hr
--------------------------------------
Directory/MTree Replication:
Date         Time         CTX   Pre-Comp (KB)   Pre-Comp (KB)           Replicated (KB)   Low-bw-   Sync-as-of      
                                      Written       Remaining    Pre-Comp       Network     optim   Time            
----------   --------   -----   -------------   -------------   -----------------------   -------   ----------------
2014/03/13   06:36:18       2     104,186,675      86,628,349   55,522,200   37,756,047      1.00   Thu Mar 13 05:53
                            3               0               0            0            0      0.00   Tue Feb  4 12:00
                            4               0               0            0        4,882      0.00   Thu Mar 13 06:00
                            8               0               0            0        5,120      0.00   Thu Mar 13 06:00
                           29               0               0            0        5,098      0.00   Thu Mar 13 06:00
                        (sum)     104,186,675                   55,522,200   37,771,148      1.00  

File Distribution:

File Distribution
-----------------
169,432 files in 8,691 directories

                          Count                         Space
               -----------------------------   --------------------------
         Age         Files       %    cumul%        GiB       %    cumul%
   ---------   -----------   -----   -------   --------   -----   -------
       1 day            13     0.0       0.0       74.4     0.0       0.0
      1 week           223     0.1       0.1     2133.7     0.4       0.4
     2 weeks         5,103     3.0       3.2    39121.2     7.8       8.3
     1 month        12,819     7.6      10.7    79336.5    15.9      24.1
    2 months        21,853    12.9      23.6   153873.8    30.8      54.9
    3 months         5,743     3.4      27.0    45154.6     9.0      64.0
    6 months         3,402     2.0      29.0    13674.3     2.7      66.7
      1 year        17,035    10.1      39.1    72937.7    14.6      81.3
    > 1 year       103,241    60.9     100.0    93461.7    18.7     100.0
   ---------   -----------   -----   -------   --------   -----   -------

                          Count                         Space
               -----------------------------   --------------------------
        Size         Files       %    cumul%        GiB       %    cumul%
   ---------   -----------   -----   -------   --------   -----   -------
       1 KiB        11,257     6.6       6.6        0.0     0.0       0.0
      10 KiB        44,396    26.2      32.8        0.3     0.0       0.0
     100 KiB        38,488    22.7      55.6        1.3     0.0       0.0
     500 KiB         9,652     5.7      61.3        2.1     0.0       0.0
       1 MiB        27,460    16.2      77.5       70.4     0.0       0.0
       5 MiB        12,136     7.2      84.6       27.3     0.0       0.0
      10 MiB         3,861     2.3      86.9       26.9     0.0       0.0
      50 MiB         4,367     2.6      89.5       96.6     0.0       0.0
     100 MiB           853     0.5      90.0       58.2     0.0       0.1
     500 MiB           861     0.5      90.5      201.0     0.0       0.1
       1 GiB           495     0.3      90.8      309.7     0.1       0.2
       5 GiB           567     0.3      91.1     1460.2     0.3       0.5
      10 GiB           336     0.2      91.3     2574.1     0.5       1.0
      50 GiB        14,691     8.7     100.0   494083.5    98.9      99.8
     100 GiB            11     0.0     100.0      683.3     0.1     100.0
     500 GiB             1     0.0     100.0      173.0     0.0     100.0
   > 500 GiB             0     0.0     100.0        0.0     0.0     100.0

Disk Space:

==========  SERVER USAGE   ==========
Active Tier:
Resource           Size GiB   Used GiB   Avail GiB   Use%   Cleanable GiB
/data: pre-comp           -   245211.8           -      -               -
/data: post-comp    51484.9    31811.0     19673.9    62%            35.4
/ddvar                 78.7        7.9        66.8    11%               -

Comparing Dot Hill and EMC Auto Tiering

The problem with auto tiering in general is that a lot of hot I/O is “hot” only for a brief moment in time. Workloads are not uniform across an entire data set at all times, and if the data isn’t moved in real time it’s very likely that hot data will be accessed from capacity storage.  Ideally a storage system would be able to react to these shifts in real time, but the cost in overhead to the storage processors is generally too great.  The problem is somewhat mitigated by having a large cache (like EMC’s FAST Cache), but the ability to automatically tier data in real time would be ideal.

I recently compared Dot Hill’s auto tiering strategy to EMC’s and found that Dot Hill takes a unique approach that looks promising.  Here’s a brief comparison between the two.

EMC

EMC greatly improved FAST VP on the VNX2.   While the VNX uses 1GB data slices, the VNX2 uses more granular 256MB data slices, which greatly improves efficiency.  EMC is also shipping MLC SSD instead of SLC SSD, which makes using SSDs much more affordable in FAST VP pools (note that SLC is still required for FAST Cache).

How does the smaller data slice size improve efficiency?  As an example, assume that a 500MB contiguous range of hot data residing on a SAS tier needs to be moved to EFD. On the VNX1, an entire 1GB slice would be moved, so for every 1GB landing on the EFD tier only 500MB of it is actually hot.  This is obviously inefficient, as 50% of the relocated data is cold.  With the VNX2’s 256MB data slice size, only 512MB (two 256MB slices) would be moved, resulting in nearly 100% efficiency for that block of data.  The VNX2 makes much more efficient use of the extreme performance and performance tiers.

EMC’s FAST VP auto tiering on the VNX1 is configured either manually or on a schedule.  The schedule can be set to run 24 hours a day to move data in near real time, but in practice that isn’t feasible.  The overhead on the storage processors is simply too great, so we’ve configured it to run during off-peak hours.  On our busiest VNX1 we see the storage processors jump from ~50% utilization to ~75% utilization when the relocation job is running.  This may improve with the VNX2, but it’s been a problem on the CX series and the VNX1.

[Diagram: VNX data slice explanation]

DotHill

According to Dot Hill, their auto tiering doesn’t look at every single IO like the VNX or VNX2; it looks for trends in how data is accessed.  Their rep told me to think of it as examining every 10th IO rather than every single IO.  The idea is to allow the array to move data in real time without overloading the storage processors.  Dot Hill also moves data in 4MB data slices (which is very efficient, as I explained earlier when discussing the VNX2), and will not move more than 80MB in a 5 second time span (or 960MB/minute maximum) to keep the CPU load down.

So, how does the Dot Hill auto tiering actually work?  They use scoring, scanning, and sorting, each a separate process that works in real time.  Scoring is used to maintain a current page ranking on every I/O using a process that adds less than one microsecond of overhead.  The algorithm looks at how frequently and how recently the data was accessed; higher scores are given to data that is more frequently and recently accessed. Scanning for the high-scoring pages happens every 5 seconds, and the scanning process uses less than 1.0% of the CPU. The pages with the highest scores become candidates for promotion to SSD.  Sorting is the process that actually moves or migrates the pages up or down based on their score.  As I mentioned earlier, no more than 80MB of data is moved during any 5 second sort to minimize the overall performance impact.

Summary

I haven’t used EMC’s new VNX2 or Dot Hill’s AssuredSAN myself, so I can’t offer any real-world experience with either.  I think Dot Hill’s implementation looks very promising on paper, and I look forward to reading more customer experiences about it in the future.  They’ve been around a long time, but they only recently started offering their products directly to customers as they’ve primarily been an OEM storage manufacturer.  As I mentioned earlier, my experience with EMC’s FAST VP on the CX series and VNX1 shows that real-time FAST VP consumes too many CPU cycles to be practical; we have always run it as an off-business-hours process.  That’s exactly what Dot Hill’s implementation is trying to address.  We’ve made adjustments to the FAST VP relocation schedule based on monitoring our workload.  We also use FAST Cache, which at least partially solves the problem of suddenly hot data needing extra IO.  FAST Cache and FAST VP work very well together.  Overall I’ve been happy with EMC’s implementation, but it’s good to see another company taking a different approach that could be very competitive with EMC.

You can read more about Dot Hill’s auto tiering here:

http://www.dothill.com/wp-content/uploads/2012/08/RealStorWhitePaper8.14.12.pdf

You can read more about EMC’s VNX1 FAST VP here:

https://www.emc.com/collateral/software/white-papers/h8058-fast-vp-unified-storage-wp.pdf

You can read more about EMC’s VNX2 FAST VP here:

https://www.emc.com/collateral/white-papers/h12208-vnx-multicore-fast-cache-wp.pdf
