Is there a field anywhere for overall Cluster Datastore Capacity?

I'm trying to use linear bar charts in a widget to display overall capacity/used/free, but I cannot find the right object to query for it.  I can find DataStoreUsedSpace in Orion.VIM.Clusters but no corresponding total datastore size. Weirdly, it has that for CPU and memory but nothing for datastore capacity.

Any ideas?

  • You can use a navigation property to get to the current datastore statistics.

    SELECT [vClusters].ClusterID
          , [vClusters].CortexID
          , [vClusters].ManagedObjectID
          , [vClusters].DataCenterID
          , [vClusters].Name
          , [vClusters].TotalMemory
          , [vClusters].TotalCpu
          , [vClusters].CpuCoreCount
          , [vClusters].CpuThreadCount
          , [vClusters].EffectiveCpu
          , [vClusters].EffectiveMemory
          , [vClusters].DrsBehaviour
          , [vClusters].DrsStatus
          , [vClusters].DrsVmotionRate
          , [vClusters].HaAdmissionControlStatus
          , [vClusters].HaStatus
          , [vClusters].HaFailoverLevel
          , [vClusters].ConfigStatus
          , [vClusters].OverallStatus
          , [vClusters].CPULoad
          , [vClusters].CPUUsageMHz
          , [vClusters].MemoryUsage
          , [vClusters].MemoryUsageMB
          , [vClusters].TriggeredAlarmDescription
          , [vClusters].OrionIdPrefix
          , [vClusters].OrionIdColumn
          , [vClusters].PlatformID
          , [vClusters].VmCapacityCount
          , [vClusters].VmCapacityConstraint
          , [vClusters].DiskUtilizationDepletionDate
          , [vClusters].DatastoreUsedSpace
          , [vClusters].CpuUtilizationDepletionDate
          , [vClusters].MemoryUtilizationDepletionDate
          , [vClusters].DateCreated
          , [vClusters].DetailsUrl
          , [vClusters].PollingSource
          , [vClusters].VsanEnabled
          , [vClusters].VsanUuid
          , [vClusters].HostStatus
          , [vClusters].DataStores.Capacity
    --                     ^          ^
    --                     |          |
    --                     +- Navigation Property Name (Orion.VIM.Datastores)
    --                                |     
    --                                +- Property Name within Orion.VIM.Datastores
          , [vClusters].DataStores.FreeSpace
          , [vClusters].DataStores.SpaceUtilization
    FROM Orion.VIM.Clusters AS [vClusters]
    

    P.S. - you probably don't need all those fields; I just started from the default query and added to it.

  • I do have a "new" problem, and that's trying to figure out what to divide by to get a number that matches the CPU/mem/storage widget within vCenter itself.

    Actual capacity should read 5.6 TB, but no matter what I try I get a different number that doesn't match.

    Any ideas?

    SELECT [ClusterID]
          , [vClusters].Name
          , MAX([vClusters].DataStores.Capacity) AS [DiskCap]
          , MAX([vClusters].DataStores.FreeSpace) AS [FreeCap]
          , MAX([vClusters].DataStores.SpaceUtilization) AS [SpaceUsed]
    FROM Orion.VIM.Clusters AS [vClusters]
    GROUP BY [ClusterID], Name
    

  • Can you give me the output of what you are seeing here?  My guess is that the data is stored in bits or bytes, so you'll have to do a bunch of /1024 to get it to TB.

  • Output for one row

    raw:

    ClusterID | Name        | DiskCap       | FreeCap       | Space Used %
    16        | BDR Cluster | 3298266447872 | 2173120282624 | 40.65209

    with division to GB using /1073741824

    ClusterID | Name        | DiskCap | FreeCap | Space Used %
    16        | BDR Cluster | 3071    | 2023    | 40.65209

    vCenter itself reports 5.6 TB total volume capacity.

    Tried /1000000000 too
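A quick sanity check of those conversions in plain Python, using binary divisors and the raw values from the table above:

```python
# Raw byte values copied from the query output above
disk_cap = 3298266447872
free_cap = 2173120282624

GB = 1024 ** 3   # 1073741824
TB = 1024 ** 4   # 1099511627776

print(disk_cap // GB)           # 3071 -- matches the GB row above
print(free_cap // GB)           # 2023 -- likewise
print(round(disk_cap / TB, 1))  # 3.0  -- still well short of vCenter's 5.6 TB
```

So the division itself checks out; one thing worth double-checking is that the query above uses MAX(), which returns the single largest datastore rather than a total across all of them, which may account for part of the gap against vCenter's 5.6 TB.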

  • Nope - this is "proper" bytes.  I need to jump up on a soapbox for a hot moment and we'll get into some math things, so please allow me to explain.

    • 1000 (10^3) bytes = 1 kb (decimal conversion / lowercase kb) <-- 'wrong'
    • 1024 (2^10) bytes = 1 KB (binary conversion / uppercase KB) <-- 'correct'

    SolarWinds stores all (or nearly all) space values as bytes and lets the web interface do the conversion (there's even a conversion helper built into the web-based report writer for this scenario).  We do this for a number of reasons, but the big one is so that you have an apples-to-apples comparison between vendors.

    • 1 TB = 1024 GB
    • 1 GB = 1024 MB
    • 1 MB = 1024 KB
    • 1 KB = 1024 bytes

    So, you'll need to divide your storage numbers by 1099511627776 (= 1024 * 1024 * 1024 * 1024)

    This is something we see often, so I had the framework of the query handy.

    SELECT  [vClusters].Name
          , [vClusters].DetailsUrl
    ----------------------------------------------------------------
    --                        Bytes Math          
    --     1024 * 1024 * 1024 * 1024 = 1099511627776 bytes = 1 TB
    --     1024 * 1024 * 1024        = 1073741824 bytes    = 1 GB
    --     1024 * 1024               = 1048576 bytes       = 1 MB
    --     1024 bytes                                      = 1 KB
    ----------------------------------------------------------------
          , CASE
              WHEN SUM( 1.0 * [vClusters].DataStores.Capacity) > 1099511627776.0 THEN ROUND(SUM( 1.0 * [vClusters].DataStores.Capacity) / 1099511627776.0, 1)
              WHEN SUM( 1.0 * [vClusters].DataStores.Capacity) > 1073741824.0 THEN ROUND(SUM( 1.0 * [vClusters].DataStores.Capacity) / 1073741824.0, 1)
              WHEN SUM( 1.0 * [vClusters].DataStores.Capacity) > 1048576.0 THEN ROUND(SUM( 1.0 * [vClusters].DataStores.Capacity) / 1048576.0, 1)
              WHEN SUM( 1.0 * [vClusters].DataStores.Capacity) > 1024.0 THEN ROUND(SUM( 1.0 * [vClusters].DataStores.Capacity) / 1024.0, 1)
              ELSE SUM( 1.0 * [vClusters].DataStores.Capacity)
            END AS [Capacity]
          , CASE
              WHEN SUM( 1.0 * [vClusters].DataStores.Capacity) > 1099511627776.0 THEN 'TB'
              WHEN SUM( 1.0 * [vClusters].DataStores.Capacity) > 1073741824.0 THEN 'GB'
              WHEN SUM( 1.0 * [vClusters].DataStores.Capacity) > 1048576.0 THEN 'MB'
              WHEN SUM( 1.0 * [vClusters].DataStores.Capacity) > 1024.0 THEN 'KB'
              ELSE 'Bytes'
            END AS [CapacityUnits]
          , CASE
              WHEN SUM( 1.0 * [vClusters].DataStores.FreeSpace) > 1099511627776.0 THEN ROUND(SUM( 1.0 * [vClusters].DataStores.FreeSpace) / 1099511627776.0, 1)
              WHEN SUM( 1.0 * [vClusters].DataStores.FreeSpace) > 1073741824.0 THEN ROUND(SUM( 1.0 * [vClusters].DataStores.FreeSpace) / 1073741824.0, 1)
              WHEN SUM( 1.0 * [vClusters].DataStores.FreeSpace) > 1048576.0 THEN ROUND(SUM( 1.0 * [vClusters].DataStores.FreeSpace) / 1048576.0, 1)
              WHEN SUM( 1.0 * [vClusters].DataStores.FreeSpace) > 1024.0 THEN ROUND(SUM( 1.0 * [vClusters].DataStores.FreeSpace) / 1024.0, 1)
              ELSE SUM( 1.0 * [vClusters].DataStores.FreeSpace)
            END AS [FreeSpace]
          , CASE
              WHEN SUM( 1.0 * [vClusters].DataStores.FreeSpace) > 1099511627776.0 THEN 'TB'
              WHEN SUM( 1.0 * [vClusters].DataStores.FreeSpace) > 1073741824.0 THEN 'GB'
              WHEN SUM( 1.0 * [vClusters].DataStores.FreeSpace) > 1048576.0 THEN 'MB'
              WHEN SUM( 1.0 * [vClusters].DataStores.FreeSpace) > 1024.0 THEN 'KB'
              ELSE 'Bytes'
            END AS [FreeSpaceUnits]
          , ROUND(100.0 * ( ( SUM( 1.0 * [vClusters].DataStores.FreeSpace) / SUM( [vClusters].DataStores.Capacity) ) ), 1) AS [Percent Free]
          , ROUND(100.0 * ( 1.0 - ( SUM( 1.0 * [vClusters].DataStores.FreeSpace) / SUM( [vClusters].DataStores.Capacity) ) ), 1) AS [Percent Used]
          , ROUND(AVG([vClusters].DataStores.IOPSRead), 1) AS [AvgReadIops]
          , ROUND(AVG([vClusters].DataStores.IOPSWrite), 1) AS [AvgWriteIops]
          , ROUND(AVG([vClusters].DataStores.IOPSTotal), 1) AS [AvgTotalIops]
          , ROUND(AVG([vClusters].DataStores.LatencyRead), 1) AS [AvgReadLatency]
          , ROUND(AVG([vClusters].DataStores.LatencyWrite), 1) AS [AvgWriteLatency]
          , ROUND(AVG([vClusters].DataStores.LatencyTotal), 1) AS [AvgTotalLatency]
    
    FROM Orion.VIM.Clusters AS [vClusters]
    WHERE [vClusters].Hosts.VMwareProductName LIKE 'VMware%'
    GROUP BY [vClusters].Name, [vClusters].DetailsUrl
    
    

    You'll also notice that I'm multiplying a few things by 1.0.  This is intentional.  Bytes don't have a fractional unit (there are no decimal bytes), so the data is stored as an integer.  In SWQL queries, we cannot CAST or CONVERT values, but if you multiply an integer by 1.0, you get the same numeric value converted to a decimal data type.  If we skipped this, there would be some catastrophic rounding and your data wouldn't match up.
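To see why that 1.0 multiplier matters, here's the same pitfall in miniature in Python, where // stands in for the integer division that would otherwise occur (values borrowed from the earlier output):

```python
free = 2173120282624   # bytes, stored as an integer
cap  = 3298266447872

# Integer division truncates the fraction away entirely
print(free // cap)                  # 0

# Promoting to a decimal first (the "* 1.0" trick) keeps it
print(round(1.0 * free / cap, 4))   # 0.6589
```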

    Side Note: I have a tendency to overdo the multiply-by-1.0 thing, but it doesn't hurt the processing time, so I'm loath to remove it when it works.
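The CASE ladder above boils down to "pick the largest binary unit the value exceeds." As a hypothetical Python equivalent (not any SolarWinds API, just the same logic for sanity-checking the query output):

```python
def smart_size(num_bytes):
    """Mirror the SWQL CASE ladder: largest binary unit exceeded, rounded to one place."""
    for factor, unit in ((1024 ** 4, 'TB'), (1024 ** 3, 'GB'),
                         (1024 ** 2, 'MB'), (1024, 'KB')):
        if num_bytes > factor:
            return round(num_bytes / factor, 1), unit
    return num_bytes, 'Bytes'

print(smart_size(3298266447872))  # (3.0, 'TB')
print(smart_size(512))            # (512, 'Bytes')
```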

  • Seems to have almost worked. A couple of capacities are doubled and one is quadrupled from where it should be (the cluster reports 9 TB in vCenter; Orion reports 32836.5 TB).

    Debugging stuff like this is such a time suck when I'm not remotely a dev or DevOps person, but it's information management expects Orion to output.

    It seems that it's bytes for the most part. Some of VIM does use KB, because the VMware API outputs KB at minimum; however, there is sadly zero logic within Modern Dashboards for the web interface to adjust, nor in the modern Performance Analysis widget. I've got tickets open on specific issues with the latter that were closed as "devs aware, functioning as expected," even though my gigabit perf graph shows bps only.

  • Sounds like adding a bytes-to-smart-sizing conversion needs to be made into a feature request for the Orion Platform's Modern Dashboards so people can vote on it.

    If this request were given to me, I would have just stuck with a % used or free in a KPI widget and then provided a table widget with all the attached disks.  But that's me, and probably not what your team wants/needs.

    What platform version are you running?  I feel like the "smart sizing conversion" was added in an update to the PerfStack Analysis tool.  I might be mistaken, but given that there are several thousand possible data points, I can see how a few might have been missed.

  • We're installing the latest hotfix tomorrow, but otherwise we're current across the board. There have been a couple of missed spots in PerfStack, but each patch does seem to close those up. I'm still absolutely stymied as to why the legacy Manage Entities tools STILL haven't been added to the Manage Nodes page.

    Then again, the code base keeps getting leaner on legacy code, so we'll get there. I'm definitely adding the smart sizing conversion for Modern Dashboards to the feature requests as soon as my list of work stops growing.

    Biggest pain of making a few good dashboards is management always wants "one more view". :D

    Thanks immensely for the help on this. I had no idea you could get that complicated with incremental, stepped calculations within SWQL.

  • The good thing about PerfStacks and Modern Dashboards is that end users (not just admins) can create them themselves. :-) 
