
Large drives reporting incorrectly

I have large drives that are reporting incorrectly. Are there any suggestions on how best to handle this?

I would be interested in seeing the exact formula that is used to calculate disk space, including the data sources the data is collected from. In my specific case this problem is on Linux systems.

I am pretty sure that there are other threads on this but I was unable to find any that seemed constructive and specifically on just this topic (though it's possible I just missed them).

Any suggestions or help on this would be much appreciated.

Below is a screen shot so you can see what I am referring to...

  • Hi Byron,

    I have seen that Net-SNMP does have some issues handling TB readings.

You can try running snmpwalk against the server itself and look at the HOST-RESOURCES-MIB. You will notice that the /backup drive that is giving you trouble is reporting negative values.

I do not have a TB-sized hard disk in my test environment, but I would suggest giving the latest Net-SNMP 5.5 a try and hope that it works out fine.

    Link: http://www.net-snmp.org/download.html

Or if others have similar issues and are already on the latest Net-SNMP... I guess the Net-SNMP developers will have to provide a fix to rectify the overflow.

    HTH.

  • Hi Byron,

    I stole this from a previous post that I have made:

    The following is how we get the values that you are seeing:
    size = hrStorageSize x hrStorageAllocationUnits
    used space = hrStorageUsed x hrStorageAllocationUnits

    OIDs:
    hrStorageAllocationUnits = 1.3.6.1.2.1.25.2.3.1.4
    hrStorageSize = 1.3.6.1.2.1.25.2.3.1.5
    hrStorageUsed = 1.3.6.1.2.1.25.2.3.1.6
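    The formula above is easy to check by hand. Here is a minimal Python sketch that walks those three columns and applies the multiplication. It assumes Net-SNMP's `snmpwalk` is on the PATH and that the target accepts SNMPv2c with the "public" community; the host, community, and helper names are mine, for illustration only.

```python
import subprocess

# HOST-RESOURCES-MIB storage table columns (OIDs from the post above)
OID_ALLOC_UNITS = "1.3.6.1.2.1.25.2.3.1.4"  # hrStorageAllocationUnits
OID_SIZE = "1.3.6.1.2.1.25.2.3.1.5"         # hrStorageSize
OID_USED = "1.3.6.1.2.1.25.2.3.1.6"         # hrStorageUsed

def parse_walk(output):
    """Parse `snmpwalk -On` output into {row_index: integer_value}."""
    table = {}
    for line in output.splitlines():
        # Lines look like: ".1.3.6.1.2.1.25.2.3.1.5.31 = INTEGER: 123456"
        oid, sep, value = line.partition(" = ")
        if sep and ":" in value:
            table[oid.rsplit(".", 1)[-1]] = int(value.split(":")[-1])
    return table

def walk(host, oid, community="public"):
    out = subprocess.check_output(
        ["snmpwalk", "-v2c", "-c", community, "-On", host, oid],
        text=True)
    return parse_walk(out)

def storage_report(host):
    """Return {row_index: (size_bytes, used_bytes)} per storage entry."""
    units = walk(host, OID_ALLOC_UNITS)
    size = walk(host, OID_SIZE)
    used = walk(host, OID_USED)
    # size = hrStorageSize * hrStorageAllocationUnits, same for used space
    return {i: (size[i] * units[i], used[i] * units[i])
            for i in size if i in units and i in used}
```

    Comparing `storage_report`'s numbers against `df -B1` on the host will show immediately whether the agent itself is handing out wrapped values.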

  • I did run an snmpwalk against the drive OIDs in the HOST-RESOURCES-MIB, and the values I got were the same ones Orion is reporting, so this confirms a problem with Net-SNMP (as I was already fairly sure it was). However, the values it is reporting are not negative.

    When I get a chance I will have my Linux admin upgrade the system to Net-SNMP version 5.5 and see if that helps. I know a bug was reported against this Net-SNMP problem a while back, and unfortunately I don't see any mention of a fix in the release notes.

  • Also, since SolarWinds relies on Net-SNMP for Linux monitoring, I might suggest that SolarWinds try to put some pressure on the Net-SNMP group to resolve this issue... or even contribute some development cycles to the project to help get it resolved (I am pretty sure it's a community-supported project).

  • (I am pretty sure it's a community supported project).

    I 100% support this idea. (Why should we have to view wrong information in the first place?) This should be addressed properly; using a UnDP is just a workaround.

  • I am also facing the same problem. Did you check whether this is solved in Net-SNMP 5.5?

    Orion uses hrStorageSize to pull the size information and multiplies that by hrStorageAllocationUnits.

    Since hrStorageSize is an Integer32, there is a limit to the volume size it can represent, depending on the allocation unit size, which is probably the filesystem block size.

    In my case the allocation unit is 4096 bytes, so it can represent at most around 16 TB (only about 8 TB if the value is treated as signed), and nothing larger.

    I have a 532 TB volume that is shown as 3 TB in Orion, with usage shown as 13 TB :)
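    The truncation described above is easy to reproduce in isolation. Here is a small Python sketch of the arithmetic; the function name and constants are mine, for illustration only.

```python
# Simulate what happens when the agent stores the unit count in a
# 32-bit field: anything past 2**32 units silently wraps around.
ALLOC_UNIT = 4096  # bytes, the hrStorageAllocationUnits value seen above

def reported_bytes(true_bytes, alloc_unit=ALLOC_UNIT, bits=32):
    units = true_bytes // alloc_unit
    wrapped = units % (1 << bits)  # truncation to `bits` bits
    return wrapped * alloc_unit

TIB = 1 << 40
# With a 4096-byte unit the ceiling is 2**32 * 4096 bytes = 16 TiB,
# so a 17 TiB volume wraps all the way down to 1 TiB.
print(reported_bytes(17 * TIB) // TIB)  # prints 1
```

    If the agent instead treats the count as a signed Integer32, it wraps at 2**31 units rather than 2**32, which would explain the negative readings mentioned earlier in this thread.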


    Orion needs to provide some workaround for this; otherwise it is going to be useless for us.


    ///Mahendra

  • Did anyone find a solution to this problem? I have the same problem on two 17 TB partitions.

  • As per the RFC, the data size is a 32-bit integer, so Net-SNMP definitely cannot represent a larger number and truncates it to fit in 32 bits, which results in the wrong value.

    I think of 2 possible solutions -

    1. Orion could make the volume/storage OID configurable, so that we could implement a custom agent to expose large volumes (or all volumes). Orion wouldn't need to change anything else, and users would see all the volumes in the same resource they see today.

    OR

    2. Net-SNMP could depart from the size defined in the RFC and use 64 bits instead to represent storage. I am not sure whether Orion cares about the size defined in the RFC MIB.

    Isn't there any 64-bit version of the standard RFC MIBs?

    We are a content provider company and have partitions as big as 500 TB, so it is critical for us to monitor these disk partitions. I wrote multiple emails to Orion, but there is no clear answer; they are keeping quiet on this.

  • I am running net-snmp version 5.5 with volumes exceeding 100 TB in size. This is definitely still an issue.

  • Is there a solution to this issue? A 27 TB volume is showing up as 8.6 TB and as 150% full.