i'd like to have this as well. i've been searching myself but have not found anything.
what a gem this would be - we found major issues recently but had to telnet into the switch to get this same info.
Shouldn't these items be part of the standard interface poller collection? These are critical metrics for interfaces, and in my opinion, I'd rather have these more detailed metrics than the default all-encompassing "Discards/Error" metric that each interface gets by default.
I feel that having to create custom pollers for these counters is strangely counter-intuitive to the Orion intent - getting all the info in one place - especially when these are the first metrics I'd look at via "show int" when troubleshooting...
I totally agree. NPM is great, but you have to customise so much of it yourself. For example, how many companies use Dell servers? I had to spend weeks building UnDPs to show attributes such as disk RAID level, disk health, temperatures, etc. Very basic stuff, but critical to enterprises. It's the same with Cisco/Nortel equipment: how many companies use this equipment? But again you are left to do it all yourself when you need to find out the important info that all engineers require. This is all basic stuff a network performance app should do.
I can also agree with this, but, on the other hand, SolarWinds does go a lot farther than some of its competitors.
One of the main reasons we upgraded from our previous NMS (I won't mention names, but it started with a "W" and ended with a "hatsup"...) was because it offered more out of the box than what I had. The previous software not only required me to manually update the MIBs, but I had to create the most basic of pollers to get things like Location.
I am pulling this request out of the closet and dusting off the cob-webs.
Now that we are at NPM V11.0 and there are all sorts of new features added, we still don't have pollers for detailed interface stats?
I am looking for a Cisco interface poller for FCS and Giants, and agree, these should be included out of the box.
Sometimes it's best to add the basic functions and features first, before getting fancy with L2/L3 topology and routing.
If your interface has problems, then the other pollers will not work or will not matter.
Here are the OIDs that should have been included as default (or optional) interface pollers.
Rcv-Err 184.108.40.206.220.127.116.11.1.14 ?????
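Until such pollers ship out of the box, one way to spot-check a counter like cieIfInGiantsErrs without telnetting into the switch is Net-SNMP's `snmpget`. A minimal Python sketch, where the host, community string, and OID are placeholders for your own environment:

```python
import re
import subprocess

def parse_counter(line: str) -> int:
    """Pull the numeric value out of a typical snmpget response line,
    e.g. '...::cieIfInGiantsErrs.10101 = Counter32: 5500000'."""
    m = re.search(r"Counter(?:32|64):\s*(\d+)", line)
    if m is None:
        raise ValueError("unexpected snmpget output: " + line.strip())
    return int(m.group(1))

def snmpget_counter(host: str, community: str, oid: str) -> int:
    """Fetch one counter via Net-SNMP's snmpget CLI and parse the value.

    Assumes snmpget is installed and the device allows SNMPv2c reads;
    all three arguments are placeholders, not real values.
    """
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", community, host, oid],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_counter(out)
```

Sampling the same OID twice and differencing gives you the same view a UnDP chart would, which is handy for verifying a custom poller before trusting its graph.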
I've added the Giants counter above to a couple of Cisco6509s with Ten-gig interfaces connecting to Netapp filers.
The filers are configured with MTU size of 9000, and the switch interface is configured with 9216.
Looking at the standard NPM interface Errors & Discards counters (chart) the interface appears clean.
Looking at cieIfInGiantsErrs counter (chart) for this interface, I am seeing 5.5M errors in a 15 minute interval.
Ergo, this is why these type of counters should be available "out of the box".
"What you don't know CAN hurt you".
Thank you sir. Great input- please keep it coming!
I've gone around the loop here several times about the MTU settings, and it's important to specify where the settings are being configured and what the effect actually is on the device.
The layer-2 interfaces should probably be set to 9216, but the layer-3 interfaces to 9000. It is not obvious on much equipment what you are actually configuring when you say MTU.
a) layer-2 maximum frame size
juniper: set interfaces ge-0/0/0 mtu 9216
5406zl: jumbo max-frame-size 9216
Cisco 76xx routers:
Router(config)# interface gigabitethernet 1/2
Router(config-if)# mtu 9216
b) layer-3 maximum packet size.
juniper: set interfaces ge-0/0/0.0 mtu 1500
juniper: set interfaces ge-0/0/0.0 mtu 9000
5406zl: jumbo ip-mtu 1500
5406zl: jumbo ip-mtu 9000
Router(config)# interface vlan200
Router(config-if)# ip mtu 1500 (or 9000)
Layer-3 MTU size MUST be smaller than the layer-2 MTU. On some systems, setting the layer-3 MTU implicitly sets the layer-2 MTU; on others, like Windows, setting the MTU in the layer-2 adapter driver settings sets it implicitly for layer-3.
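That relationship can be sanity-checked with a tiny sketch. The 18-byte overhead below (14-byte Ethernet header plus 4-byte FCS, untagged) is an assumption: vendors differ on whether their "MTU" figure includes layer-2 headers, and an 802.1Q tag adds another 4 bytes, so adjust the constant for your platform:

```python
ETH_OVERHEAD = 18  # assumed: 14-byte Ethernet II header + 4-byte FCS, no VLAN tag

def l3_fits_in_l2(l2_max_frame: int, l3_mtu: int,
                  overhead: int = ETH_OVERHEAD) -> bool:
    """True if a full-size layer-3 packet fits inside the configured
    layer-2 maximum frame size on this link."""
    return l3_mtu + overhead <= l2_max_frame
```

So a 9216-byte layer-2 frame size comfortably carries a 9000-byte IP MTU, and the classic 1518-byte frame carries exactly a 1500-byte IP MTU with nothing to spare.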
If a frame larger than the permitted maximum frame size is received on an ethernet interface the device (switch or router) MAY discard it.
If a frame larger than the permitted maximum frame size is to be transmitted on an ethernet interface, the device MUST discard it.
If an IP packet larger than the permitted maximum packet size is to be transmitted on an ethernet interface, the router MUST fragment it UNLESS the DF bit is set, in which case it discards the packet and sends an ICMP Fragmentation Needed notice back to the source.
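To make the fragmentation rule concrete, here is a small sketch (assuming plain IPv4 with a 20-byte header and no options) of how many fragments a router would emit when DF is clear:

```python
import math

def fragment_count(packet_len: int, ip_mtu: int, ihl: int = 20) -> int:
    """Number of IPv4 fragments needed to forward a packet over a link
    with the given IP MTU, DF bit clear.

    Each fragment repeats the IP header; all fragments except the last
    carry a payload rounded down to a multiple of 8 bytes, as required
    by the fragment-offset field.
    """
    payload = packet_len - ihl
    per_fragment = (ip_mtu - ihl) // 8 * 8  # usable payload per fragment
    return max(1, math.ceil(payload / per_fragment))
```

For example, a 9000-byte jumbo packet forced through a 1500-byte IP MTU splits into 7 fragments, which is exactly the kind of silent performance hit a storage network over mismatched MTUs will suffer.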
There are all sorts of subtle breakages possible, because network devices are not simply switches OR routers any more; they are some kind of hybrid.
I have verified that this counter - cieIfInGiantsErrs is not really an error in my case.
It displays the same data as:
"show counters interface <interface>"
"show interface <interface> counters errors"
What this counter shows is any frame exceeding 1518 bytes.
Since we have the MTU set to 9216, any jumbo frame will increment this counter, even if it is a valid frame below the MTU size set on the interface.
Back to the thread title: this is still worthwhile monitoring for any interfaces left at the default maximum frame size (1518 bytes), to highlight potential problems with oversize frames being received.
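The distinction above can be made explicit in a small sketch. Both helper names here are hypothetical (not an Orion or MIB API); they just encode the observed counter behaviour and when a "giant" actually signals trouble:

```python
DEFAULT_MAX_FRAME = 1518  # classic Ethernet II maximum frame, header + FCS

def counts_as_giant(frame_len: int, threshold: int = DEFAULT_MAX_FRAME) -> bool:
    """Mirrors the observed cieIfInGiantsErrs behaviour on this platform:
    any frame over 1518 bytes increments the counter, even a valid jumbo."""
    return frame_len > threshold

def giant_is_real_problem(frame_len: int, if_max_frame: int) -> bool:
    """A counted 'giant' only indicates a genuine error when the frame
    also exceeds what the interface is configured to carry."""
    return counts_as_giant(frame_len) and frame_len > if_max_frame
```

On a jumbo-enabled 9216-byte interface, a 9000-byte frame bumps the counter but is harmless; on a default-MTU interface the same frame is a real oversize-frame problem worth alerting on.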