Would you be comfortable posting the output of "show int gi0/0/0"? What kinds of stats are on that port? Pay particular attention to dropped packets / discards.
If your NPM is monitoring that port's bandwidth, it might be revealing to see a screen shot of the min/max/average bits per second on that interface for the last seven days.
Here is the output of "show int gi0/0/0":
ROUTER1# show int gi0/0/0
GigabitEthernet0/0/0 is up, line protocol is up
Hardware is SPA-5X1GE-V2, address is XXXXXXXXXX
Description: HO North Circuit
MTU 1500 bytes, BW 200000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 29/255, rxload 170/255
Encapsulation 802.1Q Virtual LAN, Vlan ID 1., loopback not set
Keepalive not supported
Full Duplex, 1000Mbps, link type is force-up, media type is LX
output flow-control is on, input flow-control is on
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:11:33, output 00:02:59, output hang never
Last clearing of "show interface" counters never
Input queue: 0/375/1663/40 (size/max/drops/flushes); Total output drops: 4073943
Queueing strategy: Class-based queueing
Output queue: 0/40 (size/max)
30 second input rate 134088000 bits/sec, 15840 packets/sec
30 second output rate 23343000 bits/sec, 10070 packets/sec
361538482089 packets input, 365672263343050 bytes, 0 no buffer
Received 53242 broadcasts (0 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
247668469684 packets output, 83240338066273 bytes, 0 underruns
0 output errors, 0 collisions, 2 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
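A quick sanity check on the load figures above (assuming the configured BW of 200,000 Kbit/sec reflects the actual circuit rate rather than the 1 Gbps physical port): rxload 170/255 works out to 170/255 x 200 Mbit/s, or about 133 Mbit/s, which lines up with the 30-second input rate of ~134 Mbit/s. Likewise txload 29/255 x 200 Mbit/s is about 23 Mbit/s, matching the output rate. So the link is running at roughly two-thirds of its inbound capacity.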
<< Could it be your NTA traffic is being dropped or significantly delayed due to the pipe being filled for much of the time?
Yes, but if that were the case, I would not expect two of the three interfaces to always show current data while one lags hours, or even days, behind. I would expect NetFlow data for all three interfaces to be interrupted, not just one of the three NICs.
Thanks for all your posts and help! Much appreciated.
It IS an interesting situation.
The link is here: Troubleshooting Input Queue Drops and Output Queue Drops - Cisco. I was unable to get Thwack to accept my previous response, so I made a screen shot and pasted it in, but that eliminates the ability for you to click on the link.
What is the relationship of the three links to each other? Do they share communications to the same destination, such as in a port-channel?
A simple diagram of their connectivity might help explain the differences in NetFlow performance here.
Thanks for the link. I will take a closer look at those metrics.
If I had posted trouble with only one link, I would expect questions such as:
a) is the router's config sending netflow in the first place?
b) is a firewall blocking transmission from source to SolarWinds?
The purpose of including the other two links is that they are defined on the same router, in the same config, and their NetFlow data is current in SolarWinds. The only reason I included them was to establish "NetFlow connectivity" from the device to the SolarWinds NetFlow receiver.
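For anyone wanting to check question (a) on their own gear, the usual commands for classic v5 NetFlow look roughly like the following (treat this as a sketch; exact syntax varies by platform and IOS release):
ROUTER1# show running-config | include flow
ROUTER1# show ip flow export
ROUTER1# show ip cache flow
The first confirms the export and per-interface flow commands are in the config, the second shows the export destinations and counters (as posted further down in this thread), and the third confirms the flow cache is actually populating.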
Looking this morning, I see the same problem still exists: gi0/0/0 is not current and the other two are, yet NetFlow data has been received and processed by SolarWinds for gi0/0/0 since my original post.
LastTime             Node     Interface
2016-09-28 07:45:00  ROUTER1  GigabitEthernet0/0/1
2016-09-28 07:45:00  ROUTER1  GigabitEthernet0/0/0.2
2016-09-27 11:45:00  ROUTER1  GigabitEthernet0/0/0
Furthermore, I understand the following may create holes in netflows sent/received:
1) link saturation on the flow exporter's side: NetFlow export packets are never sent when the link is too busy
2) link saturation on the SolarWinds NetFlow receiver's side: export packets reach the receiver's IP stack but are dropped because there are too many to process (according to SolarWinds support, their NetFlow receivers are rated to receive and process only 50,000 PDUs per second)
BUT I would expect which packets were dropped, in either case #1 or #2 above, to be random. I have not done the math, but the odds that link saturation would consistently affect one link for 24+ consecutive hours while sparing the other two seem very low to me.
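To put a purely illustrative number on it, assuming 15-minute summarization intervals (the LastTime values above do fall on quarter-hour boundaries) and independent random loss: even if each interval had only a 50/50 chance of getting at least one gi0/0/0 flow through, the chance of missing 96 consecutive intervals (24 hours) would be 0.5^96, on the order of 10^-29. Random loss does not behave like this; something is selecting against that one interface.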
I think your questions are good ones. If I were looking at that issue, I'd open two tickets: one with SolarWinds Support and the other with Cisco TAC. You'll probably spend less time spinning your wheels that way, although SolarWinds Support can take several days to get back to you if you're not experiencing a major product failure.
I already have an open ticket with SolarWinds, but over the last couple of days we've made little to no progress on the cause or a fix, so I was hoping Thwack might be able to shed some light.
Also, according to the "show ip flow export" command (see below), 4,614 flows failed due to lack of export packet. This may be [part of] the problem. Thanks again for all the help!
ROUTER1# show ip flow export
Flow export v5 is enabled for main cache
Export source and destination details :
VRF ID : Default
Destination(1) 10.11.x.y (9995)
Destination(2) 10.11.x.z (9995)
Version 5 flow records
188197816521 flows exported in 6633307276 udp datagrams
4614 flows failed due to lack of export packet
0 export packets were sent up to process level
0 export packets were dropped due to no fib
0 export packets were dropped due to adjacency issues
0 export packets were dropped due to fragmentation failures
0 export packets were dropped due to encapsulation fixup failures
0 export packets were dropped enqueuing for the RP
0 export packets were dropped due to IPC rate limiting
0 export packets were dropped due to Card not being able to export
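For scale, assuming those counters cover the same period: 4,614 failed flows out of 188,197,816,521 exported is roughly 2.5 x 10^-8, about one failure per 40 million flows. Also, 188,197,816,521 flows in 6,633,307,276 UDP datagrams averages about 28.4 flow records per datagram, close to the v5 maximum of 30 records per packet, so the export packets are being filled efficiently.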
When you discover and correct the problem, please post your solution. I think I'm not the only one interested in this.
>> please post your solution.
Here's an update.
1. Cisco TAC advised that we could filter and/or sample flows to reduce export volume (a rough sketch of the sampling option follows below), but that the current export itself is not problematic;
2. We are receiving more flows per second than the 50,000 per second the receiver is rated to process.
Since the product's behavior when the maximum flows-per-second rate is exceeded is undefined, further analysis of this single instance of missing NetFlow data is unnecessary.
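For reference, in case anyone else hits the receiver's flows-per-second ceiling: the sampling option TAC mentioned looks roughly like the following using classic Random Sampled NetFlow (the sampler name here is made up, and syntax and availability vary by platform and IOS release):
ROUTER1(config)# flow-sampler-map NTA-SAMPLER
ROUTER1(config-sampler)# mode random one-out-of 10
ROUTER1(config-sampler)# exit
ROUTER1(config)# interface GigabitEthernet0/0/0
ROUTER1(config-if)# flow-sampler NTA-SAMPLER
This builds flows from roughly 1 in 10 packets, cutting export volume at the cost of less precise traffic accounting.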
Thanks again for the help!