I think you are misunderstanding what a 99th percentile traffic report is intended to do. It sorts all the datapoints, discards the top 1% of them so that short burst outliers are ignored, and then reports the highest value that remains. The concept mostly comes into play when dealing with network providers, as their SLAs often disregard specified amounts of bursty traffic.
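To make that concrete, here's a rough sketch of how a 99th percentile figure falls out of a set of 5-minute utilization samples. The sample values and function name are just illustrative:

```python
# Hypothetical 5-minute utilization samples in Mbps; 980.0 is a short burst.
samples = [12.0, 15.5, 14.2, 980.0, 13.8, 16.1, 14.9, 15.0, 13.2, 14.4]

def percentile_99(values):
    """Sort the samples, discard the top 1%, report the highest value left."""
    ordered = sorted(values)
    cutoff = max(1, round(len(ordered) * 0.01))  # how many samples to discard
    kept = ordered[:-cutoff]
    return kept[-1]

print(percentile_99(samples))  # the burst is dropped; 16.1 is reported
```

Notice that the one huge burst sample doesn't influence the reported number at all, which is exactly why providers bill this way.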
As far as identifying links that have throughput issues, you have to have a very clear idea of what that means to you in order to generate a report on it. At the very least you might look at the avg and max bandwidth utilization figures for your interfaces over the time interval, and only include interfaces where the max was at least 75% of the configured bandwidth, since any interface that never even got over 75% would pretty much by definition not be experiencing congestion.
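A basic version of that filter might look like the sketch below. The data layout, interface names, and the 75% threshold are all assumptions; swap in whatever your polling system actually gives you:

```python
# Illustrative interface data: (name, configured bandwidth in Mbps,
# list of utilization samples in Mbps over the report interval).
interfaces = [
    ("Gi0/1", 1000, [120, 340, 810, 990, 400]),
    ("Gi0/2", 1000, [50, 60, 55, 70, 65]),
]

def congestion_candidates(ifaces, threshold=0.75):
    """Keep only interfaces whose peak crossed the threshold of link speed."""
    report = []
    for name, bw, samples in ifaces:
        avg = sum(samples) / len(samples)
        peak = max(samples)
        if peak >= threshold * bw:  # never crossed 75%? not congested
            report.append((name, avg, peak))
    return report

for name, avg, peak in congestion_candidates(interfaces):
    print(f"{name}: avg={avg:.0f} Mbps, max={peak:.0f} Mbps")
```

Here Gi0/2 drops out of the report entirely because its peak never came anywhere near the configured bandwidth.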
You can get a LOT more elaborate than that, but a basic avg/max report is easy to knock out. Once you have those, questions start popping up like how often an interface was over its threshold, for how long, and during what times of day. Getting to those details tends to be a bit more complicated.
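Those follow-up questions are still answerable from the same polling data if you keep timestamps with the samples. A rough sketch, where the timestamps, the 5-minute polling interval, and the threshold value are all assumptions:

```python
from datetime import datetime

POLL_MINUTES = 5      # assumed polling interval
THRESHOLD = 750       # Mbps, assumed 75% of a 1 Gbps link

# Hypothetical (timestamp, utilization in Mbps) samples.
samples = [
    (datetime(2024, 1, 8, 9, 0), 820),
    (datetime(2024, 1, 8, 9, 5), 910),
    (datetime(2024, 1, 8, 9, 10), 400),
    (datetime(2024, 1, 8, 14, 0), 790),
]

# How often, roughly how long, and at what hours the threshold was exceeded.
over = [(ts, mbps) for ts, mbps in samples if mbps > THRESHOLD]
minutes_over = len(over) * POLL_MINUTES
busy_hours = sorted({ts.hour for ts, _ in over})

print(f"{len(over)} samples over threshold (~{minutes_over} min), hours: {busy_hours}")
```

The duration figure is only as granular as your polling interval, which is one reason this level of detail gets more complicated in practice.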