I've got no problem with traffic routing through different or multiple links as these are links that I've configured to do so, thus I know where to look and what to look for when there's an issue. The specific path at a given time can vary, but it should only vary within constraints that I've already configured and expect.
Prioritizing flows between endpoints sounds great: that's what QoS is configured to do. Specifically, it's great as long as it's my config and not something I have to relearn while I'm troubleshooting an emergency with multiple layers of managers looking over my shoulder.
Since we are talking ideas and concepts (more of a perfect-world thing), I think the technology to do this exists today. The problem is, it's in pieces and isn't built specifically for QoS.
Piece #1: In-line "device" (appliance, router, or switch), watching the traffic in real time...this is a MUST.
Cisco has the new AVC (Application Visibility and Control) for its ISR G2 routers (Cisco Application Visibility and Control (AVC) [Routers] - Cisco Systems).
Blue Coat purchased Packeteer a few years ago; I would consider that the original AVC.
Piece #2: IPS, not IDS, functionality...the ability to block traffic based on anomaly detection.
This would need to be modified for QoS behavior; however, does anyone remember Cisco MARS? That anomaly detector didn't go over very well.
Piece #3: Behavior-based, dynamic routing
This can be done today with policy-based routing (PBR) and Performance Routing (PfR). You can pick and choose your path based on metrics.
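As a rough illustration of the PBR half of that, a route map along these lines steers matched traffic out a chosen next hop (Cisco IOS syntax; the ACL number, interface, and next-hop address are made-up placeholders, not from any real deployment):

```
! Hypothetical PBR sketch: send HTTPS out a preferred path
access-list 101 permit tcp any any eq 443
!
route-map STEER-HTTPS permit 10
 match ip address 101
 set ip next-hop 203.0.113.1
!
interface GigabitEthernet0/0
 ip policy route-map STEER-HTTPS
```

PfR would then layer on top of something like this, moving prefixes between paths based on measured delay/loss instead of a static next hop.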
Now, I'm not a developer...so if someone takes my idea to combine these into one device or code release, and can provide dynamic QoS for a network, well, all I can say is...I want 10% commission for every license sold!
And this doesn't even get into the IPv6 enhancements around native QoS or path/route discovery.
So, as I stated in the other post...it's coming, just a matter of when and how.
dunno... I reckon I would live with the decisions made by my Engineers... I am just a peon around here...
If my queuing/QoS for a link was filling and about to drop packets, it could be an automated policy to route over a non-favorable-metric path. I'd rather hop data around than drop it, unless it really was given a drop policy.
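That "reroute before drop" policy boils down to a simple threshold check. Here's a toy Python model of the decision (not any vendor's API; the capacity, threshold, and path names are all made up for illustration):

```python
# Toy model of "reroute instead of drop": when the primary link's queue
# nears capacity, new traffic is steered over a worse-metric backup path,
# and only dropped once both options are exhausted.
# All names and numbers are illustrative, not from any real device.

QUEUE_CAPACITY = 100          # packets the primary queue can hold
REROUTE_THRESHOLD = 0.9       # start steering away before we actually drop

def choose_path(queue_depth: int) -> str:
    """Return which path a new packet should take given queue depth."""
    if queue_depth < QUEUE_CAPACITY * REROUTE_THRESHOLD:
        return "primary"      # normal case: best-metric link
    if queue_depth < QUEUE_CAPACITY:
        return "backup"       # queue nearly full: hop it around instead
    return "drop"             # last resort

print(choose_path(10))   # primary
print(choose_path(95))   # backup
print(choose_path(100))  # drop
```

The interesting engineering question is the hysteresis around that threshold, so traffic doesn't flap between paths as the queue hovers near 90%.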
Yes, maybe using non-optimal routes instead of dropping would be a good idea, I guess I would be in for that.
When you phrase the question like that, I would be happy with all of these:
- Temporarily redirecting traffic that’s unknown to a security device for analysis. - Very useful in some areas of the network, not so much in others.
- Traffic steering, where traffic is balanced across links based on a combination of traffic characteristics and link utilization. - A no-brainer for me; why have multiple links if you can't use them?
- Prioritizing specific traffic flows between two endpoints to minimize latency. - Automating this would work, given the parameters could be defined at the start.
- Temporarily redirecting traffic that’s unknown to a security device for analysis. A bit concerned about this one; although useful, it could take a lot of work to administer, but given the right resources it would be good.
- Traffic steering, where traffic is balanced across links based on a combination of traffic characteristics and link utilization. Yep, definitely!
- Prioritizing specific traffic flows between two endpoints to minimize latency. Yep, definitely!
Yes, yes, and yes.
The temp redirection piece would only be problematic if the latency of the analysis bit into overall performance - but an evolving dictionary that could identify and remember legit business flows would solve that. An intuitive method to pre-seed one's dictionary would be useful...like watching the network for a day or a business cycle, identifying legit flows and submitting them to said dictionary, and going from there on an ongoing basis. Some admin, yes - but there's always some admin, even in the most automated environments.
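That "evolving dictionary" is essentially a learned allow-list keyed on flow tuples. A minimal Python sketch of the idea (the flow key, learning mode, and action names are purely illustrative, not from any real product):

```python
# Toy "known-flows dictionary": pre-seed it during a learning window, then
# only redirect flows that have never been seen before. Purely illustrative.

known_flows = set()

def observe(src, dst, port, learning=False):
    """Return the action for a flow; in learning mode, everything is recorded."""
    flow = (src, dst, port)
    if learning:
        known_flows.add(flow)       # pre-seed during a day/business cycle
        return "forward"
    if flow in known_flows:
        return "forward"            # remembered legit business flow
    return "redirect-to-analysis"   # unknown: send to the security device

# Learning window: watch the network and seed the dictionary.
observe("10.0.0.5", "10.0.1.9", 443, learning=True)

# Ongoing: known flows pass untouched, unknown flows get analyzed.
print(observe("10.0.0.5", "10.0.1.9", 443))  # forward
print(observe("10.0.0.5", "10.0.1.9", 23))   # redirect-to-analysis
```

A real version would age entries out and promote flows back to "known" after they pass analysis, which is where the ongoing admin lives.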
- Temporarily redirecting traffic that’s unknown to a security device for analysis. I think this is a risky proposition; I don't want to affect a user just because his traffic looks a bit different, without digging into it in more detail.
- Traffic steering, where traffic is balanced across links based on a combination of traffic characteristics and link utilization. Yup.
- Prioritizing specific traffic flows between two endpoints to minimize latency. Absolutely
Nice topic. This seems to have been a common discussion that led to Cisco providing add-ons to AVC like the iWAN product push. And Riverbed has added something similar with Path Selection. And of course, it's some folks' definition of SDN.
As we see these offerings mature, I think we'll get closer to the wants that several folks have listed here. My only apprehension is the ability to select who gets the preferential/reactive treatment. For service providers, or larger enterprises with central services teams that act like service providers, it would be nice to protect customers from harming each other or even harming themselves. It certainly feels like an 'Application-Aware + Business-Aware' flavor of QoS.
Give me the options and make sure I have several nerd knobs to make sure rogue application owners don't go hog-wild on the infrastructure.
Thanks for the post,
Absolutely! Probably my biggest fear, as things get marketed before they mature, is someone other than a trained network professional thinking they are qualified to make network decisions, simply because it's all "automated" and at their fingertips.
All changes have to come with a notification; and a specific watch of the affected nodes incorporated with the change.
- Temporarily redirecting traffic that’s unknown to a security device for analysis - This may be an issue, as we have numerous devices with cheap network electronics, and they may act or generate traffic that looks questionable, even malicious at times.
- Traffic steering, where traffic is balanced across links based on a combination of traffic characteristics and link utilization - Could be good to remedy bottlenecks; but of course these shouldn't occur often enough to create a flapping experience, because any flapping post-1930s is just wrong.
- Prioritizing specific traffic flows between two endpoints to minimize latency - Would require a very specific setup; and for those with hundreds of device types, where does the priority end? I would also be curious how one would report or alert on the affected traffic, and how another end user is affected when high-priority traffic is passed on - possibly by queuing, or rerouting (traffic steering), another traffic type.
* These stack on top of each other, and consideration would have to be given so that one change does not cause another automated change to take place, and so on and so on... to the extent that you're getting alerts on all these changes... when an extra 2 seconds of wait time could have been endured by 1 user to stop the changes and possible effects on all the other users when some engineer's crazy rules kick in. Just sayin', nobody is perfect and someone has to program this.
I'm not sure the benefits would outweigh the potential for some funky unintended consequence, which would then be hard to track down since it was an automated change. It would really depend on the specific situation I was dealing with.
I would say NCM would have to do a config download both before and after the change, to make sure the automated change shows up in the config change reports... that would eliminate the tracking down of crazy changes. Also, there must be a way to limit or cap your changes. One change creates an issue down the line, that causes another change, and so on and so on and Scooby Dooby Doo on!
It's not that I wouldn't trust such a technology; I would just want to test it and see it proven before I would be willing to put it into a production environment. I think the possibilities for something like this are really exciting, and I look forward to seeing how it ends up working out. I see a lot of good potential on the security front for something like this.
The long and the short of it is that I pretty much see this coming hand in hand with SDN... once you're at the point where SDN is mature and deployed in lots of places, you theoretically have pretty much everything you need to build something like this. However, that's a very general statement, and more to the point, I probably wouldn't trust it to make those decisions today. It seems a little bit like an application firewall, in that you can set it there in monitoring mode and look at the data it's generating, but do you ever really go all in and turn on blocking? It probably also would depend on the size of your network and the resources you have available to administer/troubleshoot, but where I am right now, I don't think I would trust (even in theory) a system to make the right decision all the time and not break down constantly.