NNSquad - Network Neutrality Squad
NNSquad Home Page
[ NNSquad ] Re: INTELLIGENT network management? (far from IP)
Ok Bob, let me re-post my QoS rant. First, the main problem: QoS really isn't needed when you have big pipes. The Internet has plenty of capacity, and most applications don't need huge amounts of bandwidth to work well. Go to http://www.networkworld.com/news/2007/021507-dont-expect-video.html and read the 5th paragraph, which begins with "In the long haul...". So what is the average utilization of these big pipes? Single digits? And for lots of good reasons. I used to work on this stuff professionally in another life, and what I remember about it is posted below... I think most, if not all, of it still applies.

The main problem I see is with the term itself: "QoS" is NOT about Quality and it is NOT about Service. It is about billing! I think the entertaining satire at http://ss7.net/ss7-blog/2006/05/16/mr-bandwidth-qos/ that mocks QoS has figured this out. Looking at QoS in a billing context is the only way it makes any sense, IMHO. Technically it doesn't really work. Even the Internet crowd has figured this out (https://www.educause.edu/ir/library/pdf/CSD4577.pdf); they seem to be saying that if you don't have scarcity, what good is it? It's more trouble than it is worth!
I am not a big fan (or even a believer) of QoS. I admit it is great theory, but despite a lot of experimentation there is little to show any practical advantage over just having the capacity. The practical issues include ROI, cost, complexity, identifying the traffic flows, application of policy, federation across networks (end to end), etc., etc., etc. That said, I can still see simple prioritization mechanisms being useful at the edge of the network, if they are under the control of the end user, who is the final arbiter of what he thinks is important on his bottleneck access link. Of course, these prioritization mechanisms are attractive nuisances for billing purposes too ...it's a slippery slope. But the industry shouldn't make end-to-end promises it can't keep (there are huge QoS issues in federation, security, etc.). In my experience, people running networks don't want to deal with QoS. Here is a list of what I experienced/remember...
1) You can't find the people to manage QoS, and the ones you do hire are too expensive. Consider fully burdened technology-employee costs of well over $150k per year, plus complaints about office space, parking, lighting, heating, window views, health care, etc. Meanwhile bandwidth, with its ever-declining prices, looks better and better and better, and bandwidth never complains :-). Even if you do hire someone with real practical skills (and finding them is another question), you won't let them touch the production network (aren't the majority of network outages caused by people touching the network?). These expensive people just wind up looking at utilization reports and ordering bandwidth. I built a tool for IBM to supply information about packet flows useful for QoS and policy applications, so I got to see first-hand how it was used/applied, or rather NOT used/applied.
2) Network designs that require any level of reliability have failover designs/mechanisms, which means everything in the network must run at reasonably low utilization and be able to take on additional load if something else fails (or if multiple things fail, such as a hub box). If you don't do this, you lose your job when the whole network goes down because a single link or box failure cascades (I have seen it happen).
3) Network capacity must be provisioned to be sufficient for peak loads and failures (even if the peaks occur once a year). This, combined with the previous design point, means networks are routinely run at very, very low utilization.
4) IBM did a study a number of years ago about boxes running at high utilization and found that the higher the utilization, the more likely a box is to fail (it is less reliable at high utilization), because of problems you don't see at lower utilization: multiple buffer pools filling at the same time and confusing the task dispatcher, strange race conditions in control blocks and data, buffer threads becoming high-latency paths, etc., etc.
5) What does it mean to run a link/box at 10% utilization or less (typical for an Internet link)? At least 90% of the time there is no queue, so at least 90% of the time you don't need QoS, and in reality the numbers are worse. For QoS to even matter you need a queue depth of at least three packets (one in transmission and two waiting), and even that doesn't occur very often on high-speed links running at less than 10% utilization.
6) QoS is in a race with ever-declining bandwidth prices, and bandwidth keeps winning.
7) Most of the time the SERVERS ARE SLOW, NOT the network. That makes the problem a task-dispatching/resource one at the servers (assuming they run multiple types of tasks; if not, it is a simple server capacity-planning problem). Is QoS tied back from the end user through the network, or rather multiple networks, all the way to the server and even the database administrator? Of course not. So what real QoS guarantee can you offer? I can't see how QoS is enforceable across a collection of heterogeneous networks. It seems to be embraced by the telcos as one possible way to manage (or stop investing in) capacity; it basically encourages artificial scarcity, so the need for QoS is artificially created, and thus it is all about billing.
8) All code has bugs per KLOC. With QoS code you are adding bugs and unreliability to your network.
9) In general, QoS adds complexity and all the problems that come with complexity. This is a VERY big slam against QoS ...re-read all the above :-)
10) I believe QoS has a negative ROI. (Funny that you can never find any numbers showing real, tangible benefit. Don't you think that if it were so wonderful, or even mildly useful, someone "with QoS to sell" would put out some numbers, even questionable marketing ones? ;-) ) When you think about the marketing/savings angle: if you really wanted to save money, you might be tempted to ignore QoS and all its trappings and just manage the bandwidth. Of course you need to do capacity planning and traffic measurement, but these can be done simply and cheaply; I have done them with clients using nothing more than utilization reports with thresholds. Not elegant, but it worked every time.
11) No one can really understand or predict what QoS will do in a production network (sure, you can look at individual specifics, but not the behavior of the entire network), etc. You should get my drift by now... In my experience QoS is really bogus stuff.
12) QoS just encourages end users to adopt packet-obfuscation techniques/technology as the network provider tries to inspect/control their traffic. QoS simply creates a major incentive to hide one type of data (e.g., an MP3 file) as another type of packet (e.g., VoIP). Finally, there's the completely unanswered question of what to do with encrypted data: do you de-prioritize all encrypted traffic? If not, everyone will encrypt everything; if you do, then you've introduced a communications medium where privacy is systematically discriminated against.
13) Who you gonna call when it is broken? ...and even trying to understand IF it is broken is hard. Problem determination is a challenge, to say the least...
14) And finally, let's consider the future that is emerging! How can ANY technology like QoS, which relies on extensive core-network control and takes an application focus, adapt to overlay techniques found in P2P networks, or to trends (such as mash-ups) toward dynamically composed and instantiated concoctions (formerly known as applications) at the edge of the network?
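Point 5 above can be put in rough numbers. Here is a back-of-the-envelope sketch using the textbook M/M/1 result (an idealized model the rest of this rant argues real traffic doesn't follow, but even it makes the point): the chance of finding k or more packets in the system at utilization rho is rho to the k-th power.

```python
# Rough illustration of point 5: at low utilization, queues barely exist.
# Uses the textbook M/M/1 result P(N >= k) = rho**k -- an idealized model,
# not a claim that real (self-similar) traffic behaves this way.

def p_queue_at_least(rho: float, k: int) -> float:
    """Probability of finding k or more packets in an M/M/1 system."""
    return rho ** k

for rho in (0.10, 0.50, 0.90):
    # k = 3: one packet in transmission plus two waiting
    print(f"utilization {rho:.0%}: P(3+ packets present) = {p_queue_at_least(rho, 3):.4f}")
```

At 10% utilization, even this queue-friendly model says a three-packet backlog exists about 0.1% of the time; only near 90% utilization does queuing (and hence anything for QoS to reorder) become common.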
I view QoS like folklore and Bigfoot: people talk about it, but it really doesn't exist in practice. Consider one trapping of QoS called queuing theory. Many QoS advocates wield complex queuing-theory math to argue the need for QoS through mathematical proofs. I turned negative after studying queuing theory and attempting to apply it to real networks. I could never find any Poisson or exponential distributions (in traffic patterns, in work quanta per packet, or otherwise) in any network data, and I looked at thousands of network traces (no exaggeration). Network traffic is deterministic, and the self-similar nature of network traffic is at odds with any queuing-theory approach.
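The self-similarity point can be shown with a toy experiment (hypothetical parameters, standard library only): Poisson-style traffic smooths out quickly when you aggregate it over coarser time scales, while heavy-tailed traffic stays bursty, which is exactly what breaks the classical queuing formulas.

```python
# Toy demonstration of why self-similar (heavy-tailed) traffic defeats
# classical queuing assumptions: aggregating exponential samples smooths
# them out fast, but heavy-tailed samples stay bursty at coarse scales.
import random
from statistics import mean, pstdev

def cv(xs):
    """Coefficient of variation: burstiness relative to the mean."""
    return pstdev(xs) / mean(xs)

def aggregate(xs, m):
    """Sum consecutive blocks of m samples (i.e., a coarser time scale)."""
    return [sum(xs[i:i + m]) for i in range(0, len(xs) - m + 1, m)]

random.seed(42)
n = 50_000
expo = [random.expovariate(1.0) for _ in range(n)]      # "Poisson-like" load per slot
pareto = [random.paretovariate(1.2) for _ in range(n)]  # heavy-tailed load per slot

for m in (1, 100):
    print(f"block size {m:3d}: exponential CV={cv(aggregate(expo, m)):.3f}  "
          f"heavy-tailed CV={cv(aggregate(pareto, m)):.3f}")
```

With blocks of 100 slots, the exponential traffic's burstiness collapses by roughly a factor of ten (as queuing theory predicts), while the heavy-tailed traffic remains visibly bursty, so averaging over time never buys you the smooth load the formulas assume.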
Again, maybe I was just unlucky and had a lot of bad experiences.
From: Bob Frankston [mailto:Bob19firstname.lastname@example.org]
Sent: Sunday, March 02, 2008 10:37 PM
To: 'Fred Reimer'; 'Kevin McArthur'
Cc: email@example.com; Waclawsky John-A52165
Subject: RE: [ NNSquad ] Re: INTELLIGENT network management? (far from IP)
A very short response, as I’ve beaten this issue into the ground many times before. If you know enough to define Q and S, then you have an intelligent network that knows the meaning of the bits. Useful, but not the Internet.
The question is not whether there are circumstances in which you want to apply intelligence; it’s a question of whether that’s the Internet. As per (my interpretation of) David Reed’s talk, the protocols are indifferent to the intentions of the bits and respond to congestion without favoritism.
I’ve cc’ed John, who can provide references to refereed papers on (failed) attempts to do QoS on Internet2 and elsewhere.
[mailto:firstname.lastname@example.org] On Behalf Of Fred Reimer
My specialty is not voice, but I do know QoS, and the company I work for is one of the top companies technically as far as VoIP capabilities go. However, I am not a spokesperson for my company, and anything I contribute to NNSquad is my personal opinion and does not reflect the opinion of my company.
With respect to what you say about QoS, it is true only if you assume certain things about the network. First and foremost is the bandwidth (link speed) available end-to-end. Often the largest component of the delay between two points is the serialization delay on a particular link. For the typical enterprise network, this has historically meant WAN links. Even this is changing, with the availability of affordable high-speed links such as NLMI and Metro-E. However, slow links can cause major problems when the speed falls below a certain level and the data traffic is composed of large packets. The serialization delay on low-speed links can add significant delay if voice packets are not queued first (low-latency queuing) and technologies such as link fragmentation and interleaving (LFI) are not deployed. Broadband Internet connections are fast enough not to have this problem, so what are we left with?
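Serialization delay is simple arithmetic: packet bits divided by link rate. A quick sketch (link speeds chosen for illustration) shows why it dominates on slow WAN links but vanishes on broadband:

```python
# Serialization delay: how long one packet occupies the wire.
# A 1500-byte data packet ahead of a voice packet on a slow link adds
# all of this as queuing delay -- hence LLQ and LFI on low-speed links.

def serialization_ms(packet_bytes: int, link_bps: int) -> float:
    return packet_bytes * 8 / link_bps * 1000

for link_bps in (128_000, 768_000, 10_000_000):
    d = serialization_ms(1500, link_bps)
    print(f"1500-byte packet on {link_bps // 1000:>6} kbps link: {d:7.2f} ms")
```

At 128 kbps a single full-size packet occupies the wire for roughly 94 ms, which by itself consumes most of a typical one-way voice delay budget; at 10 Mbps the same packet takes about a millisecond, so the problem effectively disappears.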
What we are left with is packet loss and jitter. Jitter should not be a problem if you don’t have a queuing or link-speed problem. If there is a link-speed problem, there can be significant jitter, as packets take different amounts of time to traverse the network; this causes problems if the jitter buffers of the end-stations are exceeded (causing end-station packet drops, since a packet that arrives too late is no longer useful for recreating the voice stream). So, we are down to packet loss.
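The jitter-buffer mechanism can be sketched in a few lines (the frame interval, buffer depth, and delay values below are hypothetical, chosen only to show how a late packet becomes a drop):

```python
# Toy playout-buffer model: a voice packet sent every FRAME_MS is useful
# only if it arrives before its scheduled playout time (send time plus
# the jitter-buffer depth).  A later arrival is dropped by the endpoint.

FRAME_MS = 20    # packetization interval (hypothetical but typical)
BUFFER_MS = 50   # jitter (playout) buffer depth (hypothetical)

def late_drops(transit_delays_ms):
    """Count packets that miss their playout deadline."""
    drops = 0
    for i, delay in enumerate(transit_delays_ms):
        send_time = i * FRAME_MS
        arrival = send_time + delay
        playout = send_time + BUFFER_MS
        if arrival > playout:
            drops += 1  # too late to be played; effectively lost
    return drops

# Steady 30 ms network: nothing dropped.  One 80 ms spike: one packet lost.
print(late_drops([30] * 10))         # -> 0
print(late_drops([30, 30, 80, 30]))  # -> 1
```

This is why jitter and loss converge to the same symptom at the receiver: a sufficiently delayed packet is indistinguishable from a dropped one.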
Packet loss is the primary concern, and it is only a concern if a link between end-stations is so congested that packets are dropped. The major problem we are discussing here is network neutrality and the behavior of ISPs in attempting to reduce (primarily) upstream traffic due to its relative scarcity. Whether upstream really is scarce depends on the last-mile technology, as wireless is different from FiOS, which is different from cable. However, let’s take cable as an example. Cable has much greater downstream bandwidth than upstream bandwidth. P2P technologies actually use upstream bandwidth in a much more “even” manner than traditional protocols used on the Internet. Traditionally, Internet communication has been heavy on the download side and slim on the upload side.
So, we are left with certain circumstances, or possible scenarios, in which the upstream bandwidth is congested and packets will necessarily be dropped. That may not actually be accurate yet. However, it is certainly a concern of ISPs, and I believe a major reason why they exhibit such behavior. I do not believe it is some altruistic desire to be the police of the Internet. If there were no bandwidth problems, I don’t believe the ISPs would be particularly concerned about interfering with P2P traffic. They may not believe it is ethical or legal, but I doubt they would take it upon themselves to police their customers if it were not affecting their bottom line. However, it is an issue in certain locations/communities, and ISPs are probably attempting to head off issues in the majority of locations. So this is evidence that there is a bandwidth problem >somewhere< on their networks, and upstream seems to be the culprit.
Brett says himself that “I do have control over my local network and over my interface to my backbone provider. And when I prioritize VoIP on those, it helps tremendously.” I think this is an indication that there is, in fact, an issue.
So, what traffic do you drop? Drop more than two or three VoIP packets, depending on the encoding and other configuration issues, and you will notice voice effects. So QoS, while it may NOT have been required in Internet communications in the past, is becoming MORE important than it has ever been. In the past, when only low-speed connections (dial-up) were possible, VoIP itself was not possible. Then there was a time of high-speed communications with no contention. Now we are entering a time of high-speed communications (no delay issue) but with contention (packet-loss issues). To overlook this, or to allow the ISPs to come up with their own solutions without input from others, is not in the best interest of anyone but the ISPs.
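The "two or three packets" figure maps directly to milliseconds of missing audio. As a sketch (assuming a 20 ms packetization interval, which is typical but codec- and configuration-dependent):

```python
# How much audio disappears when consecutive VoIP packets are dropped.
# 20 ms of audio per packet is a common packetization interval; loss
# concealment tends to break down once a gap reaches a few tens of ms,
# which is why two or three consecutive drops become audible.

FRAME_MS = 20  # audio carried per packet (codec/config dependent)

def audio_gap_ms(consecutive_losses: int) -> int:
    return consecutive_losses * FRAME_MS

for lost in (1, 2, 3, 5):
    print(f"{lost} consecutive packets lost -> {audio_gap_ms(lost)} ms of missing audio")
```

A single lost packet (20 ms) is usually concealable; three in a row is a 60 ms hole, which is why even modest congestion-induced loss is audible while the same loss rate is invisible to a file transfer.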
Kevin McArthur [mailto:email@example.com]
Hmm. I have to agree with Brett on most of his comments. QoS is definitely
part of the IETF RFCs. And QoS is definitely required for VoIP, in any
network, for it to work properly. The problem is that there is no common
global, or for that matter national, agreement as to how classifications and
markings are done. Without that there would be little reason for the
various network owners to trust each other. There may be one-off agreements
between two ISPs or an ISP and a backbone carrier. However, unless there
is a national/global standard then we would never get to the point where
end-users can mark their own traffic as they see fit, and have those
markings honored throughout the Internet as long as they complied with their
agreement with their ISP.
I disagree when it comes to the intelligence of the network, and whether
network owners should be able to make policy as to what types of content are
appropriate just because the routers and other network infrastructure
devices have "intelligence." The Internet is an end-to-end network, not a
[mailto:firstname.lastname@example.org] On Behalf Of Brett
Sent: Friday, February 29, 2008 3:04 PM
To: Bob Frankston; email@example.com
Subject: [ NNSquad ] Re: INTELLIGENT network management? (far from IP)
At 12:28 PM 2/29/2008, Bob Frankston wrote:
> If you require QoS for VoIP then you have the PSTN, not the Internet.
QoS was (and is) part of the original design of the Internet. Note the
"Type of Service" fields in both IPv4 and IPv6, as well as the "push"
bit. (Interestingly, there's no "shove" bit. Don't know why. ;-) )
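Marking traffic is indeed something an endpoint can do today. A minimal sketch (Linux; DSCP EF = 46 is the conventional marking requested for voice, and whether any network along the path honors it is exactly the open question in this thread):

```python
# Mark a UDP socket's outgoing packets with DSCP EF (Expedited
# Forwarding, value 46), the conventional marking requested for voice.
# This works on Linux; nothing obliges any network to honor it.
import socket

EF_TOS = 46 << 2  # DSCP occupies the top 6 bits of the old ToS byte -> 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # -> 184
sock.close()
```

Any packets sent on that socket now carry the EF codepoint in the former ToS field, which is precisely the field Brett is pointing at; the unsettled part is end-to-end agreement on what the marking means.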
> VoIP cannot rely on QoS because you don't have enough control over the network.
I do have control over my local network and over my interface to my
backbone provider. And when I prioritize VoIP on those, it helps tremendously.
> And VoIP does not rely on QoS; I verified this with Tom Evslin, who supplied much of the backbone for VoIP.
VoIP becomes nearly unusable in times of heavy loads without QoS. In
fact, it becomes unusable when someone on the same node runs
unthrottled BitTorrent. That's why we prioritize and do P2P mitigation.
> Let's not base policies on misconceptions.
I agree. Hopefully the above will clear up some of those misconceptions.
> Yes, you can build your own intelligent network, but let's not confuse it with the Internet.
The Internet was never meant to be unintelligent. By design, it relies upon
routers which have tremendous computing power and very large amounts of
memory. (And it must: it can't scale or have redundancy without these.)
What's more, every end node is ITSELF a router and devotes intelligence to
that. It is unreasonable to attempt to exile technology, innovation, or
intelligence from any part of the Internet.
> If VoIP fails, it fails.
You may be able to say that, but we can't. We lose the customer if he or
she can't do VoIP.
> And if you require real-time HD streaming, that may fail too. So what?
I believe that it was you who, in a previous message, were voicing
discontent with the performance of HD streaming on your FiOS connection.
We can't support HD streaming on the typical residential connection, but
we DO want to support it if the customer is buying sufficient bandwidth.
If we don't, again, we're out of business. Or someone goes to the FCC
and complains that we're not supporting that medium and must be