
[ NNSquad ] Re: [OT?] NN definition(s?)


At 08:04 AM 11/13/2007, Jonas Bosson wrote:
 
>Last week we had a first consumer vs ISP marketing case up in court here in Sweden. The ISP lost. The consumer used a broadband test site that is used to compare operators at a central hub in Sweden. The ruling says nothing about how far down the pipes the bandwidth should remain.

This would set a bad precedent. As Nick says, an ISP such as mine cannot deliver $30/month or $40/month access with reasonable speed and response time without traffic shaping. Suppose I am lucky enough to be able to get backbone bandwidth for $150 per megabit per month -- a good price for these parts. (I know many ISPs who are paying much more.) I can't let a user monopolize a megabit per second continuously, or a $30 customer can cost me five times what he or she is paying me. If I allocate $15 of his or her fee to bandwidth (really too large a percentage, given our costs), then he or she could get about 100 Kbps continuously. But if I just impose a flat throttle limiting him or her to that speed, he or she will be unhappy with response times.
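To make the arithmetic concrete, here is the same back-of-the-envelope calculation in a few lines of Python (the dollar figures are just the round numbers above, not an actual rate card):

# Rough per-customer bandwidth economics, using the round numbers above.
backbone_cost_per_mbps = 150.0   # $/Mbps/month for backbone transit
monthly_fee            = 30.0    # what the customer pays per month
bandwidth_budget       = 15.0    # the slice of that fee allocated to bandwidth

# A customer holding a full megabit continuously costs five times the fee:
print(backbone_cost_per_mbps / monthly_fee)               # 5.0

# The continuous rate the bandwidth budget actually covers:
print(bandwidth_budget / backbone_cost_per_mbps * 1000)   # 100.0 Kbps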

What to do? Shape traffic, allowing users to peak at a higher rate and then throttling them back. Our rates include a CIR (committed information rate) as well as an average throughput. Frankly, it hurts us to be truthful and open about this. If a user takes the average as "the" speed of the connection and compares what we quote to what the cable companies and telcos advertise (they always advertise the maximum raw data rate of the modem), we look slower. (Of course, this is like comparing microprocessors based on "megahertz," but consumers don't realize this.) What's more, the CIR is necessarily low, because we can only reserve so much bandwidth for any user while he or she is not active. So, when prospective customers read THAT column of the table, they stand to be turned off even more if they don't fully understand what it means. (We are thinking of eliminating it from the table because of this problem.) You can see why the telcos and cable companies aren't quoting such figures.
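For those who like to see the mechanism spelled out: "peak high, then throttle back" is conceptually a token bucket. Here is a minimal sketch in Python -- not our production shaper, and the rates at the bottom are invented for illustration:

# Minimal token-bucket sketch of "burst above average, then throttle back."
class TokenBucket:
    def __init__(self, avg_bps, burst_bits):
        self.rate = avg_bps          # long-term average rate (what we can afford)
        self.capacity = burst_bits   # how far a user may burst above that average
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, packet_bits, now):
        # Refill tokens at the average rate, capped at the burst allowance...
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        # ...and forward the packet only while tokens remain; otherwise queue or drop.
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False

# Example: 100 Kbps average with a 2-megabit burst allowance.
shaper = TokenBucket(avg_bps=100_000, burst_bits=2_000_000)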

What's more, the results you get from those "benchmark" sites can easily be jiggered. Our local cable company somehow always comes out at 8 Mbps, even if you run the test 10 times -- traffic that would trip a cap if it came from any other site. Hmmm.

Also, there's another aspect of throughput that isn't often mentioned: packets per second. Every packet carries headers and a checksum, and must be buffered in memory that has to be allocated for it. On the physical medium, packets often also carry a preamble, which lets the receiver synchronize with the transmitter. (This isn't just true of wireless; it's also true of cable modems and many forms of DSL.) So, lots of short packets consume a LOT more resources in routers, switches, etc. than larger ones. And some activities -- like VoIP and gaming -- actually impose a much higher load than you'd think if you just added up the number of bits being transported. We have to throttle based on PPS, too.
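To put rough numbers on the per-packet overhead (these are generic Ethernet/IP/UDP/RTP header sizes, not measurements from our network):

# Rough per-packet overhead using generic header sizes (illustrative only).
ETH = 14 + 4 + 8 + 12        # Ethernet header + FCS + preamble + interframe gap
IP, UDP, RTP, TCP = 20, 8, 12, 20

def wire_bytes(payload, headers):
    return payload + headers + ETH

# One G.711 VoIP stream: 160-byte payload every 20 ms = 50 packets/second.
voip = wire_bytes(160, IP + UDP + RTP)
print(voip, 50 * voip * 8)    # 238 bytes on the wire per packet; ~95 Kbps
                              # for a nominal 64 Kbps codec

# A full-size bulk-transfer packet: 1460-byte payload.
bulk = wire_bytes(1460, IP + TCP)
print(1 - 1460 / bulk)        # only ~5% of each large packet is overhead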

Finally, there's the damage done by the "swarming" behavior of P2P. Even if you drop many of the packets sent by hungry computers running Gnutella and similar protocols, those packets have already consumed your downstream backbone bandwidth by the time you block them! In short, they waste your resources before you can throttle them. It's a big problem.

As I have already mentioned, HTTP with caching is far more efficient than P2P anyway. Think about it: a protocol like BitTorrent doubles, at minimum, the number of bits our network has to transport to get a file to a user, because every chunk a customer downloads is also expected to be uploaded back to the swarm. And because upstream bandwidth is far more precious than downstream bandwidth, the effective cost is actually much more than doubled. And we don't get a chance to cache... not that we would want to cache BitTorrent, because in essence we would be maintaining an archive of illegally reproduced copyrighted material and hence facilitating theft. And that's a shame. A cache would actually reduce the number of bits we had to transport on our backbone link to get a file to a user.
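A toy model makes the comparison stark. Assume 100 customers want the same 700 MB file, that a local HTTP cache hits after the first fetch, and that BitTorrent peers upload roughly as much as they download -- all simplifying assumptions for illustration, not measurements:

# Toy comparison of backbone traffic for N users fetching the same file.
def backbone_bytes_cached_http(users, file_size):
    # The file crosses the backbone once, then is served from the local cache.
    return file_size

def backbone_bytes_bittorrent(users, file_size, upload_ratio=1.0):
    # Each user pulls pieces from outside peers and pushes pieces back out,
    # so the backbone carries roughly download + upload per user.
    return users * file_size * (1 + upload_ratio)

N, F = 100, 700_000_000       # 100 users, a 700 MB file
print(backbone_bytes_cached_http(N, F))   # 700 MB total across the backbone
print(backbone_bytes_bittorrent(N, F))    # ~140 GB total, half of it upstream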

--Brett Glass, LARIAT.NET