NNSquad - Network Neutrality Squad
[ NNSquad ] Re: Google response to WSJ 12/15/08 "Fast Track on the Web" story
When you fatten the pipes at the first and last mile, not only do you
enable traffic to exit the public Internet faster, you also enable it
to enter faster, which creates enough additional load to eliminate the
speed-up you gained by fattening the off-ramp in the first place. That's why fattening the pipes doesn't relieve congestion unless you do it asymmetrically, and that's a bad thing in its own right. In the long run, over-building the network to relieve congestion is too expensive to be practical in any setting other than a LAN. Sorry.

RB

Waclawsky John-A52165 wrote:

I'm not so sure ... the core network has a lot of capacity via optics, with the equipment at the end points supporting nearly 1 trillion bps today on a single "light pipe" (88 x 10 Gbps colors) - this was unheard of just a few years ago. The electronics at the end of the fiber link is the limiting factor, and Moore's law does affect that. I asked the question at GlobeComm (two weeks ago): what is the theoretical capacity of a fiber link? I was told there is none (at least today), and everyone was nodding yes (100% consensus). I am always suspicious of claims of scarcity, and I never underestimate the ability of human ingenuity (in search of a buck) to solve the problem - which I see as a loose corollary of Moore's law, IMHO :-)

My 2 cents..

-----Original Message-----
From: nnsquad-bounces+jgw=motorola.com@nnsquad.org
[mailto:nnsquad-bounces+jgw=motorola.com@nnsquad.org] On Behalf Of George Ou
Sent: Monday, December 15, 2008 3:24 PM
To: 'John Bartas'; 'Lauren Weinstein'
Cc: nnsquad@nnsquad.org
Subject: [ NNSquad ] Re: Google response to WSJ 12/15/08 "Fast Track on the Web" story

John, content caching will always be the solution for large-scale video distribution. Networks will get 10s, 100s, 1000s of times faster, but the same relative bottlenecks will still be there, and we'll run into the same problems as video bitrates increase proportionally with capacity. Distributed caching solves the unicast problem, where the same content is transmitted millions of times over the same infrastructure, and that's just silly. Caching will never become "obsolete" no matter how fast the network gets, because it's a 10,000-fold performance multiplier. In fact, if network prioritization is the "fast lane", then content caching is the "warp lane" which exceeds the speed of light, since you don't even have to bother re-transmitting the content. No, network prioritization is not a content accelerator, because it doesn't prevent you from having to send the same content millions of times.

"I spent most of 2007 working on a VoIP analysis device (Packet Island) and the vast majority of VoIP jitter problems we saw were caused by crappy local ISP networks"

That's what I said in my network management report: jitter mostly happens where there is the largest bottleneck, which will always be broadband. All Net Neutrality legislation proposed so far specifically targets broadband and prevents jitter mitigation and legitimate bandwidth-management techniques. Broadband by definition will always be the bottleneck, because if broadband speeds up 10-fold, the core and distribution parts of the Internet will also have to speed up 10-fold to keep up. So your theory that we can simply grow out of these bottlenecks with "Moore's law" (which is about transistor count, BTW) is simply wrong.

George Ou
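As a rough sketch of the arithmetic behind the caching argument above, the following Python snippet compares the transit traffic needed to unicast one video to every viewer against serving it from edge caches. The viewer count, video size, and number of cache sites are assumed round numbers for illustration, not figures taken from this thread.

# Back-of-envelope comparison of core/transit traffic for pure unicast
# delivery vs. edge caching. All figures below are illustrative assumptions.

viewers = 1_000_000          # assumed audience size
video_size_gb = 1.5          # assumed size of one copy of the video, in GB
edge_caches = 100            # assumed number of cache sites near viewers

unicast_transit_gb = viewers * video_size_gb      # every view crosses the core
cached_transit_gb = edge_caches * video_size_gb   # the core carries one copy per cache site

print(f"Unicast transit: {unicast_transit_gb:,.0f} GB")
print(f"Cached transit:  {cached_transit_gb:,.0f} GB")
print(f"Reduction:       {unicast_transit_gb / cached_transit_gb:,.0f}x")

With these assumed numbers the core carries 10,000 times less traffic, which is the kind of multiplier the post above is pointing at; prioritization, by contrast, still sends every one of the million copies.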
-----Original Message-----
From: nnsquad-bounces+george_ou=lanarchitect.net@nnsquad.org
[mailto:nnsquad-bounces+george_ou=lanarchitect.net@nnsquad.org] On Behalf Of John Bartas
Sent: Monday, December 15, 2008 2:53 AM
To: 'Lauren Weinstein'
Cc: nnsquad@nnsquad.org
Subject: [ NNSquad ] Re: Google response to WSJ 12/15/08 "Fast Track on the Web" story

There are some careless assertions in George's post. First, content caching is not the ultimate anything; it's just a stopgap. Moore's law will accelerate the backbone with faster routers and media, whereas company-specific co-located servers will always carry an overhead. Even if the co-lo server storage were free, the cost of a Chennai staff to manage the extra data copy is pretty well fixed, and eventually a faster backbone would make it uneconomical. As with all forms of caching, progress will either obsolete it or commoditize it. This day will come a lot faster if U.S. providers realize they are not going to make a killing holding people's content hostage with QoS schemes.

Also, the claim that "Network prioritization is designed for a totally different purpose but people confuse it for a content delivery mechanism when it isn't" is misleading. Network prioritization comes in a lot of flavors, and some can be a great content-delivery accelerator. The duopoly shows every sign that it will use it this way as soon as it can get away with it.

And jumping ahead, I see someone is still floating the argument that NN is bad for jitter. I spent most of 2007 working on a VoIP analysis device (Packet Island), and the vast majority of VoIP jitter problems we saw were caused by crappy local ISP networks - too many hops and route flap between the phone and the backbone. The biggest ISPs are the worst. The backbone itself is fine. NN won't hurt jitter.

Despite this, I'm inclined to agree with George that minor legislation is not the answer - Congress is clueless and the duopoly PR machine is too good at muddying the water for a serious policy debate. Ultimately the net will have to be a tightly regulated public utility, with a strict cap on profits. Only by stripping off the profit motive can the net stay free.

-JB-

George Ou wrote:

--
Richard Bennett
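For reference, the "jitter" in the exchange above is usually the RFC 3550 interarrival jitter estimate, which a VoIP analysis tool of the kind Bartas describes would compute roughly as sketched below. This is a minimal illustration with made-up timestamps, not Packet Island's actual implementation.

# Minimal sketch of the RFC 3550 interarrival jitter estimator.
# Timestamps are invented for illustration; units are milliseconds.

def interarrival_jitter(send_times, recv_times):
    """Return the running RFC 3550 jitter estimate (same units as the inputs)."""
    jitter = 0.0
    for i in range(1, len(send_times)):
        # Difference in transit time between consecutive packets
        d = (recv_times[i] - recv_times[i - 1]) - (send_times[i] - send_times[i - 1])
        jitter += (abs(d) - jitter) / 16.0
    return jitter

# Example: packets sent every 20 ms, arrivals spread unevenly by queuing
send = [0, 20, 40, 60, 80]
recv = [50, 72, 95, 112, 140]
print(f"jitter estimate: {interarrival_jitter(send, recv):.2f} ms")

The estimate rises only when delay varies from packet to packet, which is why it is dominated by the most congested hop - typically the broadband access link both posts point to - rather than by the lightly loaded backbone.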