NNSquad - Network Neutrality Squad


[ NNSquad ] Re: Google response to WSJ 12/15/08 "Fast Track on the Web" story


When you fatten the pipes at the first and last mile, not only do you enable traffic to exit the public Internet faster, you also enable it to enter faster, which creates enough additional load to eliminate the speed-up you gained by fattening the off-ramp in the first place.

That's why fattening the pipes doesn't relieve congestion unless you do it asymmetrically, and that's a bad thing in its own right. In the long run, over-building the network to relieve congestion is too expensive to be practical in any setting other than a LAN.
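
A back-of-envelope sketch of that dynamic in Python (the subscriber count and link sizes are purely illustrative assumptions, not anyone's real numbers):

# Illustrative sketch: N subscribers share a fixed middle-mile link.
# Fattening only the access (first/last-mile) pipes lets each subscriber
# offer more load, so the shared link gets *more* congested, not less.

def shared_utilization(n_subs: int, per_sub_mbps: float, shared_mbps: float) -> float:
    """Offered load on the shared link as a fraction of its capacity."""
    return n_subs * per_sub_mbps / shared_mbps

before = shared_utilization(n_subs=500, per_sub_mbps=1.0, shared_mbps=1000)  # 0.5
after = shared_utilization(n_subs=500, per_sub_mbps=4.0, shared_mbps=1000)   # 2.0

print(f"shared-link load: {before:.0%} before, {after:.0%} after fattening the access pipes")
# 50% -> 200%: the faster on-ramps overload the unchanged middle of the
# network, which is why fattening the edges alone doesn't relieve congestion.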

Sorry.

RB

Waclawsky John-A52165 wrote:
I'm not so sure.... The core network has a lot of capacity via optics, with the equipment at the end points supporting nearly 1 trillion bps today on a single "light pipe" (88 x 10 Gbps colors) - this was unheard of just a few years ago. The electronics at the ends of the fiber link are the limiting factor - Moore's law does affect that. I asked a question at GlobeComm (two weeks ago): what is the theoretical capacity of a fiber link? I was told there is none (at least today), and everyone was shaking their heads yes (100% consensus). I am always suspicious about claims of scarcity, and I never underestimate human ingenuity's (in search of a buck) ability to solve the problem - which I see as a loose corollary of Moore's law, IMHO   :-)   My 2 cents..
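
(For reference, the arithmetic behind the quoted figure - illustrative only, using the numbers in the message above:)

# Checking the "nearly 1 trillion bps" figure from the message above:
# 88 wavelengths ("colors") at 10 Gbps each on a single fiber.
wavelengths = 88
gbps_per_wavelength = 10
total_gbps = wavelengths * gbps_per_wavelength
print(f"{total_gbps} Gbps = {total_gbps / 1000:.2f} Tbps per fiber")  # 880 Gbps, ~0.88 Tbps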

-----Original Message-----
From: nnsquad-bounces+jgw=motorola.com@nnsquad.org
[mailto:nnsquad-bounces+jgw=motorola.com@nnsquad.org] On Behalf Of
George Ou
Sent: Monday, December 15, 2008 3:24 PM
To: 'John Bartas'; 'Lauren Weinstein'
Cc: nnsquad@nnsquad.org
Subject: [ NNSquad ] Re: Google response to WSJ 12/15/08 "Fast Track on the Web" story

John, content caching will always be the solution for large-scale video distribution.  Networks will get 10s, 100s, 1000s of times faster, but the same relative bottlenecks will be there, and we'll run into the same problems as video bitrates increase proportionally with capacity.  Distributed caching solves the unicast problem, where the same transmission is replicated millions of times over the same infrastructure - and that's just silly.  Caching will never become "obsolete" no matter how fast the network gets, because it's a 10,000-fold performance multiplier.  In fact, if network prioritization is the "fast lane", then content caching is the "warp lane" which exceeds the speed of light, since you don't even have to bother re-transmitting the content.
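
A rough sketch of why caching is such a large multiplier (the viewer and cache counts are my own illustrative assumptions):

# Illustrative comparison: total data the origin must send with pure unicast
# versus with edge caches that each hold one copy of the video.

def origin_gb(viewers: int, video_gb: float, caches: int = 0) -> float:
    """GB leaving the origin: one copy per viewer with unicast, one per cache with a CDN."""
    return viewers * video_gb if caches == 0 else caches * video_gb

viewers, video_gb = 1_000_000, 1.0
unicast = origin_gb(viewers, video_gb)             # 1,000,000 GB from the origin
cached = origin_gb(viewers, video_gb, caches=100)  # 100 GB to seed the caches

print(f"unicast: {unicast:,.0f} GB  cached: {cached:,.0f} GB  "
      f"({unicast / cached:,.0f}x less origin traffic)")
# The 10,000x reduction here is the same order as the "10,000-fold
# performance multiplier" cited above; the caches then serve viewers locally.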

No, network prioritization is not a content accelerator because it
doesn't prevent you from having to send the same content millions of
times.

"I spent most of 2007 working on a VoIP analysis device (Packet Island)
and the vast majority of VoIP jitter problems we saw were caused by
crappy local ISP networks"

That's what I said in my network management report: jitter mostly happens at the largest bottleneck, which will always be broadband.  All Net Neutrality legislation proposed so far specifically targets broadband and prevents jitter mitigation and legitimate bandwidth management techniques.

Broadband by definition will always be the bottleneck because if
broadband speeds up 10-fold, the core and distribution part of the
Internet will also have to speed up 10-fold to keep up.  So your theory
that we can simply grow out of these bottlenecks with "Moore's law"
(which talks about transistor count BTW) is simply wrong.



George Ou

-----Original Message-----
From: nnsquad-bounces+george_ou=lanarchitect.net@nnsquad.org
[mailto:nnsquad-bounces+george_ou=lanarchitect.net@nnsquad.org] On
Behalf Of John Bartas
Sent: Monday, December 15, 2008 2:53 AM
To: 'Lauren Weinstein'
Cc: nnsquad@nnsquad.org
Subject: [ NNSquad ] Re: Google response to WSJ 12/15/08 "Fast Track on
the Web" story

There are some careless assertions in George's post. First, content caching is not the ultimate anything; it's just a stopgap. Moore's law will accelerate the backbone with faster routers and media, whereas company-specific co-located servers will always have an overhead. Even if the co-lo server storage were free, the cost of a Chennai staff to manage the extra data copy is pretty well fixed, and eventually a faster backbone would make it uneconomical. Like all forms of caching, progress will either obsolete it or commoditize it. That day will come a lot faster if U.S. providers realize they are not going to make a killing holding people's content hostage with QoS schemes.
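
A simple break-even sketch of that argument (every number here is hypothetical):

# Hypothetical break-even: a co-lo cache pays for itself only while the
# transit bandwidth it saves costs more than the fixed cost of running it.

def caching_pays(fixed_ops_cost: float, gb_served: float, transit_cost_per_gb: float) -> bool:
    """True while the saved transit spend exceeds the fixed operations cost."""
    return gb_served * transit_cost_per_gb > fixed_ops_cost

ops_cost = 20_000.0      # $/month to staff and run the cache (roughly fixed)
gb_served = 5_000_000.0  # GB/month served from the cache

for transit in (0.05, 0.01, 0.001):  # $/GB falling as the backbone gets faster
    print(f"transit at ${transit}/GB: caching pays -> {caching_pays(ops_cost, gb_served, transit)}")
# True, True, False: once transit gets cheap enough, the fixed staffing cost
# dominates and the extra copy stops being worth managing.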

Also "Network prioritization is designed for a totally different purpose
but people confuse it for a content delivery mechanism when it isn't." 
is misleading. Network prioritization has a lot of flavors, and some can
be a great content delivery accelerator. The duopoly shows every sign
that they will use it this way as soon as they can get away with it. 

And jumping ahead, I see someone's still floating the argument that NN is bad for jitter. I spent most of 2007 working on a VoIP analysis device (Packet Island), and the vast majority of VoIP jitter problems we saw were caused by crappy local ISP networks - too many hops and route flap between the phone and the backbone. The biggest ISPs are the worst.

The backbone itself is fine. NN won't hurt jitter.

Despite this, I'm inclined to agree with George that minor legislation is not the answer - Congress is clueless and the duopoly PR machine is too good at muddying the water for a serious policy debate. Ultimately the net will have to be a tightly regulated public utility, with a strict cap on profits. Only by stripping off the profit motive can the net stay free.

-JB-

George Ou wrote:

Now you know why every Net Neutrality bill ever proposed specifically targets broadband and doesn't apply to the type of non-neutral advantages that large dotcom companies can buy.

Content caching [usually in the form of Content Delivery Networks (CDNs)] is the ultimate fast-track mechanism for content distribution.  Content caching is the only model that supports on-demand high-quality video, not P2P or network prioritization.  Content caching shows why the Internet never has been and never will be equal.  The Internet is only equal for those who can buy the same infrastructure, but it has never been equal for everyone at any price.  Richard Bennett also debunks the myth that everything has to be equal here: http://bennett.com/blog/2008/12/google-gambles-in-casablanca/



Network prioritization is designed for a totally different purpose, but people confuse it for a content delivery mechanism when it isn't.  Network prioritization ensures that a network can support multiple applications as well as possible.  That means bandwidth should be intelligently prioritized in favor of interactive applications with low duty cycles over background applications with non-stop usage.  Background applications aren't affected in terms of average bandwidth, but the interactive application improves substantially.  This does not conflict with the purpose of protocol-agnostic network management, which is designed to ensure equitable distribution of bandwidth between customers of the same broadband service tier.  This system relies on a priority budget to prevent users and application developers from abusing the system by labeling every packet as top priority.
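
A minimal sketch of how such a priority budget might work (my own reconstruction of the scheme described above, not code from the report; the class name and parameters are hypothetical):

# Minimal sketch (my construction, not from the ITIF report): each user may
# mark packets high-priority only while a replenishing budget lasts; once it
# is exhausted, the marks are ignored and traffic falls back to best-effort.

class PriorityBudget:
    def __init__(self, budget_pkts: int, refill_pkts_per_sec: float):
        self.capacity = budget_pkts
        self.tokens = float(budget_pkts)
        self.refill = refill_pkts_per_sec
        self.last = 0.0

    def classify(self, now: float, marked_priority: bool) -> str:
        """Honor the user's priority mark only while budget remains."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if marked_priority and self.tokens >= 1:
            self.tokens -= 1
            return "priority"
        return "best-effort"  # marking everything top priority just drains the budget

pb = PriorityBudget(budget_pkts=5, refill_pkts_per_sec=1.0)
# A user who marks every packet soon exhausts the budget:
print([pb.classify(now=t * 0.1, marked_priority=True) for t in range(10)])
# -> 5 "priority" entries, then "best-effort" for the rest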

The other purpose of network prioritization is to mitigate jitter (large spikes in packet delay), which can occur even at very low network utilization levels.  To fix this, we have to deliver packets out of order, such that the network toggles between packets of different applications at a higher rate, which prevents real-time applications from timing out.  Some will consider this "cutting in line", but it isn't, because some applications pack the line with 10 to 100 times more packets, and a smart network will quickly alternate between the different applications to prevent starvation.
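
A toy illustration of that interleaving (flow names and packet counts are hypothetical, chosen only to show the reordering):

# Toy sketch (my construction): alternate between flows instead of draining
# the heavy flow's burst first, so the real-time flow's packets are never
# stuck behind a long run of bulk packets.

from itertools import zip_longest

bulk = [f"bulk{i}" for i in range(8)]   # background flow: packs the line
voip = ["voip0", "voip1"]               # real-time flow: few, delay-sensitive packets

fifo = bulk + voip                      # strict arrival order: VoIP waits behind 8 bulk packets
interleaved = [p for pair in zip_longest(voip, bulk) for p in pair if p is not None]

print("FIFO       :", fifo)
print("interleaved:", interleaved)
# voip1 now departs 3rd instead of 10th - "out of order" across flows, but no
# flow is starved, and the real-time flow's jitter spikes shrink.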

I cover this in my new report on network management, released last Thursday: http://www.itif.org/index.php?id=205



The problem with Net Neutrality legislation is that it either tries to ban network prioritization outright (Wyden bill in 2006) or tries to prohibit differentiated pricing and give everyone priority regardless of source (Snowe/Dorgan and Markey in 2006).  The anti-tiering legislation effectively breaks prioritization because if every packet is prioritized, then no one is prioritized.  If we can't look at the source of the packets, we can't determine whether people have exceeded their budgets, and it's impossible to enforce a fair prioritization scheme.  If we can't have differentiated pricing, then there's no effective way to give people a priority budget, which means there's no way to enforce a fair and meaningful prioritization scheme.  The end result is that all the Net Neutrality proposals make it impossible to have a network prioritization system, which makes broadband a less useful network that multitasks poorly.



George Ou

-----Original Message-----
From: nnsquad-bounces+george_ou=lanarchitect.net@nnsquad.org
[mailto:nnsquad-bounces+george_ou=lanarchitect.net@nnsquad.org] On Behalf Of
Lauren Weinstein
Sent: Sunday, December 14, 2008 9:52 PM
To: nnsquad@nnsquad.org
Cc: lauren@vortex.com
Subject: [ NNSquad ] Google response to WSJ 12/15/08 "Fast Track on the Web" story


Google response to WSJ 12/15/08 "Fast Track on the Web" story


    
http://googlepublicpolicy.blogspot.com/2008/12/net-neutrality-and-benefits-of-caching.html

--Lauren--
NNSquad Moderator

-- 
Richard Bennett