NNSquad - Network Neutrality Squad


[ NNSquad ] Re: [IP] a wise word from a long time network person -- Mercury News report on Stanford hearing


------- Forwarded Message

From: David Farber <dave@farber.net>
To: "ip" <ip@v2.listbox.com>
Date: Mon, 21 Apr 2008 16:42:39 -0700
Subject: [IP] Re: a wise word from a long time network person --
         Mercury News report on Stanford hearing

________________________________________
From: Tony Lauck [tlauck@madriver.com]
Sent: Saturday, April 19, 2008 1:48 PM
To: David Farber
Subject: Re: [IP] a wise word from a long time network person -- Mercury News report on Stanford hearing

There will always be the potential for congestion in *any* shared
system that is not grossly over-provisioned. This means there will
always be the possibility of congestion in any ISP's network if that
ISP has the slightest chance of running a viable business. Therefore,
and this is the part where I'm sure Brett and I agree, there will
*always* be a need to manage congestion in an ISP's network.

I have no objection to Comcast's managing its network performance. My
objection has been to the *form* of Comcast's management, namely the
forging of RST packets. I have also objected to Comcast and others
demonizing particular application protocols or network users. I
particularly object to those who criticize the Internet architecture
or the IETF without a thorough understanding of the technical issues.
I first began working in the area of network congestion management in
1977, when I became chief network architect at Digital Equipment
Corporation. In the course of my career at DEC I was instrumental in
steering a number of researchers into this area, including Raj Jain
and K.K. Ramakrishnan, and I obtained several patents of my own. At
the time I told these researchers that this could be a career field
if they wanted, not just a project.
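
(As an aside on the RST forging: an injected RST often arrives with
an IP TTL inconsistent with the rest of the flow, since it originates
at a middlebox rather than at the endpoint. Here is a minimal Python
sketch of that heuristic, using the third-party scapy library; the
capture file name and the TTL slack threshold are my own illustrative
assumptions, not anything Comcast or the IETF has specified.

    from collections import defaultdict
    from scapy.all import rdpcap, IP, TCP

    def find_suspect_rsts(pcap_path, ttl_slack=3):
        # Track the TTLs seen on each directed flow; flag RST
        # segments whose TTL deviates from the flow's typical value.
        flows = defaultdict(list)  # (src, dst, sport, dport) -> TTLs
        suspects = []
        for pkt in rdpcap(pcap_path):
            if IP not in pkt or TCP not in pkt:
                continue
            key = (pkt[IP].src, pkt[IP].dst,
                   pkt[TCP].sport, pkt[TCP].dport)
            is_rst = bool(int(pkt[TCP].flags) & 0x04)
            if is_rst and flows[key]:
                typical = max(set(flows[key]), key=flows[key].count)
                if abs(pkt[IP].ttl - typical) > ttl_slack:
                    suspects.append((key, pkt[IP].ttl, typical))
            else:
                flows[key].append(pkt[IP].ttl)
        return suspects

    for flow, rst_ttl, usual in find_suspect_rsts("capture.pcap"):
        print(f"possible forged RST on {flow}: "
              f"ttl={rst_ttl}, typical={usual}")

This is only a heuristic; NATs and route changes can shift TTLs
legitimately.)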

While many aspects of network performance have become engineering
issues, there are still others that are more properly research issues.
Because of the complexity of this area, in my opinion the FCC would be
ill-advised to promulgate regulations that affect congestion
management. On the other hand, I would have no problem with the FTC
enforcing transparent customer agreements.

With dedicated links such as DSL, congestion can be, should be, and is
managed at the access multiplexer or router. With dedicated links,
congestion appears in the form of a queue inside an intelligent
device. At this point, IETF congestion management mechanisms come into
play, and performance can be managed by queue discipline and discard
policy. However, the actual policies are not specified by the IETF,
because they are what determine "fair" access. The entire concept of
"fair" access depends on what constitutes a "user" and what
constitutes "fair" service for that user. This is something that is
determined jointly by the ISP and the customer when the customer signs
up for network service. All I ask is that these policies be something
that ordinary customers as well as network experts can understand.
This precludes policies that allow only "reasonable" usage or that
disconnect customers for "excessive" usage without defining these
terms. In addition, if usage is limited, then I would expect the ISP
to provide customers with simple tools to monitor their usage. These
could be similar to the control panel usage monitors provided by
shared web hosting companies.
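
To make "queue discipline and discard policy" concrete, here is a toy
Python sketch of a RED-style (Random Early Detection) discard policy,
one of the active queue management schemes the IETF has recommended
(RFC 2309). The thresholds are illustrative, and a real implementation
averages the queue depth with an EWMA rather than using the
instantaneous depth as this sketch does:

    import random
    from collections import deque

    class RedQueue:
        def __init__(self, min_th=5, max_th=15, max_p=0.1,
                     capacity=30):
            self.q = deque()
            self.min_th, self.max_th = min_th, max_th
            self.max_p, self.capacity = max_p, capacity

        def enqueue(self, pkt):
            depth = len(self.q)        # instantaneous, not an EWMA
            if depth >= self.capacity or depth >= self.max_th:
                return False           # forced drop above max threshold
            if depth >= self.min_th:
                # Drop probability ramps up linearly between thresholds.
                p = (self.max_p * (depth - self.min_th)
                     / (self.max_th - self.min_th))
                if random.random() < p:
                    return False       # early probabilistic drop
            self.q.append(pkt)
            return True

        def dequeue(self):
            return self.q.popleft() if self.q else None

The early drops signal TCP senders to back off before the queue
overflows; *which* flows feel those drops is precisely the "fairness"
policy question the IETF leaves to the ISP.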

As Brett correctly points out, there is at least one other potential
bottleneck or cost accumulation point, namely the ISP's backbone
access link(s). (Depending on geographic considerations, the cost of
backbone bandwidth may be more or less significant than last-mile
costs.) Routers attached to backbone access links can use queue
management disciplines to enforce per-customer fairness, or this can
be done at the access router or access multiplexer. Alternatively,
backbone access can be monitored, and users can be discouraged from
excessive usage by usage-based tariffs. All I ask is that these
charges be open and that users have a simple way to monitor their
usage.
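
One standard discipline for that kind of per-customer fairness is
deficit round robin (DRR). The Python sketch below is illustrative
only: the per-customer queues and the MTU-sized quantum are my own
assumptions, and a production router would of course not be written
this way.

    from collections import deque

    def drr_schedule(queues, quantum=1500):
        # queues: {customer: deque of (pkt, size_in_bytes)}.
        # Yields packets so each customer gets roughly equal bytes
        # per round, regardless of packet sizes.
        deficits = {c: 0 for c in queues}
        while any(queues.values()):
            for customer, q in queues.items():
                if not q:
                    continue
                deficits[customer] += quantum   # earn credit per round
                while q and q[0][1] <= deficits[customer]:
                    pkt, size = q.popleft()
                    deficits[customer] -= size  # spend credit to send
                    yield customer, pkt
                if not q:
                    deficits[customer] = 0      # empty queues forfeit credit

    queues = {
        "alice": deque([("a1", 1500), ("a2", 1500)]),
        "bob":   deque([("b1", 500), ("b2", 500), ("b3", 500)]),
    }
    for customer, pkt in drr_schedule(queues):
        print(customer, pkt)

Here "alice" and "bob" each drain at about the same byte rate even
though their packet counts differ, which is the sense of "fairness"
such a scheduler enforces.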

Brett has raised a third issue, which is that distributed uploading by
P2P networks is inefficient and uneconomic compared with more
centralized approaches. This may be true in some instances,
particularly on rural networks. However, when comparing the relative
costs of multiple approaches, it is important to consider *all* the
costs involved. These include more than the uplink costs associated
with P2P networks: they also include the cost of uploading data to
traditional web and FTP servers, the cost of running those servers,
and the cost of the bandwidth those servers use in sending files. In
some cases P2P mechanisms will be more efficient than centralized
servers. Two examples come immediately to mind: (1) a home user
"publishing" a file that is never accessed, where a centralized server
would require a totally unnecessary upload of the file; and (2) a home
user sharing an extremely popular file with many other customers of
the same ISP, where the P2P network may reduce the number of copies
downloaded over the ISP's backbone access links.
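
A back-of-the-envelope tally makes the point. All numbers below are
hypothetical, chosen only to show which transfers get counted:

    FILE_MB = 100       # size of the shared file
    LOCAL_PEERS = 50    # customers of the same ISP who want it

    # Example 1: a file that is published but never downloaded.
    central_mb = FILE_MB  # one wasted upload from home user to server
    p2p_mb = 0            # a P2P "publish" moves nothing until asked

    # Example 2: a popular file, counting only traffic that crosses
    # the ISP's backbone access links.
    central_backbone_mb = (1 + LOCAL_PEERS) * FILE_MB
    # one upload out to the server, then fifty downloads back in
    p2p_backbone_mb = 0
    # local peers exchange copies entirely inside the ISP's network

    print("unused file:", central_mb, "MB vs", p2p_mb, "MB")
    print("popular file over backbone:",
          central_backbone_mb, "MB vs", p2p_backbone_mb, "MB")

The comparison can flip when the downloaders are scattered across many
ISPs; the point is simply that honest accounting must include the
server-side uploads and transfers, not just the P2P uplink.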

I am encouraged by Comcast's newly stated intention to cooperate with
BitTorrent. There are significant economies to be realized if all the
players cooperate. Unfortunately, other factors may come into play,
for example copyright issues that may prevent ISPs from running their
own P2P caching clients.

Tony Lauck
www.aglauck.com

[snip]