NNSquad - Network Neutrality Squad

[ NNSquad ] Re: [IP] Re: a wise word from a long time network person -- Mercurynews report on Stanford hearing



On Apr 22, 2008, at 5:30 PM, Brett Glass wrote:

At 05:42 PM 4/21/2008, Tony Lauck wrote:

There will always be the potential for congestion in *any* shared system
that is not grossly over configured. This means there will always be the
possibility for congestion in any ISP's network if that ISP has the
slightest chance of running a viable business. Therefore, and this is
the part where I'm sure Brett and I agree, there will *always* be the
necessity to manage congestion in an ISP's network.

Yes, I do agree.

I have no objection to Comcast's managing its network performance. My
objection has been to the *form* of Comcast's management, namely the
forging of RST packets.

My objection has been to the use of the pejorative term "forging" or "forgery." An RST packet is a perfectly good and legitimate way of informing the ends of a TCP socket that it is being terminated.


Um, no, an RST is a legitimate method of *one endpoint* of a session informing *the other endpoint* that the state-machine is out of whack -- probably because the sequence numbers are not correct (the general form of which is not having any session at all).


The source address field of an IP packet is where the IP address of the sending machine goes -- it can be thought of as the signature line in a letter, the from address on an envelope, etc. Putting anything other than an IP address that belongs to one of your own interfaces there is falsely claiming to be someone that you are not... What would you prefer that packets falsely claiming to be from a device that they are not be called? "Spoofed" comes close, but does not really cover the intent of the sending machine, which is to trick the receiving machine into believing that the packet came from someone else...
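
For concreteness, a minimal sketch of the difference using Scapy (the addresses, ports, and sequence number below are made up, drawn from the documentation ranges): a legitimate RST carries the source address of the endpoint that is actually tearing the session down, while the kind of RST being objected to carries an address that belongs to neither the injecting device nor any of its interfaces.

  from scapy.all import IP, TCP, send

  # Hypothetical endpoints of a TCP session (documentation-range addresses).
  subscriber = "198.51.100.10"
  peer       = "203.0.113.20"

  # An intermediate box is not 'peer', but it writes peer's address into the
  # IP source field so that the subscriber believes the peer itself reset the
  # connection -- that is the sense in which the RST is forged.
  forged_rst = (IP(src=peer, dst=subscriber) /
                TCP(sport=6881, dport=51413, flags="R", seq=123456789))

  send(forged_rst, verbose=False)   # needs raw-socket privileges to transmit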

To understand why, think about what would happen if the socket were
merely blocked by firewalling.

If the socket were blocked by a firewall, there is a perfectly good, RFC-defined method of informing the sending machine -- in fact there are a bunch:


ICMP Type 3 (Destination Unreachable)
+-- Code 0 (Net Unreachable) [RFC792]
+-- Code 1 (Host Unreachable) [RFC792]
+-- Code 2 (Protocol Unreachable) [RFC792]
+-- Code 3 (Port Unreachable) [RFC792]
+-- Code 9 (Communication with Destination Network is Administratively Prohibited) [RFC792]
+-- Code 10 (Communication with Destination Host is Administratively Prohibited) [RFC792]
+-- Code 13 (Communication Administratively Prohibited) [RFC1812]


You choose the correct ICMP code depending on the situation, and send it *from your own IP*, no spoofing necessary...
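
A rough sketch of what that looks like, again in Scapy with made-up documentation-range addresses: the filtering device answers in its own name with ICMP Type 3, Code 13, quoting the offending packet as RFC 792 requires.

  from scapy.all import IP, ICMP, TCP, send

  firewall_ip = "192.0.2.1"                       # the filtering device itself
  offender = (IP(src="198.51.100.10", dst="203.0.113.20") /
              TCP(sport=51413, dport=6881, flags="S"))

  # Destination Unreachable / Communication Administratively Prohibited,
  # sent *from the firewall's own address* -- no spoofing involved.
  reject = (IP(src=firewall_ip, dst=offender[IP].src) /
            ICMP(type=3, code=13) /
            bytes(offender)[:28])                  # original IP header + 8 bytes

  send(reject, verbose=False)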

The two sides would retry... and retry...
and retry before giving up. And by doing so, they'd congest the
network -- defeating the very purpose of terminating the socket.

Only if you are working from the assumption that the termination of a session by an intermediate device due to congestion is in some way reasonable.


If there *is* some legitimate reason for an intermediate device to terminate an existing session (e.g., someone suddenly applies an ACL), it should send the appropriate ICMP messages and stop forwarding the packets -- yes, the endpoints may continue retrying for a bit, but the amount of retried traffic is tiny compared to normal traffic...

W

    [ Just to save some time, I'll note here that proponents of RST
      manipulation/forging by ISPs routinely argue that (in their
      opinions) ICMP is too often blocked to be generally useful in
      these situations.

             -- Lauren Weinstein
                NNSquad Moderator ]

 - - -

RST
packets, on the other hand, inform the two sides that the socket has
been terminated and there is no point in continuing to retry. Fast,
efficient, and actually better for the ends (in terms of resource
consumption) than the alternative.


I have also objected to Comcast and others
demonizing particular applications, protocols, or network users.

Again, the pejorative term "demonizing."

While it is possible to block rogue applications without knowing what
they are, it only makes sense to apply knowledge of those applications'
characteristics and behavior if one has that knowledge. Just as a
virus checker has "patterns" that can help it identify and remove
an undesirable application from the user's computer, a bandwidth
management appliance can and should be able to identify an application
that is hogging bandwidth. In fact, it's better, because if the goal is merely to throttle the application back rather than stop it cold, that knowledge makes it possible. Knowledge always helps. In fact, had the bandwidth-limiting appliance used by Comcast had greater knowledge of protocols and done more careful identification of applications, there would not have been a problem with Lotus Notes on its networks -- a problem for which it was harshly criticized.
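
To illustrate the general idea (not any particular vendor's appliance): a toy classifier keyed on the well-known BitTorrent handshake prefix, feeding a token bucket so that matching traffic is shaped rather than reset. The signature is real; the rate numbers are arbitrary.

  import time

  BT_HANDSHAKE = b"\x13BitTorrent protocol"     # standard BitTorrent handshake prefix

  class TokenBucket:
      """Allow roughly `rate` bytes/sec, with bursts up to `burst` bytes."""
      def __init__(self, rate, burst):
          self.rate, self.burst = rate, burst
          self.tokens, self.last = burst, time.monotonic()

      def allow(self, nbytes):
          now = time.monotonic()
          self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
          self.last = now
          if self.tokens >= nbytes:
              self.tokens -= nbytes
              return True
          return False                          # over the cap: queue or delay, don't reset

  throttle = TokenBucket(rate=64_000, burst=128_000)   # illustrative policy only

  def forward(payload: bytes) -> bool:
      """Return True if the payload should be sent on now."""
      if payload.startswith(BT_HANDSHAKE):
          return throttle.allow(len(payload))   # identified traffic is shaped, not killed
      return True                               # everything else passes untouched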


I particularly object to those who criticize the Internet Architecture

I note the capital letters here, as if there were some edict from on high that was infallible or perfect.

or IETF without a thorough understanding of the technical issues.

In what way do those critics fail to understand the technical issues?

While many aspects of network performance have become engineering
issues, there are still others that are more properly research issues.
Because of the complexity of this area, in my opinion the FCC would be
ill advised to promulgate regulations that affect congestion management.
On the other hand, I would have no problem with the FTC enforcing
transparent customer agreements.

On this, we agree.

With dedicated links such as DSL, congestion can be, should be, and is managed
at the access multiplexer or router.

This may not be sufficient. Congestion may occur elsewhere in the network.


With dedicated links, congestion
appears in the form of a queue inside an intelligent device. At this
point, IETF congestion management mechanisms come into play,

There is only one widely implemented "IETF" congestion management mechanism, alas. And it is one that operates at the ends.
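
That mechanism is, presumably, TCP's own end-to-end congestion control (RFC 2581): the sender infers congestion from loss and adjusts its window, roughly as in this caricature (not any real stack's code):

  MSS = 1460    # bytes; a typical Ethernet-sized segment

  def aimd_step(cwnd, ssthresh, loss_detected):
      """One round trip's worth of TCP-style window adjustment (schematic)."""
      if loss_detected:
          ssthresh = max(cwnd / 2, 2 * MSS)     # multiplicative decrease
          cwnd = ssthresh
      elif cwnd < ssthresh:
          cwnd = cwnd * 2                       # slow start: exponential growth
      else:
          cwnd = cwnd + MSS                     # congestion avoidance: additive increase
      return cwnd, ssthresh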

The entire concept of "fair" access
depends on what constitutes a "user" and what constitutes "fair" service
for that user. This is something that is determined jointly by the ISP
and the customer when a customer signs up for network service.

On this we also agree. We tell users that their terms of service on a
residential connection include a prohibition against P2P or the operation
of servers.


All I ask
is that these policies be something that ordinary customers as well as
network experts can understand.

Unfortunately, it is often the application providers who prevent them from
understanding it. When a user installs the "downloading" software that lets
him or her access content, he or she may not be properly informed that the
software turns the machine into a server -- consuming its resources and
violating the user's contract with the ISP.


As Brett correctly points out, there is at least one other potential
bottleneck or cost accumulation point, namely the ISP backbone access
link(s). (Depending on geographic considerations, the cost of backbone
bandwidth may be more or less significant than last mile costs.) Routers
attached to backbone access links can use queue management disciplines to enforce per-customer fairness, or this can be done at the access router or access multiplexer. Alternatively, backbone access can be monitored and users can be discouraged from excessive usage by usage-based tariffs.
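
A sketch of the sort of per-customer queue discipline being described -- deficit round robin over one queue per customer, so that no single subscriber can monopolise a congested backbone access link. Purely illustrative; real routers do this in hardware or firmware.

  from collections import deque

  class DRRScheduler:
      def __init__(self, quantum=1500):
          self.quantum = quantum        # bytes of credit each customer earns per round
          self.queues = {}              # customer id -> deque of queued packet sizes
          self.deficit = {}             # customer id -> accumulated byte credit

      def enqueue(self, customer, size):
          self.queues.setdefault(customer, deque()).append(size)
          self.deficit.setdefault(customer, 0)

      def dequeue_round(self):
          """One round of service; returns the (customer, size) pairs sent."""
          sent = []
          for cust, q in self.queues.items():
              if not q:
                  continue
              self.deficit[cust] += self.quantum
              while q and q[0] <= self.deficit[cust]:
                  size = q.popleft()
                  self.deficit[cust] -= size
                  sent.append((cust, size))
              if not q:
                  self.deficit[cust] = 0    # an emptied queue forfeits leftover credit
          return sent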

As I stated in my remarks to the FCC:

Some parties claim that we should meter all connections by the bit. But this would be bad for consumers for several reasons. Firstly, users tell us overwhelmingly that they want charges to be predictable. They don't want to worry about the meter running or about overage charges -- one of the biggest causes of consumer complaints against cell phone companies. Secondly, users aren't always in control of the number of bits they download. Should a user pay more because Microsoft decides to release a 2 gigabyte service pack for Windows Vista? Or because Intuit updates Quicken or Quickbooks? Or because a big virus checker update comes in automatically overnight? We don't think so. And we don't need to charge them more, so long as they are using their bandwidth just for themselves. It's when third parties get hold of their machines, and turn them into resource-consuming servers on our network without compensating us for those resources, that there's a problem. Thirdly, charging by the bit doesn't say anything about the quality of the service. You can offer a very low cost per bit on a connection that's very unsteady and is therefore unsuitable for many things users want to do -- such as voice over IP. And finally, a requirement to charge by the bit could spark a price war. You can just imagine the ads from the telephone company: $1 per gigabyte. And then the ads from the cable company: 90 cents per gigabyte. And then one or the other will start quoting in "gigabits" to make its price look lower, and so on and so forth. All Internet providers will compete on the basis of one number, even though there's much more to Internet service than that.
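
The "gigabits" trick is just a units game -- dividing by eight makes the same price look almost an order of magnitude smaller:

  price_per_gigabyte = 0.90                        # dollars, the figure from the ad above
  price_per_gigabit = price_per_gigabyte / 8       # 1 byte = 8 bits
  print(f"${price_per_gigabit:.4f} per gigabit")   # $0.1125 -- same price, smaller number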


The problem is, small ISPs cannot win or even compete in this price war, especially when -- as is true in most places -- the monopolies backhaul their connections to the Internet and thus control their prices. Again, we wind up with duopoly.

All I ask is that these charges be open and that the users have a simple way to monitor their usage.

Interestingly, when Rogers Cable attempted to do just this -- to warn users of impending overage charges by placing messages in their browser windows -- the "Network Neutrality Squad" jumped on them for "tampering" with Web pages.


Brett has raised a third issue, which is that distributed uploading by
P2P networks is inefficient and uneconomic compared with more
centralized approaches. This may be true in some instances, particularly
with rural networks.

It is true in general. The network overhead is always greater, and bandwidth at any "end" is always more expensive than it is at a co-location site on the backbone.
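
A back-of-the-envelope version of that claim, with frankly made-up unit prices just to show where the asymmetry is argued to come from; whichever way it actually comes out depends entirely on the per-gigabyte numbers you plug in:

  file_gb   = 1.0      # size of the file being distributed (hypothetical)
  downloads = 100      # number of recipients (hypothetical)
  overhead  = 1.10     # assumed 10% P2P protocol overhead (hypothetical)

  colo_cost_per_gb = 0.10   # hypothetical transit price at a co-location site
  edge_cost_per_gb = 0.50   # hypothetical cost of last-mile upstream bandwidth

  central = file_gb * downloads * colo_cost_per_gb
  p2p     = file_gb * downloads * overhead * edge_cost_per_gb

  print(f"central server: ${central:.2f}   p2p (upload from the edges): ${p2p:.2f}")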


However, when looking at the relative costs of
multiple approaches it is important to consider *all* the costs
involved. These include more than the uplink costs associated with P2P
networks. They include the costs associated with uploading data to
traditional web and ftp servers, the costs of running these servers and
the costs of bandwidth these servers use in sending files.

All of these costs are lower than for P2P.

In some cases
P2P mechanisms will be more efficient than centralized servers. Two
examples come immediately to mind: (1) A home user "publishing" a file
that is never accessed.

This is a waste no matter what. But it is likely to be rare, and it is a tiny waste compared to the huge amounts of waste caused by P2P.


--Brett Glass


--
There were such things as dwarf gods. Dwarfs were not a naturally religious species, but in a world where pit props could crack without warning and pockets of fire damp could suddenly explode they'd seen the need for gods as the sort of supernatural equivalent of a hard hat. Besides, when you hit your thumb with an eight-pound hammer it's nice to be able to blaspheme. It takes a very special and strong-minded kind of atheist to jump up and down with their hand clasped under their other armpit and shout, "Oh, random-fluctuations-in-the-space-time-continuum!" or "Aaargh, primitive-and-outmoded-concept on a crutch!"
-- Terry Pratchett