
[ NNSquad ] Protocol transparency vs performance


I think people are confusing two very distinct issues.

The first issue, the one that is most properly referred to as "network
neutrality" (or "non-neutrality"), is the transparency (or lack of transparency)
of the network to different applications. Comcast's selective attack on
BitTorrent is a classic example of such a discriminatory attack on a particular
application protocol. The DSL provider that blocked a competing VoIP service
some years ago (and got slapped down by the FCC) is another example.

The second issue, quite distinct from the first, involves traffic limits,
latency, quality of service, and charging rates and policies, all independent
of traffic content and applications. The first issue concerns the meaning and
destination of the bits that I send and receive; the second concerns the
*quantity* of bits that I send and receive. Even though certain application
protocols are often associated with heavy resource usage, these are still
distinct topics.

I can think of no legitimate reason for a carrier to be permitted to be
non-neutral in the first sense. They should not be allowed to prefer one
application over another, or to block an application entirely. (Remember, I'm
talking only about the meaning of the bits being sent, not their quantity, which
is a separate issue.)

Yes, there are issues related to illegal uses of the network, but these really
aren't the ISP's concern.

The sole exception is when a user has specifically asked an ISP to filter
certain application ports for security and/or denial-of-service protection. For
example, a Microsoft Windows user might ask their ISP to block downstream ports
136-139 and 445. Absent such an explicit request and permission from the user,
an ISP should not filter *anything*.
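
To make the opt-in nature of this concrete, here is a minimal Python sketch of
such a filter. The names and port list are purely illustrative; the point is
that nothing is dropped unless this particular subscriber asked for it:

    # Hypothetical opt-in port filter: drop inbound traffic on a port only
    # if this subscriber explicitly requested the block.
    REQUESTED_BLOCKS = {136, 137, 138, 139, 445}  # Windows file-sharing ports

    def should_drop(dst_port: int, user_opted_in: bool) -> bool:
        """The filter applies only to users who asked for it."""
        return user_opted_in and dst_port in REQUESTED_BLOCKS

    assert should_drop(445, user_opted_in=True)       # user asked: filtered
    assert not should_drop(445, user_opted_in=False)  # no request: untouched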

Fortunately, the users have a very strong defensive weapon in their arsenal:
encryption. If the users' data is completely opaque, then carriers cannot tell
what application is in use (except by its traffic patterns, which again is a
separate issue). If they can't tell what application is in use, then they cannot
selectively block it.
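
A toy illustration (a random stand-in keystream, not real cryptographic
machinery): the plaintext BitTorrent handshake opens with a fixed 20-byte
signature that deep-packet-inspection gear can match, while the same bytes
under encryption are indistinguishable from noise:

    import os

    # A BitTorrent handshake: length byte, protocol string, then reserved
    # bytes, info_hash and peer_id (48 bytes in all after the string).
    handshake = b"\x13BitTorrent protocol" + os.urandom(48)

    def dpi_match(payload: bytes) -> bool:
        """How a DPI box might fingerprint the protocol by its payload."""
        return payload.startswith(b"\x13BitTorrent protocol")

    keystream = os.urandom(len(handshake))  # stand-in for a real cipher
    ciphertext = bytes(a ^ b for a, b in zip(handshake, keystream))

    assert dpi_match(handshake)       # plaintext is trivially fingerprinted
    assert not dpi_match(ciphertext)  # opaque bytes defeat payload matching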

The ideal mechanism is IPsec, as opposed to transport-layer encryption, e.g.,
SSL or TLS, or application-layer encryption, e.g., PGP or S/MIME. IPsec
encrypts everything above the IP layer, including the transport and application
headers. There can even be a second IP header, as in a tunnel, that is also
encrypted.
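
For anyone who wants to see the difference, here is a rough sketch using scapy
(whose IPsec support requires the third-party "cryptography" package); the
addresses, SPIs, and keys are arbitrary placeholder values. It contrasts what
stays visible in transport mode versus tunnel mode:

    from scapy.all import IP, TCP
    from scapy.layers.ipsec import ESP, SecurityAssociation

    plain = IP(src="10.0.0.1", dst="10.0.0.2") / TCP(dport=6881) / b"data"

    # Transport mode: TCP header and payload are encrypted; the outer IP
    # header and the ESP header (SPI, sequence number) remain visible.
    transport_sa = SecurityAssociation(ESP, spi=0x1234,
                                       crypt_algo="AES-CBC",
                                       crypt_key=b"sixteen byte key")
    print(transport_sa.encrypt(plain).summary())

    # Tunnel mode: the entire original packet, inner IP header included,
    # is encrypted behind a new outer IP header.
    tunnel_sa = SecurityAssociation(ESP, spi=0x5678,
                                    crypt_algo="AES-CBC",
                                    crypt_key=b"sixteen byte key",
                                    tunnel_header=IP(src="192.0.2.1",
                                                     dst="192.0.2.2"))
    print(tunnel_sa.encrypt(plain).summary())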

The per-packet authentication feature built into IPsec ESP (the Encapsulating
Security Payload) is specifically designed to block the kinds of forged packets
that Comcast's Sandvine boxes generate to sabotage BitTorrent transfers. This
is far preferable to violating the TCP protocol at the endpoints by ignoring
TCP resets. Some BitTorrent users have already noticed that the reset problem
goes away when they run over a VPN. This is why.
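
The principle is easy to demonstrate in miniature. This standard-library Python
toy (not real ESP, which uses a truncated MAC over the ESP payload)
authenticates each packet with a key the carrier never sees, so an injected
forgery fails verification and is discarded before TCP ever sees it:

    import hashlib
    import hmac
    import os

    KEY = os.urandom(32)  # shared only by the two endpoints

    def seal(packet: bytes) -> bytes:
        """Append an authentication tag, as ESP does per packet."""
        return packet + hmac.new(KEY, packet, hashlib.sha256).digest()

    def unseal(sealed: bytes) -> bytes | None:
        """Return the packet if the tag verifies; drop forgeries."""
        packet, tag = sealed[:-32], sealed[-32:]
        expected = hmac.new(KEY, packet, hashlib.sha256).digest()
        return packet if hmac.compare_digest(tag, expected) else None

    genuine = seal(b"TCP segment")
    forged = b"forged RST" + os.urandom(32)  # no key, so no valid tag
    assert unseal(genuine) == b"TCP segment"
    assert unseal(forged) is None  # the injected reset never reaches TCP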

If a carrier were to try to filter IPsec running directly above IP on its
standard IP protocol numbers, 50 (ESP) and 51 (AH), it would break VPNs, and
corporate telecommuters would scream. Users could switch to running IPsec above
UDP on arbitrary port numbers, as they already do to get through NATs.
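
Continuing the scapy sketch above (same caveats about placeholder values), UDP
encapsulation for NAT traversal, conventionally on port 4500, wraps the very
same ESP packet in an ordinary UDP header, so a filter watching for IP protocol
50 never sees it:

    from scapy.all import IP, TCP, UDP
    from scapy.layers.ipsec import ESP, SecurityAssociation

    natt_sa = SecurityAssociation(ESP, spi=0x9abc,
                                  crypt_algo="AES-CBC",
                                  crypt_key=b"sixteen byte key",
                                  nat_t_header=UDP(sport=4500, dport=4500))
    pkt = IP(src="10.0.0.1", dst="10.0.0.2") / TCP(dport=6881) / b"data"
    print(natt_sa.encrypt(pkt).summary())  # shows IP / UDP / ESP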

IPsec could also run on TCP port 443 (ordinarily SSL-encrypted HTTP). If this
were blocked, secure web traffic and e-commerce would grind to a halt, and
Amazon, eBay and all the other e-tailers would scream. Since we all know that
the only remaining legitimate use of the Internet is to buy stuff with credit
cards, this would be utterly unthinkable.

This leaves issue #2: traffic volume and latency. Here the carriers do have
legitimate concerns. While I agree that the best long-term fix is to provide
lots of excess capacity, that does not seem likely in the short term, so some
form of well-engineered QoS seems necessary. I have already described what I
believe is a good, multi-tiered QoS scheme that would make both the P2P users
and the ISPs happy; a toy version of the underlying idea appears below.
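
I won't repeat that scheme here, but the flavor of the idea (schedule by
volume class, never inspect content) fits in a few lines of Python. This
strict-priority toy, my illustration rather than the full scheme, serves
latency-sensitive traffic first and lets bulk traffic soak up all remaining
capacity:

    from collections import deque

    interactive: deque[bytes] = deque()
    bulk: deque[bytes] = deque()

    def enqueue(packet: bytes, latency_sensitive: bool) -> None:
        """Classify by the user's chosen tier, not by packet contents."""
        (interactive if latency_sensitive else bulk).append(packet)

    def dequeue() -> bytes | None:
        """Strict priority: interactive first, bulk gets the leftovers."""
        if interactive:
            return interactive.popleft()
        if bulk:
            return bulk.popleft()
        return None

    enqueue(b"VoIP frame", latency_sensitive=True)
    enqueue(b"BitTorrent piece", latency_sensitive=False)
    assert dequeue() == b"VoIP frame"        # low latency wins contention
    assert dequeue() == b"BitTorrent piece"  # bulk drains unused capacity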

Ultimately, however, the capacity problem is a symptom of the lack of
meaningful competition in the local broadband market. Allowing the same
monopolies both to own the local transmission facilities and to provide retail
services over them was a major policy blunder. The only way out is to separate
transmission, which should be either government owned or privately owned and
regulated as a common carrier, from the provision of unregulated, competitive,
commercial retail services. We were well on our way to this approach in the
late 1970s and early 1980s, when the "Reagan Revolution" happened and it all
collapsed.

By analogy, roads and highways are generally built by governments, which open
them to the public. Commercial entities then use them to provide competitive
transportation services, and they reimburse the government through fuel,
registration, and road-use taxes. Even though there is only one set of roads,
there are many commercial trucking and taxi companies to choose from.

And so it should be with local broadband. Ideally, a municipality could build a
dark fiber network and make it available to any commercial service provider
willing to pay the standard tariffed rates. If you didn't like one provider,
there would be others. The providers' payments to the municipality would pay
off the bonds that originally funded construction of the fiber network and
would cover its maintenance, and the municipality would stay out of the retail
service market. Again, my model is the existing road network, which most
everyone agrees has worked well.