NNSquad - Network Neutrality Squad
[ NNSquad ] Re: New P2P Privacy System from Univ. of Washington
- To: NNSquad <nnsquad@nnsquad.org>
- Subject: [ NNSquad ] Re: New P2P Privacy System from Univ. of Washington
- From: Barry Gold <bgold@matrix-consultants.com>
- Date: Tue, 24 Feb 2009 15:33:50 -0800
Richard Bennett wrote:
> I think there's a big difference between technologies that can be
> "abused by evil people" and those that are *meant to be used by
> criminals in the commission of crime*. There's no legitimate reason to
> mask the identities of the members of a P2P swarm in any free and
> democratic country,
Ah. So there was no legitimate reason for the existence of
anon.penet.fi, or any of the subsequent remailers that have been created
all over the world? Or for the various sites -- both free and paid --
that will mask your IP address while you surf the web?
To me, this smacks just a little too much of the "honest people have
nothing to hide, so why are you objecting if we want to search your home
for contraband?"
Richard Bennett also wrote:
> Jacobson raises one of the interesting challenges for neutralists, to
> wit: if you insist that protocols remain blissfully unaware of payload,
> how, pray tell, do you deal with the challenge of popular and repetitive
> content? In the not-too-distant future, TV delivery will shift almost
> entirely to the Internet. Are we doomed to transmitting a unique copy of
> the entire packet stream of each episode of "American Idol" to each of
> its 50 million viewers, or can we relax the layering dogma enough to
> cache copies of the stream close to the end user?
> Solving this problem will require some awareness of the content by the
> delivery system, and that's not a bad thing, is it? According to
> neutralist dogma, it's the Original Sin. So the choice appears to be
> this: efficient networks or neutral networks, pick only one.
I guess it depends on how you define Network Neutrality -- the central
question that this list is about.
But Bennett does have a point. If it is illegal (or even immoral or
fattening) for ISPs to look any further than the IP header when handling
packets, then a lot of strategies like caching cannot be implemented.(*)
This is why I have abandoned total opposition to DPI. Obviously, DPI
can be beneficial. It is only when it is used to the detriment of the
user that I object to it.
I suppose, really, we should add a bit to the header saying whether
intermediaries are allowed to look inside the packet. But given how
long it is taking merely to make the jump from IPv4 to IPv6, I don't see
that happening any time soon.
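Just to make the idea concrete, here is a rough sketch of what such a
bit might look like. It borrows the currently reserved high bit of the
IPv4 flags field as a hypothetical "intermediaries may inspect" marker --
to be clear, no standard assigns this bit; the choice is purely
illustrative:

```python
import struct

# Hypothetical sketch: mark a packet as "intermediaries may look inside"
# using the (currently reserved) high bit of the IPv4 flags field.
# No standard defines this bit; the assignment is illustrative only.

DPI_OK = 0x8000  # high bit of the 16-bit flags/fragment-offset word

def build_ipv4_header(src, dst, dpi_ok=False, ttl=64, proto=6):
    """Pack a minimal 20-byte IPv4 header (checksum left zero for brevity)."""
    ver_ihl = (4 << 4) | 5              # version 4, header length 5 words
    flags_frag = DPI_OK if dpi_ok else 0
    return struct.pack('!BBHHHBBH4s4s',
                       ver_ihl, 0, 20, 0, flags_frag,
                       ttl, proto, 0,
                       bytes(src), bytes(dst))

def may_inspect(header):
    """Return True if the hypothetical DPI-permitted bit is set."""
    flags_frag = struct.unpack('!H', header[6:8])[0]
    return bool(flags_frag & DPI_OK)

hdr = build_ipv4_header([10, 0, 0, 1], [10, 0, 0, 2], dpi_ok=True)
print(may_inspect(hdr))   # True
```

A router or cache could test this one flag without parsing any payload,
which is exactly the property such a header bit would need.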
(*) I should note that caching has the interesting property that
_nobody_ is harmed. The person trying to fetch the cached page gets
better response because it is delivered from a (topologically) nearby
host instead of the originating host many hops further away. And other
users of the same node or ISP also benefit, because valuable Tier 1
links carry non-repetitive traffic instead of being tied up with
multiple copies of the same image or video.
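The backbone savings can be put in rough numbers using Bennett's
"American Idol" figure. The stream rate and the number of edge caches
below are assumptions I've picked purely for illustration:

```python
# Back-of-the-envelope numbers for the "American Idol" example,
# under assumed figures: a 5 Mbit/s stream, 50 million viewers,
# and 10,000 hypothetical caches sitting near end users.

viewers = 50_000_000
stream_mbps = 5
edge_caches = 10_000

no_cache_backbone = viewers * stream_mbps      # every viewer pulls from origin
cached_backbone = edge_caches * stream_mbps    # one copy per edge cache

print(f"{no_cache_backbone / 1e6:.0f} Tbit/s without caching")
print(f"{cached_backbone / 1e6:.2f} Tbit/s with edge caching")
```

Even if the assumed numbers are off by an order of magnitude either way,
the ratio -- viewers per cache -- is what matters, and it is enormous.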
This assumes, of course, that the ISP caches _only_ things that are
truly repetitive, correctly respecting markers for when the data need to
be re-fetched from the remote host.
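The freshness rule I have in mind can be sketched in a few lines. The
field names follow HTTP's Cache-Control convention (max-age, no-store),
though the class itself is just an illustration, not any real cache's
implementation:

```python
import time

# Sketch of the freshness rule above: serve from cache only while the
# origin's own markers say the copy is still valid; otherwise re-fetch.

class CacheEntry:
    def __init__(self, body, max_age, no_store=False):
        self.body = body
        self.fetched_at = time.time()
        self.max_age = max_age        # seconds the origin allows reuse
        self.no_store = no_store      # origin forbids caching entirely

    def is_fresh(self, now=None):
        """True while the entry may be served without contacting the origin."""
        if self.no_store:
            return False
        now = time.time() if now is None else now
        return (now - self.fetched_at) < self.max_age

entry = CacheEntry(b"<html>...</html>", max_age=300)
print(entry.is_fresh())                             # True just after fetch
print(entry.is_fresh(now=entry.fetched_at + 600))   # False: must re-fetch
```

An ISP cache that honors these markers gives the origin the final word
on what counts as "truly repetitive," which is the safeguard the
paragraph above calls for.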