Richard,
thanks for a nicely worded contribution. I am a proponent of looking at regulatory practices from a layering point of view, driven in part by a preference (belief?) that one ought to be able to limit the information needed to properly service the packet to the part exposed, e.g., at the IP layer. If the system doesn't work with the IP payload fully encrypted end-to-end, then we don't have the right information at the IP layer. So I think you and I might agree that the DPI question centers on whether the information exposed at the IP layer is adequate for the packet switching system to do a satisfactory job of serving the traffic.
There ARE "type of service" bits in the header of the IP packet format, and in IPv6 there is even a "flow ID". In all the years I have been associated with the Internet, utilization of the TOS bits has been relatively low. I think part of the reason is that ISPs have never reached agreement on what business model to associate with a particular TOS other than "best efforts."
I am not persuaded yet that looking deeper into the packet helps much if you can't even use the naturally exposed TOS bits to decide whether to treat one packet differently from another. I readily understand that there are other clues that also drive treatment decisions. For example, control traffic might naturally get high priority because such packets are needed to manage the network. That information is also visible in the IP packet format without having to look into the payload if memory serves.
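Vint's point — that the fields relevant to forwarding decisions sit in the fixed IP header, not the payload — can be illustrated in a few lines of Python. The sample header values below are invented for illustration; only the field layout comes from the IPv4 specification:

```python
import struct

def parse_ipv4_header(pkt: bytes) -> dict:
    """Decode the fixed 20-byte IPv4 header; no payload inspection needed."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, cksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", pkt[:20])
    return {
        "version": version_ihl >> 4,
        "dscp": tos >> 2,       # DiffServ code point: top 6 bits of the old TOS byte
        "ecn": tos & 0x3,       # bottom 2 bits
        "protocol": proto,      # e.g. 6 = TCP, 17 = UDP, 1 = ICMP
    }

# A hand-built sample header: version 4, IHL 5, DSCP EF (46), TTL 64, proto TCP (6).
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 46 << 2, 40, 0, 0, 64, 6, 0,
                  bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
info = parse_ipv4_header(hdr)
print(info["dscp"], info["protocol"])  # 46 6
```

Everything a switch needs for a TOS- or protocol-based treatment decision is recoverable from those 20 bytes.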
vint
Vint Cerf
Google
1818 Library Street, Suite 400
Reston, VA 20190
202-370-5637
On Feb 25, 2009, at 5:19 PM, Richard Bennett wrote:
That goes to the heart of DPI's role in the Internet apostasy. There's no reason a priori why anybody should care whether any particular packet inspection is deep or shallow, and no particular reason that packet inspection of any kind should be frightening; packets get "inspected" many times over between source and destination, and in fact the reason their headers are so nicely formatted is to facilitate inspection. The reason DPI gets people so excited is that it "violates" the layering of the Internet protocols, which can only mean one of two things: 1) the party jumping across his boundary and looking at somebody else's information is doing a bad thing; or 2) the functions necessary to run a network aren't properly partitioned by the Internet protocol stack. There's an awful lot of work being done on network protocols that suggests that 2) is correct, regardless of the propriety of 1).
The layered model isn't something that is of interest only to protocol hacks; would-be Internet regulators have been fiddling with versions of an Internet regulatory model based on layering since the early '00s. The old model for network regulation in the US is technology-based, which is why you have these Title I and Title II regulatory battles going on. Now that several technologies are capable of delivering a similar set of services, that model has to go, but what replaces it? Somehow the regulators need a compass that points toward goodness, and some hope the layered model is such a compass. This is based more on hope than experience.
The battle to impose a ban on DPI is in large part a religious contest that asserts the correctness of '70s network architecture and at the same time asserts the authority of a new generation of regulators to constrain Internet conduct. Beware of criticizing anyone's religion.
RB
George Ou wrote:
First of all, most of the higher layer stuff is trickling down into the networking devices, and/or it's being offloaded to attached or inline devices.
Second, does it even matter what layer and what device and which company handles this content inspection? Or is it only OK if some companies do this but not others?
George Ou
-----Original Message-----
From: nnsquad-bounces+george_ou=lanarchitect.net@nnsquad.org [mailto:nnsquad-bounces+george_ou=lanarchitect.net@nnsquad.org] On Behalf Of Vint Cerf
Sent: Wednesday, February 25, 2009 2:48 AM
To: george_ou@lanarchitect.net; lauren@vortex.com; richard@bennett.com
Cc: nnsquad@nnsquad.org; paul.w.forbes@gmail.com
Subject: [ NNSquad ] Re: New P2P Privacy System from Univ. of Washington
Many if not most of these are not accomplished with DPI but must be
implemented with higher layer protocols. V
[ Bingo!
-- Lauren Weinstein
NNSquad Moderator ]
----- Original Message -----
From: nnsquad-bounces+vint=google.com@nnsquad.org <nnsquad-bounces+vint=google.com@nnsquad.org>
To: 'Lauren Weinstein' <lauren@vortex.com>; 'Richard Bennett' <richard@bennett.com>
Cc: nnsquad@nnsquad.org <nnsquad@nnsquad.org>; 'Paul Forbes' <paul.w.forbes@gmail.com>
Sent: Tue Feb 24 17:15:27 2009
Subject: [ NNSquad ] Re: New P2P Privacy System from Univ. of Washington
So here we are with the debate on DPI.
DPI is good when we use it to:
* Inspect content to detect and block virus or malware signatures
* Inspect content to detect and block denial-of-service payloads
* Inspect content to detect and block spam
* Inspect content to detect replicated data for caching, so that unicast audio/video delivery scales
* Inspect explicit DiffServ labels to properly prioritize traffic
* Inspect protocol headers to infer a prioritization label in the absence of explicit priority labels
* Inspect content to offer targeted advertising to pay for free wireless broadband
* Inspect content to offer targeted advertising to pay for free cloud email, e.g., Gmail
* Inspect content to offer targeted advertising when the user explicitly agrees to terms and conditions
DPI is bad when we use it to:
* Inspect content to offer targeted advertising without disclosure or permission from the user
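The DiffServ items in the list above can be made concrete: a router can choose a forwarding class from the explicit DSCP label alone, with no payload inspection. A toy sketch in Python — the class names and the mapping are illustrative assumptions, though the code points themselves (EF = 46, AF31 = 26, default = 0) are standard DiffServ values:

```python
# Map a packet's DSCP value (six bits read straight from the IP header) to a
# forwarding class. The class names here are assumptions for illustration.
DSCP_TO_CLASS = {
    46: "expedited",    # EF: low-latency traffic such as voice
    26: "assured",      # AF31: preferred delivery
    0:  "best-effort",  # default per-hop behavior
}

def classify(dscp: int) -> str:
    """Pick a queue for a packet using only its DSCP label."""
    return DSCP_TO_CLASS.get(dscp, "best-effort")

print(classify(46))  # expedited
print(classify(99))  # best-effort (unknown labels fall back to the default)
```

The "implicit prioritization" item is the fallback case: when the label is absent or unrecognized, a device can only guess from other header fields.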
George Ou
[ I don't know whose set of good/bad values that's supposed to be.
You? Verizon? Eric Schmidt? Rush Limbaugh? Wendy Carlos?
It's certainly not mine.
-- Lauren Weinstein
NNSquad Moderator ]
-----Original Message-----
From: nnsquad-bounces+george_ou=lanarchitect.net@nnsquad.org
[mailto:nnsquad-bounces+george_ou=lanarchitect.net@nnsquad.org] On Behalf Of
Lauren Weinstein
Sent: Tuesday, February 24, 2009 6:02 PM
To: Richard Bennett
Cc: nnsquad@nnsquad.org; Paul Forbes
Subject: [ NNSquad ] Re: New P2P Privacy System from Univ. of Washington
On 02/24 14:15, Richard Bennett wrote:
I think there's a big difference between technologies that can be
"abused by evil people" and those that are *meant to be used by
criminals in the commission of crime*. There's no legitimate reason to
mask the identities of the members of a P2P swarm in any free and
democratic country, and no chance of doing so anywhere else.
This is a remarkable statement. I assume Richard means it in the
context of illicit use -- but of course what is meant by "illicit"
varies widely. In some countries, negative comments about the
leadership can get you thrown into a dungeon or your brains blown out
by government edict, courtesy of a bullet to the base of your skull.
But we know that not all use of P2P is illicit by most definitions,
and the percentage of illicit material in overall P2P usage is
(according to the figures I've seen) dropping at a significant rate.
For those of us who still believe in the Fourth Amendment to the U.S.
Constitution (obviously an ever smaller cult), the concept of
anonymity is important, since around that revolves much of the entire
problem of unreasonable search and seizure, and the related protection
of legal activities.
This Washington U. stuff is just garbage.
I'm sure the U. of W. team appreciates your respectful technical
analysis of their efforts, Richard.
Are we doomed to transmitting a unique copy of
the entire packet stream of each episode of "American Idol" to each of
its 50 million viewers, or can we relax the layering dogma enough to
cache copies of the stream close to the end user?
Bullpucky. That's a completely specious argument, and you know your
technology well enough to realize that. So do most people reading this
list, I'll wager.
Solving this problem will require some awareness of the content by the
delivery system, and that's not a bad thing, is it? According to
neutralist dogma, it's the Original Sin. So the choice appears to be
this: efficient networks or neutral networks, pick only one.
No, the real choice is being honest about technological realities, vs.
pseudo-political spins leading us toward technology's inner circle
of hell.
There are a multitude of topologies that would well serve mass
distribution of media content over the Internet that could be deployed
without creating the kind of anticompetitive, inappropriately skewed and
limited frameworks that have become the center of the current
neutrality debate. The question is whether or not these topologies
can be economically and effectively deployed given the existing
warped, largely unregulated Internet telecom landscape that we must
build upon to move forward.
--Lauren--
NNSquad Moderator
--
Richard Bennett