NNSquad - Network Neutrality Squad

[ NNSquad ] Re: New P2P Privacy System from Univ. of Washington


Ou engages in egregious misrepresentation of the technical matters he raises. I question his expertise and credentials based on this performance. I suspect his goal is FUD, since he lists a collection of implausible, infeasible, and lunatic hypotheticals to shift the argument to one about "values," where he can talk about good "ends" that would justify a slippery slope toward bad means.

Technical comments on the achievability of each hypothesized "motivation" are interleaved below. Essentially, Vint Cerf is right. If you want a memorable example: DPI can't stop pictures of naked children being consumed by predators either. It doesn't matter if the cause is good if the means don't achieve the goal within acceptable policy and economic bounds! Invoking bad things as reasons for a technically unsound and infeasible remedy sounds like a marketing campaign in the drug industry: "Got spam? Buy DPI! It cures all your ills, just trust George Ou."
George Ou wrote:
DPI is good when we use it to:
* Inspect content to detect and block virus or malware signatures
*Blocking* is not technically feasible, contrary to claims. To block such content, one would have to buffer many sequential packets in a flow, holding all of them, and deliver packets only when a unit of application data (such as an email) is complete. Programs like driftnet and EtherPeek can sometimes reconstruct such content passively, but a DPI-based system that held TCP packets *while buffering them* would so seriously disrupt TCP flow control that the end systems would experience serious disruption. As Vint Cerf said, this can only be done by higher-level protocols at the endpoints.

Detection of content that spans multiple packets is barely possible: it requires massive history and stream reassembly, and a packet-inspecting appliance in the middle of the network is a massively ineffective place to do it.
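To make the reassembly burden concrete, here is a minimal Python sketch (the "signature" and the segment split are fabricated for illustration). A signature that happens to straddle a TCP segment boundary is invisible to per-packet matching; finding it requires buffering and reassembling the stream, which is exactly what stalls the flow:

    # Toy signature; real scanners match thousands of patterns.
    SIGNATURE = b"EICAR-VIRUS-MARKER"

    # The same payload split across two TCP segments, as TCP may do.
    segments = [b"...prefix...EICAR-VIR", b"US-MARKER...suffix..."]

    # Naive per-packet DPI: the signature never appears in one packet.
    print(any(SIGNATURE in seg for seg in segments))   # False

    # Detection requires holding packets and reassembling the stream,
    # which is what disrupts TCP flow control in a middlebox.
    print(SIGNATURE in b"".join(segments))             # True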
* Inspect content to detect and block denial of service payloads
Denial of service typically involves large numbers of packets, not specific payloads. Packet counting based on destination is not "deep packet inspection"; it is inspecting IP headers, which routers can do because the headers are outside the envelope.

Payloads that cause denial of service are tied very closely to applications. Servers that are conservative about what they accept as requests (checking length fields and the validity of fields) control their own exposure.

Conclusion: blocking infeasible, detection possible but in the wrong place.
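By contrast, the kind of flood detection that is feasible needs only the envelope. A toy Python sketch (the packet tuples and threshold are made up; real systems count rates over time windows):

    from collections import Counter

    # (dst_ip, length) pairs read from IP headers only; payloads untouched.
    packets = [("10.0.0.5", 64), ("10.0.0.5", 64),
               ("192.0.2.9", 1200), ("10.0.0.5", 64)]

    THRESHOLD = 3  # illustrative; real thresholds are rates, not counts

    counts = Counter(dst for dst, _ in packets)
    print([dst for dst, n in counts.items() if n >= THRESHOLD])
    # ['10.0.0.5'] - flagged without inspecting any payload byte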
* Inspect content to detect and block spam
Analysis is the same as for viruses above, plus the important issue that spam is marketing material, and the recipient may not want a censorious ISP deciding that email marketing of perfectly legal things is "bad." Let recipients route their mail through a spam filter, where filtering is less costly than at the packet level and can be chosen by each user according to his or her own reading of the standard spam definition: bulk, unwanted, commercial email.
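As a toy illustration of recipient-chosen filtering (Python; the rules and messages are fabricated, and real mail clients use trainable filters rather than phrase lists; the point is only that the rules belong to the user, not the ISP):

    def user_filter(message, blocked_phrases):
        # Rules supplied by the recipient, applied at the endpoint.
        return any(p in message.lower() for p in blocked_phrases)

    my_rules = ["cheap pills", "act now"]   # the user's own definition
    inbox = ["Meeting at 3pm", "ACT NOW: cheap pills!!!"]

    print([m for m in inbox if not user_filter(m, my_rules)])
    # ['Meeting at 3pm']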
* Inspect content to detect replicate data to cache data so that unicast
audio/video delivery scales
Taking end-to-end traffic streams apart, buffering them to reassemble the entire stream, and then doing a "diff" to determine that the data is the same is also infeasible. Today every kind of media can be personalized: in a web browser you don't see the same page view that everyone else does, because the source personalizes the data for each customer (ad insertion, if nothing else, is done at the source server). This is also true of video media, the darling of "multicast" aficionados who imagine people watching video live in a 1950s three-network model.
Complex media cannot be transparently cached. Any caching that is useful is source-controlled, using app-layer assembly, which Akamai and others support.
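A small Python sketch of why transparent byte-level caching fails once the origin personalizes (the page renderer here is hypothetical): two users fetching the "same" page get byte streams that never match, so a content-keyed cache in the network never gets a hit:

    import hashlib

    def render_page(user, ad):
        # Origin-side personalization: ad insertion plus a greeting.
        return f"<html>Hello {user}! <ad>{ad}</ad> ...shared article...</html>"

    a = render_page("alice", "shoes")
    b = render_page("bob", "cars")

    print(hashlib.sha256(a.encode()).hexdigest()[:12])
    print(hashlib.sha256(b.encode()).hexdigest()[:12])
    # Different digests: only the origin (or a CDN it controls) knows
    # which fragments are shareable across users.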
* Inspect explicit DiffServ labels to properly prioritize traffic
Diffserv labels are not deep packet inspection. They are IP protocol labels, standardized independently of any application. They are on the envelope.

Conclusion here: author doesn't understand diffserv.
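To see how shallow this inspection is, here is a Python sketch reading the DSCP bits straight out of an IPv4 header (the header bytes are fabricated); no payload byte is ever examined:

    # Byte 1 of the IPv4 header is the old TOS byte: 6 bits DSCP, 2 bits ECN.
    header = bytes([0x45, 0xB8]) + bytes(18)   # minimal 20-byte header

    dscp = header[1] >> 2      # 46 == Expedited Forwarding (RFC 3246)
    ecn = header[1] & 0x03
    print(dscp, ecn)           # 46 0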
* Inspect protocol headers to determine implicit prioritization label in the
absence of explicit priority labels
There is no technical term "implicit prioritization label." The idea that one can infer the priority an end user wants from incidental inputs like port numbers is out there in the culture, but there are no studies showing that psychological intent can be inferred by reading protocol headers. This is like reading entrails.
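A Python sketch of the guesswork this bullet implies (the port-to-priority mapping is a folk heuristic, not any standard), with an example of why it misfires:

    GUESS = {5060: "voice (high)", 80: "web (medium)", 6881: "p2p (low)"}

    def guess_priority(dst_port):
        # Nothing in the header states the user's actual intent.
        return GUESS.get(dst_port, "unknown")

    print(guess_priority(5060))  # "voice (high)" - maybe; maybe not
    print(guess_priority(443))   # "unknown": a video call, a backup job,
                                 # and online banking all look identical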
* Inspect content to offer targeted advertising to pay for free wireless
broadband
Since one cannot do this in real time, it depends on an assumption: that a single user sits behind the IP address and therefore can be profiled from the stream of all packets coming from that address. Advertisers (maybe not ISPs?) want to know people or context, not IP addresses. Google is in a good position because a single search query, coupled with a cookie that tracks the particular personal computer being used, provides strong targeting. The DPI approach is costly and less targeted. Financially, a weak proposition.

It is not clear how ad insertion would even happen based on DPI. One would either need to share information with servers (sell it to Google or other ad-based companies) or do forcible insertion by changing expected content, for example by intercepting HTTP GETs via DNS forgery and returning other content.

Google and other app-layer systems already tell their users their policies with respect to this data, and one can choose not to use Google. Not true for ISPs.
* Inspect content to offer targeted advertising to pay for free cloud email
e.g., Gmail
Same as above. App services can do it better, the user knows they do it, and the user chooses.
* Inspect content to offer targeted advertising when user explicitly agrees
to terms and conditions
The user has an easy way to let this happen, as with Gmail: just buy your content services in the form of a server-based application from a vendor who adds that "feature." Technically easy; no DPI required.

DPI is bad when we use it to:

* Inspect content to offer targeted advertising to users without disclosure
or permission from user
The disclosure/permission issue is real. But the technical issues are the same.