NNSquad - Network Neutrality Squad

[ NNSquad ] Google, Net Neutrality, and Fairness

Greetings.  In "Evil or misunderstood? Google and net neutrality"
posted earlier today ( http://bit.ly/c8BTWA [Quoderat] ), David
Megginson speculates that a key issue in the now widely-known (but
little understood) recent "secret" meetings between Google and Verizon
might have been discussions regarding giving particular types of
content differing priorities -- what I'd call Quality of Service 
(QoS) -- rather than negotiations about giving priority to any particular
content providers per se.  David also argues that content-based
priority (e.g., giving video priority over e-mail) may make technical
sense but is still a bad idea.

I have mixed feelings about all this.  Theoretically, given sufficient
bandwidth and well-behaved applications, QoS shouldn't really ever
have to be an issue.  After all, QoS is fundamentally a mechanism to
manage scarcity.  In the presence of abundance, QoS basically should
be unnecessary.

Of course in the real world we have bandwidth limitations,
bottlenecks, and the reality that not all Internet-based apps are
necessarily well-behaved in their data usage -- so the concept of QoS
probably needs to be on the table for discussion at least.

But it also seems clear that QoS (especially relatively
straightforward implementations) has a number of potential 
problems -- some of which David discusses.

There are fundamentally two ways to do QoS: you can analyze the
traffic itself (based on actual content, traffic patterns, or both,
perhaps using DPI -- Deep Packet Inspection), or you can base your QoS
decisions on flags set by the content providers themselves -- or use
some combination of the two.
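
To make the "provider tagging" approach concrete, here's a minimal
sketch -- my own illustration using the standard DiffServ code points,
not anything drawn from the Google/Verizon talks -- of a sender
marking its own traffic so that routers along the path may choose to
prioritize it:

```python
import socket

# Standard DiffServ code points (the upper six bits of the former
# IP TOS byte).  Routers along the path MAY honor these marks -- or
# rewrite or ignore them entirely.
DSCP_EF = 46    # Expedited Forwarding -- e.g., real-time voice/video
DSCP_BE = 0     # Best Effort -- the default, e.g., bulk e-mail

def make_tagged_socket(dscp):
    """Return a UDP socket whose outgoing packets carry the given DSCP mark."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The DSCP value sits in the top six bits of the TOS byte,
    # hence the shift by two.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock

video_sock = make_tagged_socket(DSCP_EF)   # "please expedite me"
mail_sock = make_tagged_socket(DSCP_BE)    # ordinary best-effort
```

Note that nothing in this scheme stops a misbehaving application from
marking its bulk transfers as Expedited Forwarding -- which is exactly
the gaming problem discussed below.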

What makes this all so complicated is that we must assume the presence
of "bad actors" on the Net who will attempt to game any QoS system.

Also, moves toward pervasive, ubiquitous encryption by default -- a
concept of which I'm a strong proponent -- could significantly
interfere with "automatically determined" QoS, putting more reliance
on accurate tagging of data by the content providers.  And even
traffic analysis can be fooled via various techniques.
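
To illustrate why, consider a toy payload classifier -- purely my own
example, not any real DPI product -- that keys on well-known plaintext
protocol markers.  The moment the same bytes are encrypted (here a
simple XOR stands in for any real cipher), the classifier can no
longer tell web traffic from e-mail from anything else:

```python
# Markers a naive classifier might look for at the start of a payload.
PLAINTEXT_MARKERS = {
    b"GET ": "web",                       # HTTP request
    b"EHLO": "mail",                      # SMTP greeting
    b"\x13BitTorrent protocol": "p2p",    # BitTorrent handshake
}

def classify(payload):
    """Guess the application from the first bytes of a packet payload."""
    for marker, label in PLAINTEXT_MARKERS.items():
        if payload.startswith(marker):
            return label
    return "unknown"

def xor_encrypt(payload, key):
    """Stand-in for real encryption: any cipher hides the markers."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

request = b"GET /index.html HTTP/1.1"
classify(request)                             # classified as "web"
classify(xor_encrypt(request, b"secret"))     # now opaque: "unknown"
```

A real DPI engine is vastly more sophisticated, but the underlying
limitation is the same: once payloads are opaque, classification must
fall back on traffic patterns or on the providers' own tags.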

The upshot of all this is -- gosh darn it! -- these issues are
anything but trivial, and anyone who tells you otherwise is either
misinformed or being purposely misleading.

Small wonder then, that trying to discuss these issues publicly, with
all the money and emotions tangled up in the topic, tends to have a
comfort level similar to that of sticking your hand down deep into the
blades of a running blender.

While I don't presume to offer any magic wand solutions to these
dilemmas, issues of prioritization among different types of Internet
content are ignored at our peril.

If we do end up moving toward QoS systems of some sort, the real
challenge will be finding ways to implement such mechanisms that do
not interfere with users' ability to make full use of encryption, that
are as resistant to unfair manipulation as possible, and that do not
create distortions resulting in anticompetitive or other unfair
outcomes.

A tall order.

And that's the truth.

Lauren Weinstein (lauren@vortex.com)
Tel: +1 (818) 225-2800
Co-Founder, PFIR (People For Internet Responsibility): http://www.pfir.org
Co-Founder, NNSquad (Network Neutrality Squad): http://www.nnsquad.org
Founder, GCTIP (Global Coalition for Transparent Internet Performance): 
Founder, PRIVACY Forum: http://www.vortex.com
Member, ACM Committee on Computers and Public Policy
Lauren's Blog: http://lauren.vortex.com
Twitter: https://twitter.com/laurenweinstein
Google Buzz: http://bit.ly/lauren-buzz