NNSquad - Network Neutrality Squad

[ NNSquad ] Re: [IP] Why I'm Skeptical of the FCC's Call for User Broadband Testing


----- Forwarded message from "John S. Quarterman" <jsq@quarterman.org> -----

Date: Thu, 11 Mar 2010 16:59:16 -0500
From: "John S. Quarterman" <jsq@quarterman.org>
Subject: Re: [IP] Why I'm Skeptical of the FCC's Call for User Broadband
	Testing 
To: dave@farber.net
Cc: "John S. Quarterman" <jsq@quarterman.org>, ip <ip@v2.listbox.com>,
	Lauren Weinstein <lauren@vortex.com>

Dave: for IP.

> > From: Lauren Weinstein <lauren@vortex.com>
> > Date: March 11, 2010 3:56:32 PM EST
> > To: dave@farber.net
> > Subject: Why I'm Skeptical of the FCC's Call for User Broadband  
> > Testing

...

> > After inspecting the associated site and testing tools, I must admit
> > that I am extremely skeptical about the overall value of the data
> > being collected by their project, except in the sense of the most
> > gross of statistics.
> >
> > In random tests against my own reasonably well-calibrated tools, the
> > FCC tools showed consistent disparities of 50% to 85%!  Why isn't this
> > surprising?

Because it's not relevant.

The gaps between the speeds that matter here, such as dialup,
iPhone or MiFi speeds, 1.5 Mbps, 3 Mbps, 6 Mbps, 10 Mbps, and
100 Mbps, are so large that a 50-85% disparity on a single test
out of many thousands is noise.

Even more to the point, tests by multiple subscribers to the same
service will give a good idea of what that service is really
providing.  Even if some users test while someone else is using
the same connection, others will not, so the pooled results give
a solid sense of the maximum speed actually being delivered.
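
As a rough sketch of the idea (Python; the field names here are
hypothetical, not the FCC's actual schema), pool the tests per
service and take a high percentile, which approximates the
unloaded maximum:

    # Sketch: estimate what a service really provides from many
    # user tests.  The (isp, tier, mbps) schema is hypothetical.
    from collections import defaultdict

    def estimate_provided_speed(tests, percentile=0.95):
        """tests: iterable of (isp, tier, measured_mbps) tuples.
        Returns {(isp, tier): estimated max Mbps}.  A high
        percentile approximates the unloaded maximum: some users
        test while the line is busy, but enough test while idle.
        """
        by_service = defaultdict(list)
        for isp, tier, mbps in tests:
            by_service[(isp, tier)].append(mbps)
        estimates = {}
        for service, speeds in by_service.items():
            speeds.sort()
            idx = min(int(percentile * len(speeds)), len(speeds) - 1)
            estimates[service] = speeds[idx]
        return estimates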

> No obvious clues are provided to users regarding the underlying server
> testing infrastructure.  As anyone who uses speed tests is aware, the
> location of servers used for these tests will dramatically affect
> results.  The ability of the server infrastructure to control for
> these disparities can be quite limited depending on ISPs' own network
> topologies.

Drama aside, most bottlenecks are in the last-mile connection to
the user; the few percent of difference contributed by the
long-haul infrastructure is irrelevant for this purpose.

> And of course, on-demand, manually-run tests cannot provide any sort
> of reasonable window into the wide variations in performance that
> users commonly experience on different days of the week, times of day,
> and so on.

If you collect enough such tests across a range of users, yes,
they can.
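
A sketch of how that falls out (same caveat: the field names are
hypothetical): bucket the pooled tests by hour of day, and the
peak-hour slowdown shows up across the user base:

    # Sketch: pooled on-demand tests reveal time-of-day variation.
    # The (timestamp, mbps) schema is hypothetical.
    from collections import defaultdict
    from statistics import median

    def speed_by_hour(tests):
        """tests: iterable of (timestamp, measured_mbps) tuples,
        where timestamp is a datetime.  Returns {hour: median
        Mbps}, exposing e.g. the evening peak-hour dip."""
        buckets = defaultdict(list)
        for ts, mbps in tests:
            buckets[ts.hour].append(mbps)
        return {h: median(v) for h, v in sorted(buckets.items())}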

> Users are required to provide their street address information with
> the tests, but there's nothing stopping anyone from entering any
> address that they might wish, suggesting that such data could often be
> untrustworthy compared with (much coarser) already available IP
> address-based location info.

One would assume the FCC knows this and will do some cross-checks.
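
One plausible cross-check (a sketch only; the coordinates would
have to come from a geocoder and an IP-location database, both
assumed here, not anything the FCC has described): flag tests
whose claimed address lands far from the coarse IP-based location:

    # Sketch: flag self-reported addresses that disagree with the
    # coarse IP-based location.  Coordinate sources are assumed.
    from math import radians, sin, cos, asin, sqrt

    def km_between(lat1, lon1, lat2, lon2):
        """Great-circle (haversine) distance in kilometers."""
        lat1, lon1, lat2, lon2 = map(radians,
                                     (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 6371 * 2 * asin(sqrt(a))

    def address_plausible(addr_latlon, ip_latlon, max_km=100):
        """IP geolocation is coarse, so allow a wide radius;
        beyond it, the self-reported address is suspect."""
        return km_between(*addr_latlon, *ip_latlon) <= max_km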

Lauren's objections illustrate the problem with most Internet
metrics: they're all about detailed precision.  That's great if
you're trying to tune individual routers, for example.

For policy, what's needed is a large-scale view that shows much
broader patterns.

As Lauren says:

>   While these tests under this methodology may serve to help categorize
>   users into very broad classes of Internet service tiers,

And that's the point, isn't it?

Especially compared to what the providers claim they're delivering.

-jsq

----- End forwarded message -----