NNSquad - Network Neutrality Squad


[ NNSquad ] Re: [IP] Why I'm Skeptical of the FCC's Call for User Broadband Testing



Jason is Executive Director, Internet Systems, at Comcast.

--Lauren--
NNSquad Moderator

----- Forwarded message from Jason Livingood <jason_livingood@cable.comcast.com> -----

Date: Thu, 11 Mar 2010 17:43:18 -0500
From: Jason Livingood <jason_livingood@cable.comcast.com>
Subject: Re: [IP] Why I'm Skeptical of the FCC's Call for User Broadband
	Testing
To: Dave Farber <dave@farber.net>, ip <ip@v2.listbox.com>,
	lauren@vortex.com

> Dave: Lauren raises some fair points below.  Additional comments inline below
> (I have cut out some of his text so this isn't too long of a message).
> 
> - Jason Livingood
> 
>> From: Lauren Weinstein <lauren@vortex.com>
>> <snip>
>> After inspecting the associated site and testing tools, I must admit
>> that I am extremely skeptical about the overall value of the data
>> being collected by their project, except in the sense of the most
>> gross of statistics.
>> 
>> [JL] I recommend the Commission add to their form a question about what OS is
>> being used on the customer's PC, and whether their LAN connection is wired or
>> wireless.  In many cases today, I observe broadband users testing over WiFi,
>> where things such as distance and interference come into play, in addition to
>> which flavor of WiFi is being used and whether any WiFi security is
>> configured.  There are countless other LAN and PC-related things that
>> dramatically influence speed results (web browser, memory, other apps
>> running, HD space, other computers in use, etc.).
>> 
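As a concrete illustration of the kind of client-environment metadata Livingood
suggests collecting, a minimal Python sketch might look like the following.
The field names and the test-client record are hypothetical, not the actual
FCC, Ookla, or M-Lab software:

    # Illustrative sketch only -- not the actual FCC, Ookla, or M-Lab client.
    # Records the environment factors mentioned above (OS, wired vs. wireless,
    # WiFi flavor) alongside a speed result so they can be controlled for later.
    import platform

    def collect_environment(connection_type, wifi_standard=None):
        """connection_type is 'wired' or 'wireless', self-reported by the user."""
        return {
            "os": platform.system(),          # e.g. 'Windows', 'Darwin', 'Linux'
            "os_version": platform.release(),
            "connection_type": connection_type,
            "wifi_standard": wifi_standard,   # e.g. '802.11n', only if wireless
        }

    # Hypothetical result record with the metadata attached:
    result = {"download_mbps": 12.3, "upload_mbps": 2.1}
    result["environment"] = collect_environment("wireless", "802.11n")

Even this much would let an analyst separate a slow WiFi link from a slow
access line.
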
>> In random tests against my own reasonably well-calibrated tools, the
>> FCC tools showed consistent disparities of 50% to 85%!  Why isn't this
>> surprising?
>> 
>> [JL] I tend to agree with you, and I think this at least partially explains
>> why the comScore results that have been cited by the Commission also show a
>> difference similar to what you observe (there are other reasons).
>> 
>> <snip>
>> The FCC testing regime ( http://bit.ly/9IuQeC [FCC] ) provides for no
>> control related to other activity on users' connections.  How many
>> people will (knowingly or not) run the tests while someone else in the
>> home or business is watching video, downloading files, or otherwise
>> significantly affecting the overall bandwidth behavior?
>> 
>> [JL] Very true!  Those things can obviously greatly impact speed
>> measurements.  
>> 
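One way a test tool could partially address the cross-traffic problem Lauren
raises is to sample the machine's own interface counters just before the test
and warn if the link is already busy. A rough sketch, assuming the third-party
psutil package is installed and using an arbitrary 1 Mbps threshold:

    # Illustrative sketch only: flag obvious background traffic before testing.
    # If the connection is already carrying video or downloads, the measured
    # speed will understate the line's capacity.
    import time
    import psutil  # third-party package, assumed installed

    def background_traffic_mbps(sample_seconds=5):
        before = psutil.net_io_counters()
        time.sleep(sample_seconds)
        after = psutil.net_io_counters()
        bits = 8 * ((after.bytes_recv - before.bytes_recv) +
                    (after.bytes_sent - before.bytes_sent))
        return bits / sample_seconds / 1e6

    if background_traffic_mbps() > 1.0:
        print("Warning: other traffic detected on this machine; "
              "results may not reflect the line's capacity.")

This only catches traffic from the testing machine itself; other devices in
the household remain invisible to it, which is exactly the gap Lauren
describes.
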
>> No obvious clues are provided to users regarding the underlying server
>> testing infrastructure.  As anyone who uses speed tests is aware, the
>> location of servers used for these tests will dramatically affect
>> results.  The ability of the server infrastructure to control for
>> these disparities can be quite limited depending on ISPs' own network
>> topologies.
>> 
>> [JL] It seems essential to understand how the test selects between Ookla and
>> M-Labs, how many servers are behind each test, how those servers are
>> configured, whether they are doing other tasks, and how the tests are
>> configured (number of connections, file sizes used, etc.).  Even if some of
>> those things may be disclosed on Ookla or M-Labs' websites, it seems like
>> something worth specifying in FAQs on the same site as the test itself.
>> Beyond the initial selection decision, the other factors mentioned are major
>> influences on the accuracy of any speed measurement system.
>> 
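To make concrete why the parameters Livingood lists (number of parallel
connections, transfer sizes) matter, here is a bare-bones throughput
measurement sketch; the URL is a placeholder, not an actual FCC, Ookla, or
M-Lab endpoint:

    # Illustrative sketch only: connection count and transfer size both shape
    # the result. Few connections or small files leave TCP in slow start and
    # understate capacity on fast links.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TEST_URL = "http://example.com/testfile.bin"   # placeholder URL

    def fetch(url):
        with urllib.request.urlopen(url) as resp:
            return len(resp.read())

    def measure_mbps(url=TEST_URL, connections=4):
        start = time.monotonic()
        with ThreadPoolExecutor(max_workers=connections) as pool:
            total_bytes = sum(pool.map(fetch, [url] * connections))
        elapsed = time.monotonic() - start
        return total_bytes * 8 / elapsed / 1e6

Two tools that differ only in these parameters can legitimately report quite
different numbers for the same line, which is why documenting them in FAQs on
the test site itself, as suggested above, matters.
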
>> And of course, on-demand, manually-run tests cannot provide any sort
>> of reasonable window into the wide variations in performance that
>> users commonly experience on different days of the week, times of day,
>> and so on.
>> 
>> [JL] Indeed, and such tests have a self-selection bias.  In addition, the
>> tests have no ability to determine whether the speed you are shown is close
>> to your provisioned (marketed) speed.  So there is some question as to what
>> the resulting data will lead you to conclude.  If everyone in a certain ZIP
>> code shows an average of X speed, are we to conclude that is good or bad?  Is
>> it because they all subscribe to a service at Y speed (where Y>X), or is there a
>> difference between what they think they should be getting and what they are
>> getting (and your questions above dig into whether that is due to factors
>> within the user's control or within the ISP's control)?  And how can you
>> control for the fact that many tests are likely to be run at peak hour?
>> 
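Jason's point that the test cannot relate a measurement to the provisioned
tier is easy to see with a little arithmetic; a trivial sketch with made-up
example speeds:

    # Illustrative sketch only: raw Mbps averages by ZIP code cannot be judged
    # good or bad without knowing each subscriber's provisioned (marketed) tier.
    def percent_of_provisioned(measured_mbps, provisioned_mbps):
        return 100.0 * measured_mbps / provisioned_mbps

    # 12 Mbps measured on a 16 Mbps tier is 75% of the marketed speed...
    print(percent_of_provisioned(12.0, 16.0))   # 75.0
    # ...while the same 12 Mbps on a 12 Mbps tier is 100%. Averaged together,
    # both households contribute the same "X speed" to a ZIP-code statistic.
    print(percent_of_provisioned(12.0, 12.0))   # 100.0
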
>> <snip>
>> ISPs may be justifiably concerned that the data collected from these
>> tests by this FCC effort may be unrepresentative in significant ways.
>> 
>> [JL] Indeed.  I suspect we will all learn more next week about what direction
>> this is all heading in.  


----- End forwarded message -----