       When Web Experiments Violate User Trust, We're All Victims

             http://lauren.vortex.com/archive/001078.html


If you ever wonder why politicians around the world appear to have
decided that their political futures are best served by imposing all
manner of free speech restrictions, censorship, and content controls
on Web services, you might be well served by examining the extent to
which Internet users feel they've been mistreated and lied to by some
services -- how their trust in those services has been undermined by
abusive experiments that would not likely be tolerated in other
aspects of our lives.

To be sure, not all experiments are created equal. Most Web service
providers run experiments of one sort or another, and the vast
majority are both justifiable and harmless. Showing some customers a
different version of a user interface, for example, does not risk
real harm to users, and the same could be said for most experiments
aimed at improving site performance and results.
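
For context, a benign interface test of that kind is usually little
more than deterministic bucketing. Here's a minimal sketch in Python;
the variant names and 50/50 split are hypothetical, purely to
illustrate that nothing shown to the user is false -- only the
presentation varies:

    import hashlib

    def assign_variant(user_id: str, experiment: str,
                       split: float = 0.5) -> str:
        """Deterministically assign a user to a UI variant.

        Hashing the user ID together with the experiment name yields
        a stable, roughly uniform value in [0, 1); users below the
        split see the new interface, everyone else sees the old one.
        """
        digest = hashlib.sha256(
            f"{experiment}:{user_id}".encode()).hexdigest()
        fraction = int(digest[:8], 16) / 0xFFFFFFFF
        return "new_ui" if fraction < split else "old_ui"

    print(assign_variant("user-12345", "homepage-layout-test"))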

But when sites outright lie to you about things you care about --
things you expect those sites to provide honestly -- that's a wholly
different story indeed. And that applies whether or not you're paying
fees for the services involved, and whether or not you're ever
informed later about these shenanigans. Nor do "research use of data"
clauses buried in voluminous Terms of Service text constitute
informed consent or some sort of ethical exception.

You'll likely recall the recent furor over revelations about Facebook
experiments -- conducted in conjunction with outside experimenters --
that artificially distorted the feed streams of selected users in an
effort to affect their emotions, e.g., showing them more negative
items than normal to see if they would become depressed.

When news of this experiment belatedly became known, there was
widespread and much deserved criticism. Facebook and the
experimenters issued some half-hearted "sort of" apologies, mostly
suggesting that anyone who was concerned just "didn't understand" the
point of the experiment. You know the philosophy: "Users are just
stupid losers!" ...

Now comes word that online dating site OkCupid has been engaging in
its own campaign of lying to users in the guise of experiments.

In OkCupid's case, this revelation comes not in the form of an apology
at all, but rather in a snarky, fetid posting by one of their
principals, which also includes a pitch urging readers to purchase the
author's book.

OkCupid apparently performed a range of experiments on users -- some
of the harmless variety. But one in particular fell squarely into the
Big Lie septic tank, involving lying to selected users by claiming
that very low compatibility scores were actually extremely high
scores. Then OkCupid sat back and gleefully watched the fun like
teenagers peering through a keyhole into a bedroom.

Now of course, OkCupid had their "data-based" excuse for this. By
their own reckoning, their algorithm was basically so inept in the
first place that the only way they could calibrate it was by
providing some users enormously inflated results to see how they'd
behave, then studying that data against control groups who got honest
results from the algorithm.
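
To make the distinction concrete, here's a rough Python sketch of the
treatment-versus-control design being described. The function name,
score scale, and inflation amount are my own illustrative
assumptions, not OkCupid's actual code:

    import random

    def displayed_score(true_score: float, group: str) -> float:
        """Return the compatibility score shown to a user.

        The control group sees the algorithm's honest output; the
        treatment group is shown a fabricated high score no matter
        what the algorithm computed -- which is exactly where the
        experiment crosses from measurement into lying.
        """
        if group == "control":
            return true_score
        # Treatment: present a ~30% true match as a ~90% match.
        return min(true_score + 0.6, 0.99)

    group = random.choice(["control", "treatment"])
    print(group, displayed_score(0.30, group))

The harmless UI test sketched earlier varies only how honest data is
presented; this one falsifies the data itself.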

Sorry, boy wonders, but that story would get you kicked out of Ethics
101 with a tattoo on your forehead that reads "Never let me near a
computer again, please!"

Really, this is pretty simple stuff. It doesn't take a course in
comparative ethics to figure out when an experiment is harmless and
when it's abusive.

Many apologists for these abusive antics are well practiced in the art
of conflation -- that is, trying to confuse the issue by making
invalid comparisons.

So, you'll get the "everybody does experiments" line -- which is true
enough, but as noted above, the vast majority of experiments are
harmless and do not involve lying to your users.

Or we'll hear "this is the same thing advertisers try to do --
they're always playing with our emotions." Certainly advertisers do
their utmost to influence us, but there's a big difference between
that and the cases under discussion here. We don't usually have a
pre-existing trust relationship with those advertisers of the sort we
have with Web services that we use every day -- services we expect to
provide us with honest results, honest answers, and honest data to
the best of their ability.

And naturally there's also the refrain that "these are very small
differences that are often hard to even measure, and aren't important
anyway, so what's the big deal?"

But from an ethical standpoint the magnitude of effects is essentially
irrelevant. The issue is your willingness to lie to your users and
purposely distort data in the first place -- when your users expect
you to provide the most accurate data that you can.

The saddest part though is how this all poisons the well of trust
generally, and causes users to wonder when they're next being lied to
or manipulated by purposely skewed or altered data.

Loss of trust in this way can have lethal consequences. Already,
we've seen how a relatively small number of research ethics lapses in
the medical community have triggered knee-jerk legislative efforts to
restrict legitimate research access to genetic and disease data --
laws that could cost many lives as critical research is stalled or
otherwise stymied. And underlying this (much as in the case of the
anti-Internet legislation noted earlier) is politicians' willingness
to play on people's fears and confusion -- and their loss of trust --
in ways that ultimately may be very damaging to society at large.

Trust is a fundamental aspect of our lives, both on the Net and off.
Once lost, it may be impossible to ever restore to former levels. The
damage is often permanent, and can ultimately be many orders of
magnitude more devastating than the events that triggered the trust
crisis in the first place.

Perhaps something to remember, the next time you're considering lying
to your users in the name of experimentation.

Trust me on this one.

--Lauren--
Lauren Weinstein (lauren@vortex.com): http://www.vortex.com/lauren 
Founder:
 - Network Neutrality Squad: http://www.nnsquad.org 
 - PRIVACY Forum: http://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility: http://www.pfir.org/pfir-info
Member: ACM Committee on Computers and Public Policy
I am a consultant to Google -- I speak only for myself, not for them.
Lauren's Blog: http://lauren.vortex.com
Google+: http://google.com/+LaurenWeinstein 
Twitter: http://twitter.com/laurenweinstein
Tel: +1 (818) 225-2800 / Skype: vortex.com