NNSquad - Network Neutrality Squad


[ NNSquad ] Re: Cato / Public Citizen: Tim Wu is wrong claiming searchengines aren't protected by the First Amendment (Lauren Weinstein)

Lillie - I think I agree with most of your sentiments.


I'm not a lawyer at all.  I take pride in being a Professional Engineer.  (Capitalized because there is a great tradition in engineering *as a profession* in which ethics and obligation to society play a major, if not the *dominant*, role.)  If you study the history of the engineering profession, especially with respect to the role of engineer as expert, you find that engineering values derive from a great humanist tradition of taking responsibility.


Engineering professional societies are unanimous in stating that engineers are *accountable* for their work in society.  When we design and build things, we ought to be liable.  When I was at Lotus, I took the accuracy of the calculations our spreadsheets did for users incredibly seriously, and I felt personally responsible for potentially disastrous errors that might arise if our software calculated a wrong answer that led to human harm (as it might in a medical or engineering lab).  The company may have used a shrink-wrap license that disclaimed liability, but that did not change my responsibility and accountability.  It's why I personally lobbied (with others) for Intel to spend tens of millions of dollars *recalling the Pentium chip* for its floating point flaw, and it's why Andy Grove agreed with me - once I/we showed him that it really was not a one-in-a-gazillion issue, but one that could be demonstrated in a formula typed into a spreadsheet, and that could be significant in real calculations.  As Vice President and Chief Scientist, I could simply have issued a press statement supporting our partner Intel when Intel's PR firm asked us to - nothing would have happened to me or to Lotus.  I had no economic incentive - so by the Law and Economics view of rational behavior, I was crazy.
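A sketch of how small the demonstration was: the classic, widely cited FDIV test is a one-line expression using the constant pair 4195835 / 3145727 (these constants come from public accounts of the bug, not necessarily the exact formula shown to Grove).

```python
# Classic Pentium FDIV check: on a correct FPU this residue is
# (essentially) zero; on a flawed Pentium the division came out wrong
# and the same expression famously returned about 256.
x = 4195835.0
y = 3145727.0
residue = x - (x / y) * y
print(residue)  # ~0 on a correct FPU
```

Typed into any spreadsheet cell, the equivalent formula made the flaw visible to anyone, which is exactly what made the "1 in a gazillion" framing untenable.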


Of late, I would argue that engineers and their professional values are a lot stronger than lawyers and theirs.  Most lawyers seem to want to find any way possible to shirk responsibility, and their clients pay big bucks for avoiding any responsibility to their fellow humans.


So we gain little from "The Law as a Profession" and lose a lot from the lack of an actual Legal Profession among the lawyers that make policy and legislation.


I have never met a patent attorney who is part of the patent bar who thinks that a patent is primarily about advancing the state of the art (its stated purpose).  They spend a lot of time advising how little needs to be "taught" about the patented invention in the filing, how to build a "patent wall" to create a business monopoly, and so on.  And on the defense side, I've never met a patent attorney who would advise a client that their own creative invention can stand the test of non-infringement, or that a patent was clearly invalid based on prior art and obviousness.  So what patents seem to be is nothing but a "game" parasitically dependent on engineers.


The largest and highest-paid bunch of lawyers are in DC.  Do they add value to the engineering profession, or to the engineering of "good things" in society?  I doubt it.  The fact is that they don't listen to engineers.  They assume that "venture capitalists" and CEOs are expressing the best views of engineering professionals - yet those two professions have only one skill: financial expertise.


Regarding such things as "privacy" - many of my fellow engineers have spent a lot of effort building systems that try to ensure that "the right to be left alone" is honored.  We do it in companies as well as in "free software collectives".  I served (as a callow grad student in the early 1970's) as an advisor to the first group of policy and legislative people to address computer data privacy rules, in Massachusetts.  That law and its principles, based on sound engineering understanding of computers at the time, eventually became the model for the US framework on Computers, Privacy, and the Rights of Citizens.  And it eventually informed the European privacy rules.  From that work we got the idea of PII.  We engineers explained that "information about" people should not be treated according to "property law", because "information about" is not created by individuals.  Yet there is an amazing idea from lawyers (mostly the Law and Economics cult) that somehow "you should own your own data", arguing that when you "enter data into the system you have a choice to sell or not to sell what is done with it".


IMHO, that idea is *Legal Malpractice* at the level of the Legal Profession.  It's a false premise leading to a simplistic conclusion that is entirely bad for society.  Information about me is created by people other than me.  Privacy is not ultimately about controlling the physical representation of bits in memory and limiting their transmission.  An engineer who has studied information theory knows the difference between a "bit" as a measure of information and a "bit" of physical storage which is part of a "record".  A lawyer, apparently, does not have to understand that difference, and worse, can appeal to the notion of "bit" (which is a unit of measure) to reach bizarre legal conclusions by conflating "records" with "information".  Yet a copyright lawyer knows the difference between idea and *expression*.  They just don't apply that to "information" vs. "records".
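The record/information distinction can be made concrete.  A minimal Python sketch (illustrative only - the function name is mine) measuring the gap between bits of storage and Shannon bits of information:

```python
import math
from collections import Counter

def shannon_entropy_bits(data: bytes) -> float:
    """Total information content in bits: the byte-frequency Shannon
    entropy of the data, summed over its length."""
    counts = Counter(data)
    n = len(data)
    # Per-symbol entropy: sum of p * log2(1/p) over observed byte values.
    per_symbol = sum((c / n) * math.log2(n / c) for c in counts.values())
    return per_symbol * n

record = b"AAAAAAAAAAAAAAAA"         # 16 bytes = 128 "bits" of physical storage
print(len(record) * 8)               # 128 storage bits
print(shannon_entropy_bits(record))  # 0.0 information bits: fully predictable

varied = b"AB" * 8                   # the same 128 storage bits...
print(shannon_entropy_bits(varied))  # ...but 16.0 information bits
```

Two records occupying identical storage can carry wildly different amounts of information - which is exactly why legal reasoning that treats the stored record as if it *were* the information goes wrong.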


I value the US First Amendment (which discusses free speech, free *assembly*, and free belief).   But that amendment is not intended to affect the keeping of records, the freedom of exchange of ideas, etc. *at least to a professional engineer*.


Let's engineer a good society.   Let's not let Google twist our freedoms into protections against liability.




-----Original Message-----
From: "Lillie Coney" <coney@epic.org>
Sent: Sunday, June 24, 2012 10:07am
To: dpreed@reed.com
Cc: "NNSquad - Network Neutrality Squad" <nnsquad@nnsquad.org>
Subject: Re: [ NNSquad ] Re: Cato / Public Citizen: Tim Wu is wrong claiming searchengines aren't protected by the First Amendment (Lauren Weinstein)

My point is that it does not matter to the consumer
if it is First Amendment speech, as long as the speaker
is held accountable for their speech.  My second
point is that the issue is very serious if Internet
Service Providers attempt to have the same rights
as individuals, when they obviously do not share
anything related to the general user's experience.
You are correct that the law is still in flux with the
users not in the game at all.  Some courts are starting
to see these cases and must decide who owns the
input that may be converted by service providers into
more profitable unrelated services, e.g. pictures loaded
onto the profiles of Facebook users are converted into
facial recognition applications.  Were the images owned
by the users or by the company that provided the platform
that allowed the images to be shared?  The company's
original terms of service claimed no such right, but it
was later changed to appropriate the images and other
content.  User reaction posed such a serious problem that
an online process to allow consumer input on the
drafting of the terms of service - which, I would add,
was controlled by the company - resulted in a new terms
of service agreement.
The power of the service provider is in the network
and applications that make it possible for consumers
to use computing technology in innovative ways.  The
question is: is this legal?  If we move the discussion to
physical space, would the same power to collect, retain,
and use records of consumer activity apply to people
who enter a store, restaurant, or hotel?  It is just as
intimate and likely very sensitive information.
Is the consumer expectation the same in physical and
virtual space?  The reasonable person test is the key
to what the answer will be in the long term.
Shrink-wrap licensing agreements support the idea that
consumers take on the complete risk of downloading
and using software, while the developer, or those who
assemble the individual coding contributions of many
programmers, remains free of the consequences of errors.
There is a point reached in the adoption of each new
technology where there is no expectation that the inventor
must protect users: automobiles, telephones, airplanes,
and electrical appliances were held to no such societal
requirement until they reached a certain saturation point
where their malfunction had greater consequences for society.
Could computing code and computerized systems
be designed and built better?  Yes.  Whether computing
systems will be built better than they are is the question.
At the beginning of the industry there were no rules except meeting
the new release deadline--the expectation was that any problems
would be fixed with patches--thus "Patch Tuesday" was born.  Every
start-up that wanted to make it followed the same process--knowing
full well that their software had flaws--holes that, if exploited,
or if the wrong set of circumstances occurred, would lead to failures.
Thus the deployment of shrink-wrap licenses.
We know that there are better ways to build complex computing
systems--e.g. larger commercial aircraft are complex computing
systems.  The only way these systems will get better is regulation
that requires that they be better.  Microsoft has worked very hard to
improve its software at great cost, and many of the issues
around its early product release practices were corrected.  Later
companies like Google have had a few known problems, but a
lack of obligation to disclose issues means there may be more.
The Internet is the ultimate mixing bowl for new software and
applications, and the consequences for computing devices
are mixed.  In secure environments there is zero tolerance for
applications (apps) and small computing devices (smartphones
or thumb drives) because they introduce risks.
We should be very serious about how software, firmware, and
hardware are constructed and hold companies accountable for
the functioning of their products.  Further, the content of users
should have the same protections as personal property.  The
illusion of privacy and ownership should not be fostered by
service providers if they have no intention of honoring it, because
users would not provide information they do not intend to
relinquish.  But therein rests the problem--if users did not
believe they were in a private and controlled space for their
use, they would not act in certain ways, and that behavior is of
value to micro-targeting for marketing purposes.  The value of
gathering millions of users and their personal lives is not
the system that attracts them, but the computing system's
ability to collect user data without their knowledge, which
makes the online environment the ultimate human sticky trap.
There are a lot of issues that the courts or policy makers
must decide that will make the resolution of these issues
very interesting.
On Jun 23, 2012, at 10:59 PM, dpreed@reed.com wrote:



-----Original Message-----
From: "Lillie Coney" <coney@epic.org>
Sent: Saturday, June 23, 2012 9:58pm
To: dpreed@reed.com
Cc: "NNSquad - Network Neutrality Squad" <nnsquad@nnsquad.org>
Subject: Re: [ NNSquad ] Re: Cato / Public Citizen: Tim Wu is wrong claiming searchengines aren't protected by the First Amendment (Lauren Weinstein)

There is speech that is not protected--if speech causes
reputational or physical injury to another, there are
legal consequences.

How many courts have ruled that computer code is
a form of speech?  Computing relies heavily upon
mathematics.  Mathematics is viewed as the only
universal form of speech.  We know that if construction
and civil engineers who rely on mathematics are wrong,
they are held accountable for their errors.

Search engine searches should be viewed in the same
light as searching for a book in a library.  And they should
not be owned by the search engine provider, but by the
searcher.  If they are not, then this vital information resource
for self and world exploration will not be fully realized.  The
chilling effect of search engine searches being treated as
public information is obvious--seeking information regarding
very private matters--medical conditions, employment (while
still employed), research on a wide range of topics from simple
curiosity to professional or educational interest--would be
under a spotlight.

I understand why Internet search engine providers may
want to argue that the searches belong to them--they are valuable
commodities in a micro-targeting-driven online environment.  These
search requests are worth billions.  But the companies would also not
want to be responsible for searches that are deemed illegal
in some countries--certain topics are criminal offenses.

There are no real firm numbers on how much profit there is in
selling data that includes all forms of information on users/consumers/
citizens, but the estimates are in the billions.  If users owned
the information being sold, that would present serious
difficulty for the data-broker industry.

Computer code is copyright protected; however, many
developers are moving to patents for additional protection
of their work.


On Jun 23, 2012, at 11:38 AM, dpreed@reed.com wrote:

> I have a simple question, if Cato and Public Citizen claim that
> algorithms written by humans are protected by the first amendment
> because they are authored by humans ...
> If the algorithm discriminates against the conversations of (say)
> black people, is it protected speech, or a violation of civil rights?
> The argument by Cato/Public Citizen presumes that the search results
> are spoken by the Google company. Yet Google claims that it is not
> prosecutable for libel under the laws of the UK? Yet Google claims a
> purely human *authorship*.
> This argument by Cato/Public Citizen includes a significant element of
> sophistry - a rhetorical equating of human voluntary speech to an
> automatic process making a decision to synthesize a result in a
> simplistic scenario - yet in contrast Tim Wu discusses the precedent
> setting nature of giving automatic processes programmed by humans a
> *First Amendment* right!
> Sophistry is meant to confuse and conflate. I don't know why Cato and
> Public Citizen want to confuse and conflate - perhaps merely to retain
> alignment with power?
> [ David, there are a number of aspects to the issues you've invoked.
> The UK view of libel, which unlike the US system does not permit
> a defense of truth, is one factor. Also, it appears to me that
> ultimately you are conflating to some degree the issues of search
> results per se vs. the contents those results refer to.
> But a more basic question is why you (apparently) don't feel that
> search results should have the same level of first amendment
> protection as, say, newspapers, magazines, and other media, which
> have traditionally had such protection, and also express opinions,
> make "best of" recommendations, and so on.
> If your concern is specifically the use of automated algorithms,
> I would suggest it is misplaced. The algorithms are merely the
> embodiment of the values and opinions of their human creators,
> and I see no reason why those opinions -- as exercised through
> algorithms -- should not have the same level of protections as
> any other opinions.
> The alternative, to suggest that opinions as expressed through
> algorithms should not be subject to first amendment protections,
> would seem onerous indeed.
> -- Lauren Weinstein
> NNSquad Moderator ]
> _______________________________________________
> nnsquad mailing list
> http://lists.nnsquad.org/mailman/listinfo/nnsquad
