No consent needed for ISPs to scan email and create ad-targeting profiles of non-customers
Most times, judges get it right, or at least right enough. Sometimes, judges get it wrong. When that happens, you can only hope that the next time the issue is contested in court, that adjudicator shows better judgment.
The most recent example of a ruling that flies in the face of logic and fairness is one issued by U.S. District Court Judge Lucy H. Koh of the Northern California District. As Ars Technica’s David Kravets writes in a post dated today, Judge Koh recently approved the proposed settlement in the class-action suit In re Yahoo Mail Litigants (13-cv-04980-LHK, N.D. Cal. Jun 08, 2015; pdf). The decision awards the lawyers representing the plaintiffs $4 million, while the four named plaintiffs receive $5,000 each. The other class members get nothing.
They don’t even get the satisfaction of knowing that the contents of the email they send to and receive from people who use Yahoo’s mail services will no longer be scanned, a practice that the plaintiffs claim violates the California Invasion of Privacy Act (CIPA). Judge Koh’s decision requires only that Yahoo scan the email while it is at rest on Yahoo’s servers rather than while the messages are in transit. According to Judge Koh, this restriction allows the practice to comply with CIPA.
The upshot is that Yahoo can still scan the contents of messages that non-Yahoo customers send to and receive from Yahoo Mail addresses, but it has to wait until the messages arrive at Yahoo’s mail servers. This delays the process and costs Yahoo some money to implement, but it does nothing to protect the privacy of non-Yahoo customers who never granted the company permission to scan their email, whether in transit or in situ.
Next up on the email-scanning docket: Google
As Ars Technica’s Kravets points out, the ruling in In re Yahoo Mail Litigants may affect another case pending in Judge Koh’s court: Matera v. Google Inc., (15-cv-04062-LHK, N.D. Cal. Aug 12, 2016; pdf). pdpEcho’s Gabriela Zanfir provides extensive background and analysis of the case in an August 20, 2016, article. Zanfir was reacting to Koh’s denial of Google’s motion to dismiss based on the merits of the plaintiff’s case.
An earlier case making similar claims against Google’s scanning of email sent by non-Google customers to Gmail addressees was dismissed because the court ruled the plaintiffs did not establish a “class.” In Matera, the plaintiffs allege that Google scans email for two purposes: to target ads at the messages’ senders or recipients, and to create user profiles to “advance Google’s profit interests,” according to the plaintiffs’ filing.
Issue number one: Is scanning the content of emails for the purpose of targeting ads at the sender and receiver “ordinary course of business,” as Google claims, and thus an exception to the Wiretap Act (18 U.S. Code § 2511)? The court ruled that the exceptions apply only to activities that are directly related to the provision of the underlying service, which in this case is email. Thus scanning the content of the email is not directly related to the service’s primary activity of delivering email and is not excepted from the Wiretap Act.
Google claims that targeted advertising is required to generate the revenue that pays for the service, thus it is a necessary business practice. The company also includes targeted advertising in the category of “routine and legitimate commercial behavior.” Zanfir points out that the court found otherwise and further ruled that “it is untenable for electronic communication service providers to ‘self-define’ the scope of their exemption from Wiretap Act liability.”
Also, the court pointed out that Google doesn’t scan email sent via Google Apps for Education because doing so would violate prohibitions against creating user profiles of underage students for purposes that include serving them targeted ads. (In a May 5, 2015, post I described how Google was creating profiles of students using GAFE in violation of the Family Educational Rights and Privacy Act.)
Issue number two: Does California’s Invasion of Privacy Act apply to email? Google claims the act doesn’t apply to new technologies in general. The court disagrees, finding that the California Supreme Court has ruled that when faced with two possible interpretations of CIPA, it applies the interpretation that “provides the greatest level of privacy protection.”
More broadly, in Apple Inc. v. Superior Court, 56 Cal. 4th 128 at 137 (2013), the court ruled the following:
“Fidelity to legislative intent does not ‘make it impossible to apply a legal text to technologies that did not exist when the text was created’…. Drafters of every era know that technological advances will proceed apace and that the rules they create will one day apply to all sorts of circumstances they could not possibly envision.”
Section 630 of the California Penal Code makes the legislature’s intention crystal clear:
“The Legislature hereby declares that advances in science and technology have led to the development of new devices and techniques for the purpose of eavesdropping upon private communications and that the invasion of privacy resulting from the continual and increasing use of such devices and techniques has created a serious threat to the free exercise of personal liberties and cannot be tolerated in a free and civilized society.”
Section 631 of the Code makes a party liable if the person “reads, or attempts to read or to learn the contents or meaning of any message, report or communication while the same is in transit or passing over any wire, line or cable, or is being sent from or received at any place within this state.” In Matera, the court cites two other cases in its district that applied Section 631 to “electronic communications similar to email”: In re Facebook Internet Tracking Litigants, 140 F. Supp. 3d at 936 (2015); and Campbell v. Facebook Inc., 77 F. Supp. 3d at 836 (2014).
Considering that Zanfir previously worked for the European Data Protection Supervisor in Brussels, Belgium, it’s understandable that she would take a decidedly European Union perspective on the practice of scanning the contents of email and creating user profiles based on the scan results. For her and other Europeans involved in privacy protection, the issue is whether scanning email and profiling users for marketing purposes infringe on our fundamental rights to privacy and to protection of our personal information.
For an increasing number of courts and regulators, the answer is a resounding “yes.”
Six slick click picks (say it three times real fast)
There’s so much going on right now, it won’t all fit in a single Weekly. If you’ve got the time, check out these links.
The private surveillance industry is going gangbusters, but what’s not so clear is what’s happening with all that surveillance video these for-profit companies are collecting. As TechCrunch’s Devin Coldewey writes in an August 29, 2016, article, a company called Persistent Surveillance worked with the Baltimore Police Department to conduct aerial surveillance on the entire city using high-resolution cameras.
Because Persistent Surveillance is a private company, there are questions about transparency, disclosure, and the availability of the resulting video to defendants in criminal cases. That’s not to mention Stingray interception of cell phone calls, facial-recognition databases, encryption back doors, and other surveillance being done without our knowledge or consent. As Coldewey writes, the issue isn’t the surveillance itself, which many people would consent to if asked. The true issue is that “[w]e are being deceived systematically and deliberately.”
Artificial intelligence systems are destined to perpetuate existing human biases. That’s the conclusion of Princeton University researchers Arvind Narayanan, Aylin Caliskan-Islam and Joanna Bryson, who recently released a draft paper titled Semantics derived automatically from language corpora necessarily contain human biases.
The researchers used a technique called word embeddings to map similar words near each other in a “300-dimensional vector space.” The researchers then looked for human biases in the word relationships by using the Implicit Association Test, which classifies words as pleasant or unpleasant, names by ethnicity, and other subjective measures. An egregious example of the language bias is the association of specific professions with one gender or the other.
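To make the idea concrete, here is a minimal sketch of how such an association can be measured: words live as vectors, and a word’s “gender lean” is the difference between its cosine similarity to a male-associated word and to a female-associated word. The three-dimensional toy vectors below are invented for illustration and stand in for the researchers’ 300-dimensional embeddings; they are not real data.

```python
import math

# Toy 3-dimensional "embeddings" standing in for real 300-dimensional
# word vectors. The numbers are illustrative only.
embeddings = {
    "engineer": (0.9, 0.1, 0.2),
    "nurse":    (0.1, 0.9, 0.3),
    "he":       (0.8, 0.2, 0.1),
    "she":      (0.2, 0.8, 0.2),
}

def cosine(u, v):
    """Cosine similarity: how closely two vectors point the same way (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def gender_association(word):
    """Positive -> the word sits closer to 'he'; negative -> closer to 'she'."""
    w = embeddings[word]
    return cosine(w, embeddings["he"]) - cosine(w, embeddings["she"])

print(gender_association("engineer"))  # positive: skews male in this toy data
print(gender_association("nurse"))     # negative: skews female in this toy data
```

In the toy data, “engineer” comes out closer to “he” and “nurse” closer to “she” — a deliberately planted bias. The researchers’ finding is that embeddings trained on real-world text exhibit the same kinds of skew without anyone planting it, because the training text itself carries the bias.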
The researchers conclude that the language bias is so pervasive, an AI system will require a separate unbiasing system to neutralize the effect. In fact, such a system could be modeled on the human ability to learn not to act on implicit biases. A much more difficult task is removing the biases from humans altogether. These days, that seems near impossible, but never say never.
In the April 26, 2016, Weekly I wrote about the FCC’s new restrictions on ISP use of their customers’ private data, noting that Facebook, Google, and other big web services got an undeserved pass. Harvard Law School’s Laurence Tribe took the FCC to task in a speech he made to the Media Institute on July 21, 2016 (pdf). The Hill’s David Blato reports on the speech in an August 24, 2016, post on the Congress Blog.
According to Tribe, the FCC’s patchwork of privacy regulations is an affront to not one, not two, but three fundamental Constitutional rights: First Amendment freedom of speech, Fourth Amendment freedom from unwarranted searches, and Fifth Amendment equal protection guarantees. Other than that, no problem.
Tribe stated that the FCC proposal is “a nakedly anti-consumer measure, rather than a pro-privacy measure, and it can’t survive First Amendment scrutiny.” The professor is definitely not sold on the value of opt-ins, saying they’re “not worth the cost”:
“Doing so for the small percentage of likely affirmative responses will simply be too expensive to be worthwhile. The result will be artificial inefficiencies introduced into the flow of truthful and valuable information, and thus an unnecessary blockage to the free flow of that information. And that’s exactly the result the FCC rule I’ve been discussing would inevitably yield.”
The consensus of the experts on the tech side and the legal side is that the FCC should go back to the drawing board and, this time, put the interests of consumers first. Oh, and issuing rules that don’t violate the Constitution would be nice, too.
Last week’s Weekly described efforts by the creator of the web, Sir Tim Berners-Lee, and others to return the network to its decentralized roots. In an August 28, 2016, article, the Guardian’s John Naughton writes that by praising Sir Tim on the 25th anniversary of the Web last week, Facebook founder Mark Zuckerberg raised the bar on hypocrisy to stratospheric levels. In his congratulatory message, Zuckerberg thanked Berners-Lee “and other Internet pioneers for making the world more open and connected.”
So the guy who built the biggest walls in the history of the “open” web congratulates the guy who intended the web to be wall-less. Chutzpah.
By the way, Sir Tim has no idea why August 23, 1991, is celebrated as the web’s birthday. He believes the World Wide Web dates back to a proposal he made to his employer, CERN, in 1989. Personally, I think the web is stuck in the unruly-teenager stage.
I don’t believe in much, but what I do firmly and solemnly avow is the rule of law. That’s why what’s happening right now in the Philippines is so distressing to me. An essay in the August 13, 2016, issue of The Economist highlights the ultimate futility of such extrajudicial activities: “[I]t will lead only to more misery.”
“Mr Duterte would have the world believe that the Philippines’ corrupt and ineffective police have suddenly become omniscient—able to tell innocence from guilt and decide who may live and who should die…. [E]xtrajudicial violence resolves nothing and makes everything worse. Innocent people will be killed…. The rule of law will erode. Investors, who have made the Philippines one of globalisation’s winners in recent years, will flee. The only winners will be the still-lurking insurgents. Mr Duterte’s ill-conceived war on drugs will make the Philippines poorer and more violent.”
I would remind Mr. Duterte that a fish stinks from the head. If you really want to address corruption and lawlessness, start at the top.
Finally, the Caselaw Access Project is an effort by the Harvard Law School Library to make more state and federal court decisions available for free online. The project began in 2013 as “Free the Law.” I like the original name better, but a rose by any other name, and so on. Adam Ziegler writes about the effort in an August 8, 2016, post on the Harvard Law School Library blog.