People tracking is about to get even creepier
How many apps do you have on your mobile phone? Dozens? How many of those apps were free? Nearly all of them?
How do you think the app developers are making so much money? They collect and sell personal information about you. Not just your name, email and IP addresses, and other obvious categories of private data. Some of them know and disclose how much money you make, whether you have children, and your political leanings. That’s just the tip of the data-sharing iceberg.
In a February 29, 2016, article on the Sophos Naked Security site, Lisa Vaas describes the results of a study (pdf) conducted by researchers at the Georgia Institute of Technology School of Computer Science. The researchers conclude that mobile apps regularly "leak" private information on users' phones that is supposed to be off limits to the apps.
Mobile ad networks such as Google's AdMob deliver ads to phones based on the owner's interests and demographic profile. The apps hosting those ads can pick up this targeting information and add it to the dossiers they build on each user. From it, app vendors can guess your gender with 75 percent accuracy, your parental status with 66 percent accuracy, your age group with 54 percent accuracy, and your income, political affiliation, and marital status with better-than-random accuracy.
At present, there’s no way for Google or other ad networks to prevent the targeted-ad data from being shared with the apps in which the ads appear. The U.S. Federal Trade Commission has been investigating privacy violations by mobile app vendors; the apps are subject to the same regulations as web sites, according to Omari Sealy of Moore & Van Allen PLLC in a February 29, 2016, article on JD Supra Business Advisor.
The FTC has found that mobile app vendors collect personal data without the required knowledge or consent of users; they fail to protect the private information adequately; and they misrepresent or fail to disclose their collection and storage practices. Sealy notes that when Facebook was sued for unauthorized collection of private data, the suit was thrown out in 2015 because the court found the plaintiffs could not tie the data collection to a “realistic economic harm or loss.”
FCC fines Verizon for playing not nice with supercookies
Unique Identifier Headers – a.k.a. “supercookies” – allow companies to track us even when we delete their tracking cookies. In a March 7, 2016, article, TechCrunch’s Anthony Ha explains how Verizon Wireless used supercookies for almost two years, from December 2012 to October 2014, without disclosing the practice to its customers. The company’s settlement with the U.S. Federal Communications Commission imposes a $1.35 million fine, which Ha points out is tantamount to a slap on the wrist when you consider Verizon reported $34.3 billion in revenue in the most recent quarter.
The settlement allows Verizon to continue to use supercookies, but it must get customers’ explicit permission to share the information with third parties, and it must allow customers to either opt in or opt out before the company can use the private information to target its own ads.
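Supercookies of this sort work by injecting a persistent identifier header into a customer's unencrypted HTTP requests as they pass through the carrier's network; the header name reported for Verizon's identifier was X-UIDH. As a minimal, hypothetical sketch (the header list and the `find_supercookies` helper are illustrative, not from any real tool), here is how a debugging proxy might flag such an injected identifier in a request's headers:

```python
# Hypothetical sketch: scan outgoing HTTP headers for known
# carrier-injected tracking identifiers ("supercookies").
# "X-UIDH" was the header name reported for Verizon's identifier.

KNOWN_SUPERCOOKIE_HEADERS = {"x-uidh"}

def find_supercookies(headers):
    """Return the names of any known tracking headers present.

    `headers` is a plain dict of header-name -> value, as a
    debugging proxy might expose it.
    """
    return sorted(
        name for name in headers
        if name.lower() in KNOWN_SUPERCOOKIE_HEADERS
    )

# Example: headers as they might look after carrier injection
# (the identifier value below is a made-up placeholder).
request_headers = {
    "Host": "example.com",
    "User-Agent": "Mozilla/5.0",
    "X-UIDH": "OTgxNTk2NDk0example",  # injected unique identifier
}
print(find_supercookies(request_headers))  # -> ['X-UIDH']
```

Note that because the injection happens in transit, it only works on unencrypted HTTP traffic; requests sent over HTTPS are opaque to the carrier and cannot be tagged this way.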
The potential for misuse of private data collected by ad networks is evident in the FTC’s action against SiteSearch and three other defendants who “knowingly” provided scammers with sensitive personal information about hundreds of thousands of individuals. Kathryn Rattigan of Robinson+Cole Data Privacy + Security Insider reports on the case in a February 29, 2016, article on JD Supra Business Advisor.
The defendants collected information from online payday loan applications, including Social Security numbers, employer names, bank account numbers, and bank routing numbers. The settlement calls for SiteSearch to pay $4.1 million and to stop selling or transferring personal information to third parties. According to the FTC, the defendants sold information from 95 percent of the applications they received for $0.50 apiece to third parties who had no “legitimate purpose” for having the information.
Some of those third-party purchasers used the information to steal money from the applicants' bank accounts, in some cases millions of dollars in total. So a so-called legitimate business sells the name, street address, phone number, employer, and bank-account information of hundreds of thousands of individuals to third parties that have no need for it, and it's okay because the victims can't demonstrate "realistic economic harm or loss" as a result of the practice.
How many of us are likely to stand on a street corner and offer our bank account numbers, SSNs, and other private financial information to anyone and everyone for fifty cents a pop? I don’t see too many hands.
Accounting for what personal info is being collected
The age of total tracking is just dawning. Soon private data will be collected from our cars, our homes, our workplaces – every place we go, revealing everything we do. The collectors are salivating at the prospect. And they're not satisfied with analyzing data alone – they want a deeper view into our psyches. In a March 7, 2016, article, Forbes' Brian Solomon quotes Target CIO Mike McNamara speaking at that day's Forbes CIO Summit:
“You have to overlay bits and bytes with experience…. There’s a ton you can tell about customers from data, but what doesn’t flow through the bits and bytes is intent or emotion, how the customer really feels about the brand.”
These types of deep-dive analyses are tantamount to mind-reading, at least from a marketer’s perspective. (I’m beginning to fear that the marketer’s perspective is the only one that matters in this corporate world of ours.) Consider the “21 scary things big data knows about you,” as listed by Forbes’ Bernard Marr in a March 8, 2016, article.
I haven’t even mentioned license-plate readers, store loyalty programs, facial recognition, and any number of other opportunities to supply our personal information, whether voluntarily or involuntarily. We can’t keep pace with the rate at which our private lives are becoming public. Knowledge is power, as the saying goes. How do we ensure that knowledge is more evenly distributed?