Internet Media 101: The bigger the lie, the greater the profit
Fact-driven media is a relatively new phenomenon. In a July 14, 2017, post on Medium, Tobias Rose-Stockwell writes that prior to the 1920s, newspapers were little more than outlets for whatever opinion/propaganda their owners chose to spout. As Rose-Stockwell states, "many had an extremely loose relationship with the facts."
In particular, the "systematic manipulation of the truth" in the period before and during World War I was recognized as a "powerful weapon" that could "destabilize whole nations." The journalism most of us grew up with has an editorial component (Is it important?) and a reporting component (Is it well sourced?). For decades, news delivered via print, radio, and television had a firm grasp on our attention.
Then along came the internet. In particular, along came social media, which Rose-Stockwell describes as "one of the most successful attention-capturing machines ever created." The 12,000-megaton gorillas dominating social media are the Google and Facebook news feeds, which are controlled by "robot editors" that take psychological manipulation to levels unprecedented in human history.
How Facebook remaps your thinking processes
You may think those stories appear in your Facebook news feed at random, but nothing could be further from the truth. Facebook carefully watches and records everything you do on the site, just as Google and other internet media do. The resulting map of your brain provides insight into your "patterns of engagement." The content of your news feed is determined by this map, not by whether the material you're shown is factual, unbiased, or newsworthy.
Items that are "designed to propagate fear, mistrust, or outrage" are treated the same as those that have received the traditional journalistic vetting by human editors for importance, and by human reporters for accuracy and completeness. The sole criterion for Facebook and the army of advertisers that generate billions of dollars in revenue for the company is clicks. And they are masters at pushing your buttons so you feel compelled to respond. Nowadays, it's all about emotion.
The goal of internet media such as Facebook is "affective engagement," which is defined as "an emotional reaction to content based on flashes of positive or negative feeling." Tech ethicist Tristan Harris refers to this trend as "a race to the bottom of the brain stem." The content matters much less than the emotion-creating, attention-grabbing headline. In fact, as much or more time can be spent on "packaging" a news-feed item as is spent on creating it.
The first problem is that the item's headline often has nothing to do with the item's content. The second problem is that nearly everyone who is served the item reads only the headline. Combine the goal of triggering an emotional response with the fact that the trigger itself is a lie, and you get people responding emotionally to lies. Then they share the lie, often with a comment expressing their outrage, and the flames of righteous indignation over something that never happened are thoroughly fanned.
The bigger the lie, the faster it spreads, and the harder it is to quash
Just the other day, I stumbled across a Fox News broadcaster stating that the crime rate is soaring and has never been higher. This is a blatant lie. As shown in a recent Pew Research Center/Gallup poll, there have been double-digit decreases in violent and property crime in the U.S. since 2008, yet prior to the November 2016 national election, 57 percent of people who had voted or planned to vote believed crime had gotten worse since 2008. Not surprisingly, 78 percent of voters who supported Trump believed the crime rate had increased.
Another example of the triumph of fear-mongering in the media is the media's fixation on terrorism. Terrorist attacks are covered to an extreme by media of all types, yet homicides resulting from terrorism are a small fraction of all homicides in the U.S. The sad irony is that the terrorists themselves benefit from this overblown coverage -- almost as much as the media, which generates tremendous revenue by making people afraid for no rational reason.
The Lie on Feet that is our President has benefitted more from the media's affective engagement than anyone else. According to analytics firm Mediaquant, between October 2015 and November 2016, Trump received $5.6 billion worth of free media time, which was three times as much media coverage as his nearest rival, Hillary Clinton.
The media has devolved into "propaganda for profit," according to Rose-Stockwell. Journalism once served as the principal weapon against propaganda, but it has fallen victim to the Facebooks of the world that profit from lies. Social media has learned how to "reliably hijack the human brain for attention," according to Rose-Stockwell. He concludes that we'll never be able to tackle the serious problems that face humanity if we can't do so with a "clear head."
The battle for our attention shows no signs of abating. We have to be extra cautious and judicious in deciding who we give our valuable attention to. The secret, proprietary algorithms used by Google, Facebook, Apple, Snapchat, Twitter, and "literally every major media provider" are tinkering with our thought processes and emotions -- without us being aware of it.
SESTA puts internet speech in the cross-hairs: The Communications Decency Act of 1996, 47 U.S.C. § 230, states that entities providing internet platforms aren't liable for content posted by third parties using those platforms. A new bill in Congress called the Stop Enabling Sex Traffickers Act would make the people behind internet platforms responsible for that third-party content. The Electronic Frontier Foundation's Elliott Harmon issued a call to action in an August 2, 2017, post.
As Harmon states, sex trafficking on the internet is "a real, horrible problem." However, SESTA's provisions would weaken Section 230 protections. The added liability for the acts of third parties will prevent businesses from investing in the internet, according to Harmon. In addition, SESTA gives states the power to enact laws that censor the internet so long as the stated purpose of the laws is to target sex traffickers.
If that's not bad enough, SESTA weakens Section 230's "Good Samaritan" provision, which shields intermediaries from punishment when they voluntarily filter or block some types of content, even if other illegal material slips through. SESTA makes platforms liable if they are aware that sex-trafficking ads appear on their services.
Harmon claims this will lead to platforms choosing not to review any of the content that appears on their services. This includes any attempt to enforce community conduct standards. Platforms will have to choose between enforcing strict rules that infringe on users' free speech and privacy, or taking no actions whatsoever.
Facebook teams with the News Literacy Project: Facebook wants you to be "a skeptical and responsible consumer of news," according to a January 11, 2017, post on the News Literacy Project blog. As part of the Facebook Journalism Project, Facebook will offer "videos and other multimedia elements familiar to Facebook users," all 1.8 billion of them. A series of public service announcements will spread the word among Facebook denizens.
In addition to providing the NLP and similar organizations with financial and engineering support, the Facebook Journalism Project offers training to journalists and is working with news organizations to create "new storytelling formats." Facebook also pledges to make it easier for users to report bogus posts, and to make it more difficult for fake news purveyors to profit from their scams.
Time for all your notifications to go into permanent snooze mode: Wired's David Pierce has some advice for you: Turn off your notifications. All of them. In a July 23, 2017, post, Pierce explains that none of those notifications that set off your phone or computer are intended for your benefit. He cites a 2016 study by Deloitte reporting that people look at their phones an average of 47 times a day, and young people do so 82 times a day on average.
That's how often you're allowing an advertiser to grab your attention at a time of the advertiser's choosing. Make no mistake, nearly all the notifications are ads, though they may profess to be bearers of important tidings. Don't waste your time trying to manage your notifications, either. That just puts the app developers in charge. Pierce makes exceptions for phone calls and text messages, but only "if you must."
Sadly, you can't just flip a switch on your phone to silence all your notifications at once. Both Android and iOS require that you go into settings and deactivate alerts on an app-by-app basis. If you don't think you're ready to go cold turkey on your alerts, Pierce offers a compromise: In iOS, you can disable everything except the "Show in Notification Center" option. You won't be interrupted by sound, banner, badge, or lock screen, but the notifications will be there when you pull down the "windowshade." (Android's "Show Silently" option is similar.)
Un-anonymizing data is getting easier and easier: If you think what you do on the internet is anonymous, ask the judge whose porn preferences were discovered, or the German MP whose medication history was found in a store of "anonymized" data acquired openly and legally. In a presentation at the recent DefCon event in Las Vegas, journalist Svea Eckert and data scientist Andreas Dewes explained how they acquired the data for free and applied relatively simple methods to de-anonymize the browsing histories of millions of people. The Guardian's Alex Hern describes the presentation in an August 1, 2017, article.
The researchers created a "fake marketing company" they said had developed an algorithm that would improve marketing. They contacted hundreds of companies to ask for anonymized browsing histories, and ultimately they compiled a database of 3 billion URLs and 9 million sites visited by 3 million German users over a 30-day period. The researchers employed many different techniques to tie specific browsing histories to individuals, and they claim they can identify a person knowing as few as 10 of the sites that person has visited.
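The core idea, that a handful of known sites can narrow millions of "anonymous" histories down to one person, can be sketched as a simple set-intersection query. The following Python sketch is illustrative only; the pseudonymous IDs and domain names are invented, and the real research used far larger data and more sophisticated matching:

```python
# "Anonymized" database: pseudonymous ID -> set of visited domains.
# (Invented example data, standing in for millions of real histories.)
histories = {
    "user-001": {"news.example", "forum.example", "bank.example", "blog.example"},
    "user-002": {"news.example", "shop.example", "mail.example"},
    "user-003": {"news.example", "forum.example", "niche-hobby.example"},
}

def candidates(known_sites, histories):
    """Return the pseudonymous IDs whose history contains every known site."""
    known = set(known_sites)
    return [uid for uid, visited in histories.items() if known <= visited]

# An attacker who learns a few sites a target visited (say, from links the
# target shared publicly) intersects that knowledge with the database.
matches = candidates({"forum.example", "niche-hobby.example"}, histories)

if len(matches) == 1:
    # A unique match means the "anonymous" history is anonymous no longer.
    print("Unique match:", matches[0])
```

The more distinctive the known sites, the faster the candidate list shrinks; a popular site like "news.example" above matches everyone, while one niche site can be enough to single someone out.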
There are so many public sources of information about people that pairing an anonymous browser history to a particular person takes only a few seconds -- and requires no hacking or investigative skills whatsoever. By the way, the source for most of the data used in this research was the free Web of Trust tool that rates links for reliability before you click them. I used to recommend Web of Trust. I don't anymore.
(The UK Information Commissioner's Office Guide to Data Protection offers an Anonymization code of practice (pdf) that describes anonymization techniques and rules relating to data disclosure.)