Saturday, May 30, 2020

Early Indications May 2020: Platforms and Truth

As I write, the president has recently signed an executive order that seeks to change the status of Internet platforms’ responsibility for the content their members post. The provision in question, which originated in the Communications Decency Act of 1996, has a long and fascinating history. Here’s the text in case you’re interested.

In its early years of owning YouTube, Google invoked the “safe harbor” provision of the CDA to exempt itself from responsibility for much of the content that YouTube’s users uploaded. Even though the CDA’s anti-indecency provisions were ruled unconstitutional by the U.S. Supreme Court, the “safe harbor” concept remained in force, eventually codified as section 230 of Title 47 of the United States Code, the body of communications law that is the legal underpinning of the Federal Communications Commission. Section 230 does two things. First, it protects providers of Internet services from liability for the speech exercised by users of that infrastructure. Second, it establishes that when those providers do choose to police some speech or behavior, they are not thereby obligated to police all of it.

Tarleton Gillespie of Microsoft Research points out a crucial distinction: section 230 was meant to apply to Internet Service Providers like AOL or Comcast, but was quickly invoked by social media companies a few years later. Because Facebook and YouTube are exempted from responsibility for what their users post, except in a few extreme cases such as child endangerment and terrorism, these companies have been slow to regulate other troubling behavior. Because the law in the U.S., where the companies are headquartered, is so favorable, those companies act to moderate content primarily for economic reasons rather than legal ones; enforcement from abroad has proven difficult, and Google has repeatedly failed to pay fines levied on it by the EU. For its part, eBay first tried to block sales of Nazi-related items only in France and Germany but, finding that nearly impossible, pulled such items from the site entirely (except for postage stamps and the like). The point here is that U.S. economic logic is dictating what billions of non-U.S. citizens see and don’t see.

Note that YouTube and its kin in the U.S. are (lightly) regulated by the laws related to the phone companies. YouTube is of course heavily reliant on telecommunications, but it is also a near neighbor to the movie industry (governed largely not by federal law but by self-regulation: movie ratings derive from the Motion Picture Association of America, a trade group) and in some ways to newspapers (in the U.S., protected under the umbrella of the First Amendment’s guarantee of freedom of the press, and thus regulated by court cases). Further, Google, as a major actor in the advertising ecosystem, is subject to oversight by the Federal Trade Commission. Predictably, any entity that spans so many jurisdictions (and that in only one country, a country representing a minority of its total traffic) can often escape close scrutiny by claiming exemption from any given mandate.

For the platforms, section 230 is a great gift. Facebook et al. can profit from content they neither produce nor must police. Inaccuracy, whether inadvertent or aggressive and programmatic, is rampant, and profitable. Digital literacy is troublingly low – especially when sites that could be used for fact-checking, namely Google, surface results that have been cleverly promoted by peddlers of falsehoods. Anti-Semitism, anti-vaccination falsehoods, misogyny, and racial stereotypes only begin the list of search areas that have been gamed.

The current debate will be important to watch. Since March, YouTube has been much more activist, both in removing false content related to the coronavirus and in generating positive, accurate videos under its own branding, using the #WithMe hashtag. At Facebook, Mark Zuckerberg allows political ads and commentary to say almost anything (official company policy notwithstanding), and internal voices concerned about the site’s role in ideological polarization have been silenced. Twitter, as we saw this week, has a long history of allowing harassment and verifiably false statements to stand, a history it will have a hard time walking back. The Biden campaign, for its part, is on record as opposing section 230, but in the direction of requiring platforms to do more policing of content rather than less, the opposite of what the Trump position argues.

This state of affairs, as lamentable as it is, stands in stark contrast to the technological optimism of the computing pioneers responsible for the conceptual and technical foundations upon which the Web, and later its platforms, were built. Stewart Brand migrated from the Whole Earth Catalog’s neo-homesteading ethos at the tail end of the 1960s to early online communities and the tellingly named Electronic Frontier Foundation. Tim Berners-Lee and his co-authors in 1992 articulated the ideal of the World Wide Web:

You would have at your fingertips all you need to know about electronic publishing, high-energy physics, or for that matter, Asian culture. If you are reading this article on paper, you can only dream, but read on. Since Vannevar Bush’s article (1945) men have dreamed of extending their intelligence by making their collective knowledge available to each individual by using machines.

Only six years after Berners-Lee, however, James Katz of Rutgers (formerly at Bellcore, the Bell operating companies’ R&D shop) astutely saw the potential for the Web to pollute that stream of knowledge rather than nourish it:

The Internet and the Web allow for the quick dissemination of information, both false and true; unlike newspapers and other media outlets, there are often no quality control mechanisms on Web sites that would permit users to know what information is generally recognized fact and what is spurious.  

Katz basically called out Internet-powered fake news 22 years ago.

The rapid evolution from Berners-Lee’s extreme optimism to the many and profound downsides of ubiquitous connectivity – mental and physical health concerns, the monetization of private life via unmonitored behavioral experimentation, the hacking of democratic institutions, trolls and shitposting, swatting, aggressively nasty disregard for the views of women and ethnic populations – is a story for another time. The dilemma of a post-fact society cannot be probed here, but it is real.

At the same time, this is not for a moment to suggest that the problems of accuracy in and abuses of online platforms are easy to address. Twitter’s vice president of trust and safety, Del Harvey, used simple statistics to drive home the scale of the moderation issue in a 2014 TED talk. “Given the scale Twitter is at, a one-in-a-million chance happens 500 times a day,” she stated. That changes operating assumptions. “For us, edge cases, those rare situations that are unlikely to occur, are more like norms.” If you assume 99.999% of tweets pose no threat whatsoever, “that tiny percentage of tweets remaining works out to roughly 150,000 per month. The sheer scale of what we’re dealing with makes for a challenge.” Bear in mind that Harvey was speaking in 2014, when YouTube uploads were probably half of 2019 levels, and that YouTube sees three times the monthly active users Twitter does, and the scale of the moderation problem – one that historically does NOT include fact-checking – becomes truly staggering.
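For the curious, Harvey’s figures are easy to reproduce with back-of-envelope arithmetic. Here is a minimal sketch in Python, assuming roughly 500 million tweets per day – the volume Twitter was publicly citing around 2013-2014, and my assumption rather than a figure from the talk itself:

    # Back-of-envelope check of Del Harvey's scale numbers.
    # Assumption (not from the talk): ~500 million tweets per day,
    # roughly the volume Twitter cited publicly circa 2013-2014.
    TWEETS_PER_DAY = 500_000_000

    # A "one-in-a-million" event at that volume:
    rare_events_per_day = TWEETS_PER_DAY / 1_000_000
    print(rare_events_per_day)        # 500.0 -> "happens 500 times a day"

    # If 99.999% of tweets pose no threat, 0.001% remain:
    problem_share = 1 - 0.99999
    problem_per_month = TWEETS_PER_DAY * problem_share * 30
    print(round(problem_per_month))   # 150000 -> "roughly 150,000 per month"

Both of Harvey’s numbers fall out of that single assumed daily volume, which is what makes the “edge cases are norms” point so forceful.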

This moderation problem is made harder yet by the platforms’ need to maintain the illusion of civility and neighborliness. Few platforms publicize their moderation teams; in 2013 an NPR reporter was denied access to moderators at both Google and Microsoft, though a spokeswoman at the latter said moderation was “a yucky job.” Much of the work is outsourced. In the fall of 2019, one moderator told a reporter he was paid $18.50 an hour (about $37,000 a year) to watch “VE” – violent extremism, primarily in Arabic – videos all day as an employee of Accenture, the tech services firm Google contracts with for some of its moderation. In October 2019 Google reported it had removed 160,000 pieces of violent extremist material from across the company’s properties. That’s about 450 per day, every day.

Removing or changing the section 230 “safe harbor” concept could force the big U.S. platforms to alter their business models, YouTube potentially less radically than Facebook. The concept is also well established after nearly 25 years of court decisions, so the standing of an executive order relative to that body of case law will need to be decided. In any event, a law intended to protect Internet Service Providers, one that evolved into the bedrock on which large-scale digital platform companies were built, has suddenly and loudly been called into question.