WWW: What Went Wrong?

How and why social media came to be awful

The Internet sucks.

This is something that pretty much everyone seems to agree on to some extent. Somewhere along the way, things went seriously awry. The result has been a series of news stories about the Internet’s impact on the real world that are hard to distinguish from the dystopian nightmares of science fiction.

Social media has been a big part of many of these nightmares. WhatsApp triggering lynchings in India. Eating disorder rates skyrocketing as Instagram struggles to eliminate pro-anorexia postings. Twitter struggling to get rid of neo-Nazis. Facebook fomenting genocide in Myanmar. Live video-streamed suicides.

This is my attempt to tell a story about how we got here. In a followup article, I’ll talk about some ideas for how we might improve matters.

The men who ruined the Internet

You might think that it would be hard to find a starting point for all these problems, but I think there’s a good case to be made that almost all of it comes down to the work of three men: G. M. O’Connell, Joe McCambley and Andrew Anker.

In 1994, Anker was CEO of Hotwired, the web spinoff of WIRED magazine. They decided to offer AT&T the chance to run an advertisement on their web site. O’Connell and McCambley of Modem Media devised the ad. It looked like this:

First banner ad

Notice that the ad doesn’t mention AT&T at all. It’s pure clickbait, even though that term wouldn’t be invented until years later.

The invention of banner ads has been called the Internet’s original sin. Even the inventors of that first banner ad now lament what advertising has done to the Internet. McCambley’s kids apparently tell him it’s like he invented smallpox.

Advertising has raced to the bottom and become universally hated — yet it has remained the mainstay of Internet site funding. While advertising’s effect on magazine web sites was bad, rendering some of them barely readable, its effects on social media turned out to be far worse.

The invention of click tracking

Once Internet ads were invented, it didn’t take long for them to get worse. The web site Tripod started carrying banner ads, until one day a car manufacturer was horrified to learn that their ad was appearing on a page about the joys of anal sex.

One of the site’s developers had a bright idea to sidestep the issue: he made the page open a separate window containing the ad, so that the car ad wouldn’t be seen alongside the butt stuff. And so the pop-up ad was accidentally invented.

Before long the Internet was infested with pop-up ads. As the ads became more and more annoying, click-through rates got lower and lower. That first banner ad had a click-through rate of 44%, but by 1998 a more typical rate was 3%. (Today, it’s more like 0.3%.)

Advertisers were unhappy; they felt they were paying a lot for not many actual leads. So in 1998, Jeffrey Brewer of Goto.com presented a pay-per-click (PPC) search engine at a TED conference in California. The PPC model itself had been invented by Goto’s founder, Bill Gross. Before long, multiple advertising networks were offering pay-per-click Internet advertising placement.

There was one notable exception to the rush to pay-per-click: Google. Their advertising network started out as CPM (Cost Per Mille), charging per thousand ad impressions.
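To make the difference between the two pricing models concrete, here’s a minimal sketch. The function names and dollar figures are mine, purely illustrative, and don’t reflect any real ad network’s rates:

```python
def cpm_cost(impressions: int, rate_per_mille: float) -> float:
    """CPM: the advertiser pays per thousand impressions, clicked or not."""
    return impressions / 1000 * rate_per_mille

def ppc_cost(impressions: int, clickthrough_rate: float, price_per_click: float) -> float:
    """PPC: the advertiser pays only when a user actually clicks."""
    return impressions * clickthrough_rate * price_per_click

impressions = 100_000
print(cpm_cost(impressions, rate_per_mille=5.00))          # flat fee, regardless of clicks
print(ppc_cost(impressions, 0.003, price_per_click=0.50))  # cost scales with actual clicks
```

Notice what PPC implies operationally: to bill per click, the network has to observe and record every click, which is exactly the surveillance requirement discussed below.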

I have no proof, but I suspect that their reluctance to embrace the PPC model was because of their early guiding principle of Don’t Be Evil. The problem with PPC is that you need to track whether users actually click on the ads, which means suddenly you have to carry out surveillance of which sites they are visiting and log that information in a database.

Eventually, though, Google gave in, and in 2002 they introduced pay-per-click pricing.

Vulture capitalists swoop in

Advertising had set the stage for massive surveillance on the Internet, but there was another important force that began to drive its development: venture capitalists (VCs).

Even today, VCs seem to favor growth over pretty much anything else. Look at Uber, for example: the company has been posting massive losses every year since it was founded, but it still gets valued at tens of billions of dollars. (The hope, apparently, is that it will eventually become enough of a monopoly to be able to sharply increase prices on a captive market.)

Amazon went for around 20 years without becoming a major profit-maker. That finally changed in the last few years — but mostly because of their cloud computing business, not their retail business. Their stock price has been rocketing since 2008, though, because they were clearly growing their business like crazy.

If you’re a social web site taking VC funding, the thing they will want you to show more than anything else is user growth and engagement. They will want detailed metrics. This has had a major effect on the behavior of VC-funded social networks.

Facebook and the News Feed

Facebook started out as a place where you posted short personal updates about your life and read the updates posted by your friends. By focusing on providing that functionality as efficiently as possible, Facebook managed to overtake MySpace by 2009 to become the most visited site in the US.

But, after some privacy issues around 2010, people began to be more wary about posting personal updates.

Between 2014 and 2015, for example, personal sharing dropped by a massive 21%. Instead of the place where you caught up with friends, Facebook started to be the place where people posted news stories and cool links, and discussed them.

But Facebook needed to keep people coming back, because they needed to keep their metrics up. They tried to persuade users to post personal updates — perhaps you’ve noticed the prompts still appearing from time to time — but it didn’t work.

So, they did what they had to do to keep the numbers up: they turned your updates feed into the newsfeed, the place where you went to get news, and they introduced a new algorithm designed to maximize engagement.

Hate-clicking to oblivion

The algorithmic news feed serves several important purposes for Facebook (and other proprietary social networks such as Twitter). All of these purposes are consequences of the advertising-based VC-funded business model, and all of them are diametrically opposed to the best interests of the users of the system.

The first function of the algorithm is to make sure that the site delivers a mix of content optimized for engagement. The more a piece of content makes you click on it, share it and post comments, the better it is for the site’s bottom line. Fake news stories and extremist content are excellent for engagement. Users who agree with the post’s slant repost it or click “like”. Users who disagree feel compelled to post corrections, triggering angry discussions which keep everyone coming back. Either way, the social network wins.

The second function of the algorithm is to force people to pay if they want their updates to be seen. Even if you have subscribed to a particular company’s page, stating that you want to see updates from them, the algorithm will hide most of their updates unless they pay to reach you. So the content you want to see is, by default, deprioritized in favor of high engagement clickbait and fake news.

The third function of the algorithm is to ensure that the feed of updates is unpredictable. If the feed were simply chronological, you could read back until you saw something you had seen before, and then log out confident that you hadn’t missed anything. Making the order of posts appear random removes that stopping point: even if you think you’ve read everything, hitting refresh will surface new posts from the interval of time you thought you’d just read. Since random rewards are highly addictive, this maximizes the site’s addictiveness. Even people who lack an addictive personality are likely to find that Fear Of Missing Out keeps them reading, and rereading, and rereading, always feeling anxious.
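The first and third functions above can be sketched in a few lines. This is a toy model, not any network’s actual ranking code — the weights and the jitter range are invented for illustration:

```python
import random

def engagement_score(post: dict) -> float:
    """Rank posts by predicted interaction, not by recency."""
    # Hypothetical weights: comments and shares signal more "engagement" than likes.
    score = post["likes"] + 2 * post["comments"] + 3 * post["shares"]
    # Random jitter makes the ordering differ between refreshes (function three).
    return score * random.uniform(0.5, 1.5)

def build_feed(posts: list[dict], shown: int = 3) -> list[dict]:
    return sorted(posts, key=engagement_score, reverse=True)[:shown]

posts = [
    {"id": 1, "likes": 10, "comments": 0,  "shares": 0},   # mild personal update
    {"id": 2, "likes": 5,  "comments": 40, "shares": 12},  # angry argument thread
    {"id": 3, "likes": 2,  "comments": 1,  "shares": 0},   # a friend's photo
    {"id": 4, "likes": 0,  "comments": 25, "shares": 30},  # outrage-bait link
]
print([p["id"] for p in build_feed(posts)])
```

Even in this crude sketch, the high-conflict posts (2 and 4) reliably crowd out the quiet personal updates, and no two refreshes produce quite the same ordering.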

The death of Google Reader

One site that was incredibly popular with its users, and which kept them active for hours every day, was Google Reader. It had a big problem, though: It wasn’t advertising friendly. Not only was it hard to advertise effectively in web feeds, it was also hard to guide users into engaging with ads. The feed reader UI didn’t allow for many of the time-tested tricks of Internet advertising, like popups and focus-stealing and ads that resize to cover most of the screen.

But worse, Google Reader put the reader in control of what was presented to them and how it was organized and sorted. It wouldn’t allow Google to engage in the kind of algorithmic newsfeed tweaking Facebook was doing.

So Google killed Reader, hoping people would migrate to Google Plus instead.

Mozilla removed web feed discovery and killed its feed reader. More people moved to social media sites as a substitute.

Twitter decided that it, too, needed to maximize engagement algorithmically. And we arrived at a place where everyone (except a few cranks like me) relied on social networks to get their news, and the network was generally Facebook.

The development of surveillance capitalism

Once the social networks had developed an addictive algorithmic newsfeed and had a captive audience, they began demanding payment to deliver posts via the algorithm. But if you have tens of thousands of followers, and suddenly find that you need to pay to reach them, you’re going to demand a way to be more selective about which of them you reach. So the next inevitable step was that the commercial entities using the social networks demanded the ability to micro-target a subset of their followers, or a subset of people who might want to follow them if shown an ad.

So the personal information posted to the social network by the users started to be data-mined with ever increasing sophistication. Mobile apps started vacuuming up everything from your personal text messages to your real-time location. This microtargeting functionality proved perfect for propagandists who wanted to deliver content intended to sway the 2016 US elections and the UK Brexit referendum.

A pool with no filter

So that’s how the Internet’s original sin of advertising funding, plus the use of venture capitalist funding, led to surveillance capitalism. That wasn’t the only negative effect of Internet advertising on social platforms, though.

When the big advertising and media platforms were constructed by Internet entrepreneurs, they shared a belief that no regulation was needed — not even supervision. They planned to set up the monetization machinery, have it run fully automatically, skim a little off the top in profit, and spend their time doing something more interesting like scanning huge numbers of books or building VR worlds.

There were some initial problems with spam and scammers. As mentioned before, ads soon raced to the bottom in obnoxiousness, and we were invited to punch the monkey and fight with dozens of pop-up windows. Some simple technical blocks were put in, a few minimal review procedures were set up, and the gravy train continued its journey.

While the primitive bad actors were blocked out, the more sophisticated and malignant ones kept going. It turned out that hatred, racism and fearmongering were really easy to monetize — as long as the bandwidth and hosting were free, there was advertising revenue, and nobody paid attention.

It took years to get our new media overlords to take a look at what they were funding. As long as they were making money, they didn’t care. But eventually, advertisers realized what their brands were being associated with, and realized that the platforms were perfectly capable of stopping it. They pushed back on Google and Facebook, who suddenly decided that offering a platform to conspiracy theorists and neo-Nazis didn’t work from a cost-benefit point of view.

Unfortunately, it was too late. When people like Alex Jones started to lose their free hosting and advertising revenue, their response was to double down on the crazy to increase earnings from their core audience.

Other problems of social networks

So far in the story I’ve mentioned the following problems:

  • Addiction to social media

  • Fear Of Missing Out (FOMO) and the negative psychological consequences

  • Pervasive annoying advertising

  • Privacy invasion to justify advertising spend and allow microtargeting

  • Spread of extremism through engagement maximization

  • Spread of propaganda through microtargeting

  • Funding of extremism through automated advertising

All of these problems developed as the eventual result of a sequence of choices made by the owners of our proprietary social networks. The choices were driven by venture capital requirements, and by the nature of online advertising.

My belief is that any social network funded by advertising and/or VC funding will inevitably end up making the same decisions, and therefore end up with the same problems. It’s the business model that’s the root problem.

Now I want to go on to look at some additional problems that are pervasive on social media in 2019, and examine why those problems exist.

Pile-ons

A pile-on occurs when social media is used to publicize something about someone, in the hope that others will “pile on” and harass the person. An early example was a 2013 joke Twitter post by Justine Sacco, intended for her small circle of 170 followers. It blew up on social media, becoming the #1 trending topic. Eventually Sacco lost her job because of it.

This phenomenon has now become a tool of choice for bad actors, who will mine a user’s old social media postings for something which can be taken out of context and publicized in order to ruin them. Pile-ons are often organized on message boards like 4chan or Reddit.

Independent social networks can be just as susceptible to pile-ons as the big commercial sites. Recently, Wil Wheaton was driven off Mastodon by a coordinated pile-on which overwhelmed his server administrator’s ability to deal with the traffic.

Stalkers

If you’re a man, you might not have had a problem with being stalked via social networks. Chances are, though, you know at least one woman who has gone through the experience.

The big social networks are built to allow advertisers to follow you around the Internet, push messages at you, and find out everything about your private life. It’s not surprising, therefore, that stalkers have used the same tools to harass their victims. Social networks have recorded information like users’ physical locations and the contents of their text messages, all for advertising purposes.

All too often, social networks have introduced new tools without thinking through how they might be abused. For example, an ongoing issue with Google Drive is that there’s no way to remove something shared with you by someone you no longer want to hear from.

Individual stalking is bad, but when stalking is crowdsourced as part of a pile-on it can be devastating.

Trolls

It’s easy to think that trolls aren’t a major problem, but for some people they make Internet social media pretty unbearable. A truly dedicated troll won’t be stopped by routine blocking — they will make sockpuppet accounts and throwaways to troll with.

The major proprietary social networks have put barriers in place to slow creation of new accounts. This has somewhat limited the scope of trolling. Many of the new federated social networks lack any facilities for dealing with the problem, as they have no way to prevent new users from commenting on your posts by default.

Bots

Bots are an issue for three main reasons.

The first is that they are often used for astroturfing, inflating follower counts and gaming the system to make it look as if people are interested in particular political talking points.

The second issue is that bots can be used to troll or harass human users in an automated fashion; for example, Jon Ronson has talked about his battle against a spambot which impersonated him.

The third issue is that bots can put significant load on servers. For a federated system where we hope or expect servers to be run by individuals, this could be a major problem.

Government censorship

A number of countries have enacted laws censoring social media.

China has over 4,000 official government censors tasked with filtering Internet use.

Turkey has banned social media on multiple occasions. This has happened even though Turkey is attempting to become a member of the EU and is supposedly subject to the European Convention on Human Rights — it simply “suspended” its compliance with the Convention.

The US has passed FOSTA and SESTA. These bills make web site owners criminally liable if a court finds that they have assisted, supported or facilitated sex trafficking — and the liability applies retroactively. Many web sites used by sex workers have shut down out of fear that they might be found liable for trafficking.

The EU has attempted to pass legislation requiring that all ISPs have copyright infringement detection software. The software would detect the upload of material already uploaded and flagged as owned, and prohibit the re-uploading. Currently it seems likely this would work about as well as Tumblr’s new 2018 porn filter, but given how quickly machine vision technology is improving we should probably assume that it will eventually be workable, and plan accordingly.

Corporate censorship

Censorship is routine on all of the major corporate social networks.

Facebook has a large set of documents in which they have attempted to codify a set of universal content rules for the entire world. One upshot of this is that Facebook bans nudity, including buttocks and nipples, even if painted or drawn rather than photographic. There’s also a prohibition on expressing support for violent resistance against an occupying power, because Facebook has stated that it doesn’t want to have to decide who’s a terrorist and who’s a freedom fighter.

Twitter has a lot less censorship, but many people have been given temporary bans for swearing at famous people who have blue “verified” checkmarks.

Problem list so far

So far I’ve identified 14 pervasive problems of today’s online social networks:

  • Addiction to social media

  • Fear Of Missing Out (FOMO) and the negative psychological consequences

  • Pervasive annoying advertising

  • Privacy invasion to justify advertising spend and allow microtargeting

  • Spread of extremism through engagement maximization

  • Spread of propaganda through microtargeting

  • Funding of extremism through automated advertising

  • Extremists and propagandists

  • Pile-ons

  • Stalkers

  • Trolls

  • Bots

  • Government censorship

  • Corporate censorship

In the next article, I plan to look at some design principles I feel we should adopt when building social networks, in order to ameliorate or eliminate many of these problems.