In the previous article I talked about some of the history of social media, and identified 14 pervasive problems of today’s online social networks:
Addiction to social media
Fear Of Missing Out (FOMO) and the negative psychological consequences
Pervasive annoying advertising
Privacy invasion to justify advertising spend and allow microtargeting
Spread of extremism through engagement maximization
Spread of propaganda through microtargeting
Funding of extremism through automated advertising
Extremists and propagandists
I’d now like to go on and consider design principles we might adopt when building social networks, in order to ameliorate or eliminate these problems. Unlike the previous article, I expect this one to trigger significant disagreement.
Scripts should be blocked at server level, and HTML messages (if allowed) should be sanitized to some safe subset. It is not sufficient to rely on clients to behave safely.
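To illustrate, here is a minimal sketch of server-side sanitization to a safe subset, using only Python’s standard library. The particular allowlist of tags and attributes is my own assumption, not a recommendation from the article; a production system would want a hardened library rather than this sketch.

```python
from html.parser import HTMLParser
from html import escape

ALLOWED_TAGS = {"b", "i", "em", "strong", "p", "br", "a"}   # hypothetical safe subset
ALLOWED_ATTRS = {"a": {"href"}}
DROP_CONTENT = {"script", "style"}  # drop these tags AND their text content

class Sanitizer(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.skip_depth = 0  # >0 while inside a dropped tag

    def handle_starttag(self, tag, attrs):
        if tag in DROP_CONTENT:
            self.skip_depth += 1
            return
        if self.skip_depth or tag not in ALLOWED_TAGS:
            return
        kept = []
        for name, value in attrs:
            # keep only allowlisted attributes with http(s) values
            if name in ALLOWED_ATTRS.get(tag, set()) and value and \
               value.lower().startswith(("http://", "https://")):
                kept.append(f' {name}="{escape(value, quote=True)}"')
        self.out.append(f"<{tag}{''.join(kept)}>")

    def handle_endtag(self, tag):
        if tag in DROP_CONTENT:
            self.skip_depth = max(0, self.skip_depth - 1)
            return
        if not self.skip_depth and tag in ALLOWED_TAGS and tag != "br":
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(escape(data))

def sanitize(html_text: str) -> str:
    s = Sanitizer()
    s.feed(html_text)
    s.close()
    return "".join(s.out)
```

Anything not explicitly allowed is escaped or dropped, so event handlers and `javascript:` URLs never reach clients.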
One of the biggest problems of any social network is how to handle moderation in a way which can scale, without setting up any single party as content gatekeeper.
I propose that the best model for this is that the person who posts a new top level message owns the moderation responsibility for all replies to that message. In other words, if you start the thread, you own it.
To make this acceptable to users, it will be necessary to offer controls determining who can post to your threads.
The default should be that only your friends can reply to your posts.
If users choose to open up their posts to replies from strangers, the default should be that only they will see replies from strangers, until they approve the replies. (Replies from friends would still be visible to all automatically.) This is the model used with success on many blogs.
The option to make any reply from anyone visible instantly should not be supported by the system. It is unsafe, as it offers an opportunity to propagate spam and propaganda, and allows for pile-ons, trolling, and other bad behavior.
To be clear, I propose that it be technically impossible to have replies from strangers propagate to everyone without the approval of the thread owner; the restriction should be imposed at network level. Otherwise, bad actors will use custom clients to leverage the network for uncontrolled thread propagation.
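The visibility rules above can be sketched as a single network-level check. The data shapes here (`Thread`, `Reply`, an approved-reply set) are my own illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Thread:
    owner: str
    friends: set                                 # the thread owner's friends
    allow_strangers: bool = False
    approved: set = field(default_factory=set)   # reply IDs the owner approved

@dataclass
class Reply:
    id: str
    author: str

def reply_visible_to(thread: Thread, reply: Reply, viewer: str) -> bool:
    """Decide, at network level, whether `viewer` may see `reply`."""
    if reply.author == thread.owner or reply.author in thread.friends:
        return True                  # replies from friends propagate to everyone
    if not thread.allow_strangers:
        return False                 # strangers may not reply at all
    if reply.id in thread.approved:
        return True                  # owner approved this stranger's reply
    # unapproved stranger reply: visible only to its author and the thread owner
    return viewer in (reply.author, thread.owner)
```

Because this predicate runs on the server, a custom client cannot bypass it to broadcast unapproved replies.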
Limiting reply permission and visibility within threads is a good start, but not sufficient to avoid all of the problems identified.
One of the reasons why Twitter is the top network for astroturfing is that by default, everything posted is visible worldwide, instantly. Bot accounts engage in mass posting of political propaganda across thousands of new threads.
I believe that it’s time to reconsider whether worldwide posting is a good idea at all. Perhaps the network should enforce limited visibility of top level posts in some way.
Suppose, for example, the system only showed you threads if they were started by someone you had listed as a friend. This could be enforced at network level by propagating messages according to the friend relationship graph. See Scuttlebutt / Patchwork for an example of a system that works like this.
A system propagating messages according to the friend graph is naturally resistant to spam — you have to indicate that you want messages from a person before you will receive them.
To allow discovery of new friends, a separate feed could be offered that would show public friend-of-friend posts and their associated threads.
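A sketch of both mechanisms, assuming the friend graph is a simple mapping from each account to the set of accounts it has friended (a simplification of how Scuttlebutt actually replicates logs):

```python
def deliver_post(graph: dict, author: str) -> set:
    """A post propagates only to accounts that list `author` as a friend.
    `graph` maps each account to the set of accounts it has friended."""
    return {user for user, friends in graph.items() if author in friends}

def discovery_feed(graph: dict, user: str, public_posts: list) -> list:
    """Separate opt-in feed: public posts by friends-of-friends,
    excluding existing friends and the user themselves."""
    friends = graph.get(user, set())
    fof = set()
    for friend in friends:
        fof |= graph.get(friend, set())
    fof -= friends | {user}
    return [text for (author, text) in public_posts if author in fof]
```

Spam resistance falls out of `deliver_post`: an account nobody has friended propagates to an empty set.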
Note that enforcement of narrowcasting is not the same as placing access controls on a post. It’s quite likely that users will want to be able to control who can see a thread in order to restrict its visibility even further, which is fine.
As well as preventing effective use of the system for spamming and propaganda, enforced narrowcasting also makes it even harder to organize pile-ons.
In addition to friend-network narrowcasting, it might be worth placing an absolute limit on the number of friends.
In the 1990s, evolutionary anthropologist Robin Dunbar noticed a correlation between primate brain size and social group size. It appears that the primate brain can only maintain stable relationships with a limited number of individuals. Beyond that number, primates tend to characterize individuals as either “ingroup” or “outgroup”, and mentally assume outgroup homogeneity.
By extrapolating observed group sizes and brain sizes, it has been computed that for human beings, Dunbar’s Number is somewhere between 100 and 250, with 150 being a commonly used estimate. Beyond that limit is our layer of acquaintances, which tops out at 500 or so.
In other words, our social brains are simply not wired to behave socially towards more than around 150 individuals. Studies suggest this limit applies to online interaction as well, which likely explains why interactions within large groups on the Internet tend to devolve into antisocial, empathy-free shouting matches.
So, I propose that perhaps it’s time to start taking Dunbar’s research into account when designing social network software. Users should be limited to (say) 200 friends and 500 acquaintances.
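Enforcing such caps is mostly a matter of checking at write time. A minimal sketch, using the limits proposed above (the exact numbers and the promote-on-friending behavior are my assumptions):

```python
FRIEND_LIMIT = 200          # caps proposed in the text, informed by Dunbar's research
ACQUAINTANCE_LIMIT = 500

class Contacts:
    def __init__(self):
        self.friends = set()
        self.acquaintances = set()

    def add_friend(self, who: str) -> bool:
        """Enforce the cap at write time; reject rather than silently drop."""
        if len(self.friends) >= FRIEND_LIMIT:
            return False
        self.friends.add(who)
        self.acquaintances.discard(who)   # promotion: don't count someone twice
        return True

    def add_acquaintance(self, who: str) -> bool:
        if who in self.friends:
            return False                  # already in the closer layer
        if len(self.acquaintances) >= ACQUAINTANCE_LIMIT:
            return False
        self.acquaintances.add(who)
        return True
```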
I expect that this proposal will be controversial. It may be based on scientific research, but it flies in the face of what users expect from a social network. Remember, though, that ethical design is about giving people something that is good for them, not just whatever they say they want.
It’s time to build social networks which forget. This was once the norm; in the days of Usenet, old posts would expire after a certain number of days. Today, too many social networks default to making all your posts instantly available everywhere to anyone who requests them. This practically encourages pile-ons, and also provides detailed databases for use by stalkers, trolls, government agencies, and corporate data miners.
To discourage a lot of bad behavior, posts should expire.
Yes, technically it’s not possible to make sure that nobody who receives your posts keeps a permanent copy of all of them. However, when combined with narrowcasting, this should be much less of a problem. Most people will be able to count on their friends not keeping archives of everything to embarrass them with later.
Even if it’s not possible to prevent archiving by third parties, there’s still a useful middle ground between “guarantee deletion of old posts” and “keep all old posts forever”. The network should be built around the assumption that nodes will want to expire old content after a certain number of days. Users should also have the option of adding an additional suggested expiry date; posts would be deleted either on the expiry date, or as part of the server’s retention algorithms.
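The retention rule described above reduces to “delete at whichever deadline comes first”. A sketch, with a hypothetical 90-day node retention window:

```python
from datetime import datetime, timedelta
from typing import Optional

SERVER_RETENTION = timedelta(days=90)   # hypothetical per-node policy

def deletion_time(posted_at: datetime,
                  user_expiry: Optional[datetime] = None) -> datetime:
    """A post is deleted at its user-set expiry date or at the end of the
    node's retention window, whichever comes first."""
    retention_deadline = posted_at + SERVER_RETENTION
    if user_expiry is None:
        return retention_deadline
    return min(user_expiry, retention_deadline)

def expire(posts: list, now: datetime) -> list:
    """Keep only posts whose deletion time has not yet passed.
    Each post is a dict with 'posted_at' and an optional 'expiry'."""
    return [p for p in posts
            if deletion_time(p["posted_at"], p.get("expiry")) > now]
```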
Multiple identity support
While it is useful to be able to set up lists of people for access control, this is not sufficient for privacy. People really need to be able to set up multiple profiles with separate identities which are not visibly linked.
For example, a schoolteacher will likely want a kid-friendly limited-info profile they can share with students, and a separate personal profile they use when interacting with their adult friends. They might have a third profile for a part time business they run, and a fourth for the band they play in.
Identity switching needs to be fast and convenient, and the current identity needs to be clearly visible in the user interface.
Separating out aspects of a person’s identity in this way hinders data mining, stalking and trolling. One important aspect of multiple identity support is support for pseudonymity.
Pseudonymity and name mapping
It should go without saying that social networks should not force “real names” onto people, or demand any particular format for names — see “Falsehoods programmers believe about names”.
While allowing complete anonymity does increase bad behavior, allowing pseudonyms does not appear to. (Pseudonyms have been an accepted part of serious discourse in America since the Founding Fathers; see the Federalist Papers, for example.)
The idea that requiring a real name on an account will prevent bad behavior is commonplace, but sadly wrong. Real names also make the problems of government censorship, pile-ons and stalking dramatically worse.
Pseudonyms are not without problems, though. I’ve observed that pseudonyms often make it harder to perceive the poster as an actual person, particularly when the pseudonym is changed regularly, or is a slogan rather than a name.
For this reason I advocate that client software should allow users to specify a display name that they want to use for a given network ID. Then, just as text messages from a given phone number are displayed with the name I’ve put in my address book, so social messages from a given account would be displayed to me with the name I’ve specified. The user-specified pseudonym would thus be a default, overridable by reader preference.
This would also help with situations where multiple people have the same or similar pseudonyms, or go by just their first name. (I know at least four Kates.)
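The lookup order described above is simple: the reader’s own address book wins, then the poster’s chosen pseudonym, then the raw network ID as a last resort. A sketch (the dict-based storage is an illustrative assumption):

```python
def display_name(account_id: str,
                 address_book: dict,
                 network_profiles: dict) -> str:
    """Resolve the name shown for a post, address-book style:
    reader's override > poster's chosen pseudonym > raw network ID."""
    if account_id in address_book:
        return address_book[account_id]        # reader's preference wins
    if account_id in network_profiles:
        return network_profiles[account_id]    # poster's default pseudonym
    return account_id                          # fall back to the bare ID
```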
Another crucial aspect of the design of a user-focused social network is that the order of presentation of posts should be clearly indicated, and should be under the control of the reader.
The views users are most likely to want are those presenting posts in reverse chronological order — either by order of creation, or by order of last update.
Getting back to chronological feeds and away from secret algorithms should eliminate a lot of the addictiveness, stress and frustration associated with social networks.
It would be nice to reimplement some of the other features once offered by Usenet, such as kill files (hide all posts mentioning particular keywords) and unread marks (hide all posts I’ve seen before no matter what their date). These would help people to keep control of their social network usage even further.
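All three features (reverse-chronological ordering, kill files, unread marks) compose into one feed-building step. A sketch, with the post fields chosen for illustration:

```python
def build_feed(posts: list, killfile: set, seen_ids: set,
               sort_by: str = "created") -> list:
    """Reverse-chronological feed with Usenet-style kill file and unread marks.
    `posts` are dicts with 'id', 'text', 'created' and 'updated' fields;
    `sort_by` selects creation order or last-update order."""
    visible = [
        p for p in posts
        if p["id"] not in seen_ids                                   # unread marks
        and not any(k.lower() in p["text"].lower() for k in killfile)  # kill file
    ]
    # newest first, under the reader's chosen ordering -- no secret algorithm
    return sorted(visible, key=lambda p: p[sort_by], reverse=True)
```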
A major trend in software design has been “gamification”: implementing behavior similar to video games on things which are not video games, in order to incentivize users.
Most of the gamification features on social networks have negative effects:
“Like” buttons encourage superficial interactions and discourage meaningful comment.
Downvote buttons as part of score-based moderation have been found to lead to worse behavior by the users whose comments are downvoted.
Trending topics lists spread misinformation and propaganda, and encourage repeated posting of the same content in order to try to “win” a place at the top of the charts, leading to the use of bots.
Any user-focused social network must be federated. There is no practical way to ensure that a corporation, nonprofit or other organization cannot become abusive towards users.
Ideally the federated system should support identity migration between nodes, so that if a particular server becomes unreliable you can move to another without all your friends needing to resubscribe at your new location. This is technically difficult, however.
An additional desirable feature is multihoming. Just as multiple mail servers can serve a single e-mail address, so it should be possible to have multiple social servers hosting a single social identity, with automatic failover if one becomes unavailable. This will help with censorship resistance.
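The failover logic is the same priority-ordered retry used for MX records. A sketch, where `fetch` stands in for whatever transport the network uses (the function names and error handling here are illustrative assumptions):

```python
def fetch_profile(identity: str, home_servers: list, fetch) -> dict:
    """Try each server hosting `identity` in priority order, like MX records.
    `fetch(server, identity)` is any callable that raises on failure,
    e.g. a wrapper around an HTTP GET."""
    errors = []
    for server in home_servers:
        try:
            return fetch(server, identity)     # first reachable home wins
        except Exception as exc:
            errors.append((server, exc))       # remember why, then fail over
    raise ConnectionError(f"all homes for {identity} failed: {errors}")
```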
To finish (for now), here’s what I consider the bare minimum functionality for any system worth using:
Ability to delete spam/trolling/abusive replies to my postings
Ability to limit who can reply to my postings
Ability to set an expiry date for my postings
Multiple identity support
That may not seem like a very long list, but I’ve yet to find any system that can check all the boxes.