Kindness City Blog
26 Mar 2022

Social media without rage?

It is common knowledge that social media "algorithms" (or perhaps "heuristics", if you prefer) in 2022 seem to amplify rage. We, the users of these platforms, are not the customers of the companies that provide them. Instead, our attention is the product that social media companies sell to advertisers. To maximise the amount of attention we pay to the platform, the platforms are optimised for "engagement". The platform shows us things that will "get a reaction", which in practice often means making us angry or frightened enough that we feel compelled to respond immediately. Other folks have written about this sort of effect when they describe the rage economy, postjournalism, and so on.

Encouraging collaboration

What if we could build a social media system that encouraged bipartisan collaboration instead of conflict?

To do this, I think we need two things, both of which we may already have:

  1. Social network graphs that tell the system which "camp" each of us is in.
  2. Heuristics for distinguishing "agree" engagement from "disagree" engagement.

With these things, I think we ought to be able to build a social media feed which amplifies posts that get agreement from multiple camps, instead of amplifying posts based on raw "engagement" numbers.
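As a rough sketch of that ranking idea: suppose we already have a camp label for each reacting account and a +1/-1 agree/disagree signal per reaction (both hypothetical inputs, discussed later in this post). A score like the following would favour broad cross-camp agreement over raw volume:

```python
from collections import defaultdict

def cross_camp_score(reactions):
    """Score a post by agreement across camps, not raw engagement.

    `reactions` is a list of (camp, sentiment) pairs, where sentiment
    is +1 for "agree" and -1 for "disagree".  Both inputs are
    hypothetical stand-ins for signals a real platform would derive.
    """
    by_camp = defaultdict(list)
    for camp, sentiment in reactions:
        by_camp[camp].append(sentiment)
    if not by_camp:
        return 0.0
    # Net approval per camp, clamped to [0, 1] so one enthusiastic
    # camp can't outweigh broad agreement.
    approvals = [max(0.0, sum(s) / len(s)) for s in by_camp.values()]
    # Reward posts that *every* reacting camp approves of: take the
    # minimum approval across camps, scaled by how many camps reacted.
    return min(approvals) * len(approvals)
```

The key design choice here is the `min`: a post loved by one camp and hated by another scores zero, while a post modestly approved by three camps outscores both. A real feed would blend this with freshness and volume, but the intent is the same.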

Currently Twitter offers me one feed. Once this was a simple chronological feed of things my followees had tweeted or retweeted. Today, by default, it's sorted by an engagement heuristic. I would like a tab that allows me to easily switch from this view to a "global conversation" view and back.

In the "global conversation" view I still want to see mostly tweets, retweets, and likes from my followees – but I want them to be sorted according to the heuristic I described above. I want stuff that positively engages multiple camps to filter to the top.

My bet is that a lot of people will find this "global conversation" view to be a less toxic place than either of the current options, and will choose to spend more and more time there.

What about ad revenue?

I think that a "global conversation" view as I described above will drive more raw engagement than a pure chronological timeline. So far, I think there's no conflict with ad revenue.

I think that there are many short-term metrics which are likely to paint a "global conversation" view as driving less engagement, and hence as making less ad money, than a more common rage-inducing "social media algorithm". Here there is a conflict with ad revenue.

However, I think it ought to be possible to argue that you will get higher-quality engagement in the global conversation view than in the rage-inducing one. Ask your advertisers if they'd rather have me associate their ads with the rage-inducing political opinions of my enemies, or with the feeling that there's hope that together we can solve our problems.

I believe ads in the "global conversation" stream ought to be significantly more valuable than ones in the "rage" view.

Is it possible?

Recall that for this to work, we need:

  1. Social network graphs that tell the system which "camp" each of us is in.
  2. Heuristics for distinguishing "agree" engagement from "disagree" engagement.

For (1), every social network has a graph of who follows whom, or who is friends with whom. These graphs have clusters, and those clusters are "camps" as I have described them. We don't need to give names or labels to the camps, and we don't need to judge if any of them are "right" or "wrong" or "good" or "bad". We have observed that current social media platforms tend to polarise issues between these camps. Our goal is to encourage posts that unify them.
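To make the clustering idea concrete, here's a toy stand-in for what a real platform would do with its follow graph: label propagation, where every account starts in its own camp and repeatedly adopts the most common camp among its neighbours. (All names and the algorithm choice are illustrative; real platforms would use more sophisticated community detection at vastly larger scale.)

```python
import random

def find_camps(edges, rounds=10, seed=0):
    """Crude community detection by label propagation.

    `edges` is a list of (follower, followee) pairs; we treat the
    graph as undirected.  Returns a dict mapping each account to an
    arbitrary camp label -- the labels need no names or judgements,
    they only need to group accounts that cluster together.
    """
    rng = random.Random(seed)
    neighbours = {}
    for a, b in edges:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    # Every account starts as its own camp.
    camp = {node: node for node in neighbours}
    nodes = list(neighbours)
    for _ in range(rounds):
        rng.shuffle(nodes)
        for node in nodes:
            counts = {}
            for n in neighbours[node]:
                counts[camp[n]] = counts.get(camp[n], 0) + 1
            # Adopt the neighbourhood's most common label
            # (ties broken deterministically by label).
            camp[node] = max(counts, key=lambda c: (counts[c], c))
    return camp
```

On a graph with two tightly-knit follow clusters, this converges to two labels, which is all the feed heuristic above needs.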

For (2), we're on shakier ground. Platforms like YouTube, Reddit, and Slashdot have "upvote" and "downvote" buttons, which might work well if we trust people to use them honestly. On platforms that don't have simple buttons for "agree" and "disagree" or "like" and "dislike", we might be able to use keyword matching or ML techniques to get an approximation. I don't know how accurate an approximation we'd need to get the effect I'm hoping for.
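The keyword-matching fallback might look something like this minimal sketch. The word lists are entirely made up, and a real classifier would need to handle sarcasm, quoting, and languages other than English, which is exactly why this is the shakier half of the proposal:

```python
import re

# Hypothetical word lists -- purely illustrative.
AGREE_WORDS = {"agree", "agreed", "exactly", "yes", "right"}
DISAGREE_WORDS = {"disagree", "wrong", "nonsense", "no", "false"}

def classify_reply(text):
    """Return +1 ("agree"), -1 ("disagree"), or 0 (unknown)."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    agree = len(words & AGREE_WORDS)
    disagree = len(words & DISAGREE_WORDS)
    if agree > disagree:
        return 1
    if disagree > agree:
        return -1
    return 0
```

Even a noisy signal like this might be enough, since the feed heuristic aggregates over many reactions per camp; but that's a hope, not a result.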

Can it be hacked?

It's possible that if we adjusted the feed heuristics as I've described, people might start lying about their upvotes and downvotes in order to game the system. Gangs of angry people who want to encourage conflict might mob reasonable posts with thousands of downvotes in an attempt to silence them.

People might try to construct botnets of accounts that show up as "in a given camp", and then have those bots vote contrary to the collaborative interests of that camp.

I don't know if either of these attacks is likely to be cheap and effective enough to significantly damage the system as a whole. They would almost certainly happen, and some people's posts would almost certainly end up unfairly minimised or amplified as a result. As is usually the case with these sorts of power games, it seems likely that already-marginalised posters would suffer more than majority posters.

Even so, perhaps it'd still be better than what we have right now?

Tags: tech-industry ux media politics

There's no comments mechanism in this blog (yet?), but I welcome emails and tweets. If you choose to email me, you'll have to remove the .com from the end of my email address by hand.

You can also follow this blog with RSS.
