So here’s the problem with misinformation on social networks, especially the big ones: the downside to getting caught creating misinformation is small, and the downside to spreading it is even smaller.
Basically, in the name of low-friction engagement, it’s incredibly easy to join in promoting untrue, abusive, and dangerous content.
Even people with the best of intentions have retweeted or liked posts that confirm a narrative they care about, only to find out later that the content was fake, threatening, or something else insidious, like a racist dog whistle.
In the best cases, the person who did the retweeting apologizes; in the worst cases, they reach for the First Amendment or the I-didn’t-know defense and make a big stink about snowflakes or some such.
In the latter case, regardless of the original message, discord has been sown. But even in the better case, the damage has been done: the false or hateful message has been spread, and even endorsed, while the retraction gets lost in the public consciousness.
What if there were some sort of chain of responsibility? What if a first strike that cost the originator their access for a week also cost retweeters their audiences for three days, and people who endorsed with a “like” their audiences for a day?
What if companies like FB and Twitter were required to fact-check content once it received enough endorsement, and people who regularly posted clean content gained significantly more endorsement clout, while people who merely retweeted or liked clean content gained a smaller boost?
And what if, on top of a potential time-out, you stood to lose some or all of that endorsement clout whenever one of your posts or endorsements turned out to be false or hateful?
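To make the idea a little more concrete, here’s a minimal sketch of the kind of bookkeeping that chain of responsibility implies. Every number in it, the suspension lengths, the clout weights, the starting score, is hypothetical and made up for illustration; nothing here reflects how Twitter, Facebook, or any real platform actually works.

```python
from dataclasses import dataclass

# Hypothetical penalty schedule: the originator of flagged content is
# suspended longest, retweeters less, likers least.
SUSPENSION_DAYS = {"originate": 7, "retweet": 3, "like": 1}

# Hypothetical clout adjustments: clean posts earn more than clean
# endorsements, and flagged content costs everyone in the chain.
CLOUT_DELTA = {
    ("originate", "clean"): +3.0,
    ("retweet", "clean"): +1.0,
    ("like", "clean"): +0.5,
    ("originate", "flagged"): -5.0,
    ("retweet", "flagged"): -2.0,
    ("like", "flagged"): -1.0,
}

@dataclass
class User:
    name: str
    clout: float = 10.0        # starting endorsement clout (arbitrary)
    suspended_days: int = 0    # remaining time-out, if any

    def apply(self, action: str, verdict: str) -> None:
        """Adjust clout and, for flagged content, apply a time-out."""
        self.clout = max(0.0, self.clout + CLOUT_DELTA[(action, verdict)])
        if verdict == "flagged":
            self.suspended_days = max(self.suspended_days, SUSPENSION_DAYS[action])

def resolve(chain: list[tuple[User, str]], verdict: str) -> None:
    """Apply a fact-check verdict to everyone in the chain of responsibility."""
    for user, action in chain:
        user.apply(action, verdict)

# Example: one flagged post, propagated by a retweeter and a liker.
alice, bob, carol = User("alice"), User("bob"), User("carol")
resolve([(alice, "originate"), (bob, "retweet"), (carol, "like")], "flagged")
for u in (alice, bob, carol):
    print(u.name, u.clout, u.suspended_days)
```

The specific numbers don’t matter; the point is that responsibility, and the cost of being wrong, flows down the whole chain instead of stopping at the original poster.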
Of course, any system of rules can be gamed, but if we don’t at least try, then big social networks are just going to continue to be virtual dumpster fires.
I think this is possible. Who’s with me?