Elon, don't break it to fix it
Three ideas that can help preserve free speech on X while also addressing toxic users
When he bought Twitter in 2022, Elon Musk became a free speech titan. His decision to disclose, through journalists like Matt Taibbi and Michael Shellenberger, the censorship the company carried out under his predecessors and its collusion with government bureaucrats like FBI Special Agent Elvis Chan was an essential step that reversed the trend of Big Tech stifling free expression on the internet. There is a lot to praise about Musk's work at Tesla, SpaceX, and Starlink, and when I was an engineering co-op ten years ago he was the most admired CEO among our cohort.
That's why it's important to be truthful today and say that what Musk is doing by demonetising certain X accounts and removing their verification badges, seemingly based on their viewpoints, is a massive step back for freedom of speech. It is true that X is now a wholly private platform, more so than it was under Jack Dorsey and Parag Agrawal, because Musk bought it lock, stock, and barrel. But unlike them, he made a personal commitment to end censorship on the platform. In December 2023 he allowed X users to vote on whether Alex Jones would be reinstated, and then abided by the result. One year later, in the midst of a bitter dispute over the H-1B visa issue, he has taken the opposite approach by suspending the verification badges and attendant privileges of a number of anti-H-1B commentators.
While I straddle the fence on the H-1B issue itself, some of the suspended creators are people I respect, such as Owen Shroyer and Gavin Wax. On the other hand, many, like Sulaiman Ahmed and Stew Peters, are absolute lowlifes. It doesn't matter: I support the free speech of both, whether I like them or not. Cancelling verifications is a high-handed way to deal with a problem that has simpler solutions, three of which I'll outline here.
The Twitter "dunce cap" - Rather than remove badges these users paid for, there should be disincentives that force badly behaved users to course-correct. One idea is a dunce cap badge for sharing information that proves to be false. For non-paying users the dunce cap would simply be applied at the first tier; for paying subscribers, however, there should be escalating tiers that include financial penalties for repeatedly sharing stories that turn out to be false. Take the claim from "Syrian Girl" on Dec. 5 that pro-Assad paratroopers had parachuted into the city of Hama and were trapping rebels in a cauldron. This of course did not happen, and the city fell to Islamist rebels the same day she posted the tweet. It is important that this feature include multiple layers of review to prevent it from being abused.
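To make the tiered idea concrete, here is a minimal sketch of how such a penalty schedule might work. Everything here is hypothetical: the `Account` type, the `apply_dunce_cap` function, and the dollar amounts are illustrative assumptions, not any real X feature or API.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    is_paying: bool
    strikes: int = 0  # confirmed-false posts so far

# Hypothetical penalty schedule: non-paying users only get the badge;
# paying subscribers face escalating fines after repeated false claims.
PAID_PENALTIES = [0, 25, 100, 500]  # dollars per strike tier (illustrative)

def apply_dunce_cap(account: Account) -> dict:
    """Record a confirmed-false post and return the resulting sanction."""
    account.strikes += 1
    if not account.is_paying:
        # First-tier treatment only: the badge, no financial penalty.
        return {"badge": "dunce cap", "fine": 0}
    tier = min(account.strikes, len(PAID_PENALTIES) - 1)
    return {"badge": "dunce cap", "fine": PAID_PENALTIES[tier]}
```

For example, a paying subscriber's first strike would draw a small fine, and repeat offenses would escalate, while a free account would only carry the badge. The multiple review layers mentioned above would sit in front of `apply_dunce_cap`, so that a strike is recorded only after a claim is actually confirmed false.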
Self-constructed filters - If there is to be censorship on X, it should come from users choosing for themselves what they do or don't want to see. The site apparently already lets users create customized word filters. Musk should emphasize this feature rather than embrace any top-down censorship of language or opinions.
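The mechanism is simple enough to sketch in a few lines. This is a purely illustrative mock of user-side muting, not X's actual implementation; the function names are my own.

```python
def build_filter(muted_words: list[str]):
    """Return a predicate that flags any post containing a muted word."""
    lowered = [w.lower() for w in muted_words]

    def should_hide(post_text: str) -> bool:
        text = post_text.lower()
        return any(word in text for word in lowered)

    return should_hide

# A user mutes topics they are tired of seeing; their timeline is
# filtered client-side, with no account-level punishment involved.
hide = build_filter(["h1b", "visa"])
timeline = ["Great launch today", "Hot take on the H1B debate"]
visible = [post for post in timeline if not hide(post)]
```

The key property is that the filter lives with the reader, not the platform: two users with different mute lists see different timelines, and nobody's account is demonetised or de-badged.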
Affinity groups - Just as there are different aisles in a supermarket and different tables in a café, X should strive to create interest areas or groups where users can go for topical conversations. These could have their own internal rules and moderation, letting members decide who does or does not warrant membership. My experience with Facebook groups was that one person would often domineer insufferably over a group and enforce the rules in a self-serving way. A lighter-weight version would let people publish a post with a designated focus theme; replies with off-topic content would then be an easier call to remove from the thread.
This doesn't mean that X or other platforms should turn a blind eye to the real dangers some of their users pose to the rest of society. Recently in Beverly, Massachusetts, a so-called "groyper" was arrested for possessing several illegal weapons and making threats against Jewish religious institutions. Permitting free speech gives the authorities more time to focus on threats, intimidation, and people with real violent intent, which have always been the real problem, not "hate speech" or "disinformation," which are regrettable but natural parts of the free speech ecosystem.