
Twitter Shadowbanning: Real or Imaginary?

Lee Phillips
January 10th, 2017

Note added April 15th, 2017: Scott Adams now says, in a comment about his own belief that there is evidence that he is shadowbanned, “anecdotal evidence isn’t real evidence because it can look identical to confirmation bias.” You don’t say?

Note added January 11th, 12:08pm: Since Scott Adams has reacted, rather petulantly, to the below, and since his fans are behaving normally (trying to break into my account, etc.), here is a summary for those without the time to read a few paragraphs: (1) shadowbanning might be real; (2) most suspected cases of shadowbanning are probably imaginary; (3) Mr. Adams is sure that he was shadowbanned, but offers no evidence; (4) I checked, during the supposed ban, and it appeared to be not happening; (5) It’s possible, nevertheless, that Twitter was suppressing Mr. Adams’ tweets somehow, but without real evidence, there’s no good reason for thinking this; (6) I like Scott Adams’ work and writings, even if he’s become somewhat of a tiresome illustration of the Dunning-Kruger effect lately.

“I was shadowbanned on Twitter!”

Once the complaint of a few far-right crackpots, this is now the refrain of a persistent chorus, one that seems to be growing in volume.

It recently gained some dubious credibility when Scott Adams, the cartoonist responsible for Dilbert, joined the party. At first he was skeptical that shadowbanning was even a thing, although his readers, he said, had alerted him that he was an intermittent victim. A few days later, he was confidently claiming that Twitter was indeed shadowbanning him, and that this was “well documented,” without displaying or linking to any of that documentation.

Shadowbanning, whether real or imaginary, comes in several flavors. It is more subtle than just plain banning, and is supposed to blunt the influence of those placed in the shadow. The most severe version would make the subject’s tweets simply fail to appear in anyone’s timeline except the subject’s own. This way, the subject of the shadowbanning keeps on tweeting, blissfully unaware that he or she is shouting into the void. This version is where shadowbanning gets its name: it is a venerable way to deal with trolls and other abusers of comment boxes on forums and websites. Since the troll isn’t actually banned outright, and has no idea anything has changed, he has no reason to create a new account. The bad netizen will only discover the shadowban (or “hell ban,” as it’s also called) on noticing that no one has taken the bait for a while and visiting the page incognito, to discover an infuriatingly peaceful conversation proceeding without his or her input.
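Here, for concreteness, is a minimal sketch in Python of how a forum backend might implement such a hell ban. The usernames, posts, and function names are all invented for illustration; real implementations will differ.

    # Hypothetical sketch of a forum-style shadowban (hell ban): posts by a
    # shadowbanned author are shown only to that author; everyone else sees
    # the thread with those posts silently removed.

    shadowbanned = {"troll_42"}   # hypothetical set of shadowbanned usernames

    posts = [
        {"author": "alice",    "text": "Nice article."},
        {"author": "troll_42", "text": "You are all sheep!"},
        {"author": "bob",      "text": "Agreed, good read."},
    ]

    def visible_posts(viewer, posts):
        """Return the posts a given viewer is allowed to see."""
        return [p for p in posts
                if p["author"] not in shadowbanned or p["author"] == viewer]

    print(len(visible_posts("troll_42", posts)))   # 3: the troll sees everything
    print(len(visible_posts("alice", posts)))      # 2: everyone else sees a quieter thread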

Even the true believers rarely accuse the Twitter Gods of this level of suppression, at least in recent times. More common is the accusation that Twitter is excluding the victim’s tweets from search results, dropping some tweets from the timelines of followers, or some combination. The idea is that Twitter’s draconian Liberal ownership is conspiring to reduce the overall influence of users who either are, or are perceived to be, conservative, alt-right, Trump supporters, etc. This is the Dilbert Man’s theory: that when his vacillating and ambiguous articles and tweets became outright endorsements of the Republican candidate, the Twitter ban-hammer came down. This theory requires one to believe, against all odds, that Twitter management: (1) reads his stuff; (2) can figure out what he’s trying to say; and (3) cares.

It’s easy to check whether an account is subject to all but the most subtle forms of theorized shadowbanning. Open a private browser tab and navigate to Twitter.com. Find the user’s timeline. If you can see it (if not, there is some more serious ban action going on), look through the user’s tweets for an unusual string. This would be a sequence of characters that occurs with low frequency; i.e., not “of the” but something like a misspelling or an unusual sequence of words. Copy that string and paste it into the Twitter search box. Put it within double quotation marks, and do not include the user’s handle: we don’t want to make this too easy. If the search results contain the tweets from whence you extracted the test string, the account is not shadowbanned in any of the usual purported ways. If you try this a few times and the results never appear, the account may be shadowbanned.
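If you want to automate part of this check, here is a small Python sketch. The https://twitter.com/search?q= address is Twitter’s ordinary web search; the phrase in the example is arbitrary. Open the printed URL in a private, logged-out browser window and look for the tweet in the results.

    # Build the exact-phrase Twitter search URL for the check described above.
    # Open the printed URL in a private (logged-out) browser window and see
    # whether the suspect account's tweet appears in the results.

    from urllib.parse import quote

    def search_url(snippet):
        """Return a Twitter web-search URL for an exact-phrase query."""
        return "https://twitter.com/search?q=" + quote('"' + snippet + '"')

    # Example: an unusual phrase copied from the suspect account's timeline.
    print(search_url("infuriatingly peaceful conversation"))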

I have tried this experiment on a couple dozen accounts whose owners were absolutely sure that they were currently shadowbanned, and in each case the supposedly banned user’s tweets popped up at the top of the search results. I also tried it on many of the accounts listed on a website, frequently linked to by alt-right paranoid types, that supposedly tracks shadowbanned accounts: not a single one was shadowbanned. To date I have not found a single shadowbanned account. I probably don’t need to point out that Scott Adams’ account is definitely not shadowbanned (sorry, Mr. Adams).

When I confront users who fervently believe in their own shadowbannedness with the simple facts, they either say “Oh, good, I’m not banned any more” (no matter how recently they claimed to be shadowbanned) or simply direct a stream of abuse in my general direction.

At this point it’s easy to suspect that there is never a rational reason to believe in one’s own shadowbanning; that everyone making such a claim is paranoid, or just dumb. While one of those explanations certainly fits the majority of cases, someone who doesn’t understand how Twitter works behind the scenes may at times make observations that provide a rational reason to at least suspect shadowbanning.

First, Twitter never claims that your timeline is simply a transcript of tweets from accounts that you follow, as they are generated. They are quite clear that their algorithms attempt to present a more “relevant” flow of information, in some way that they don’t specify. You can opt out of some of this, but not all of it. So you may know that someone you follow has just spewed out a tweet, and notice that it did not appear in your timeline, at least immediately. This does not necessarily mean that your friend is shadowbanned. Twitter also points out that they very definitely sculpt search results to make them more useful, again with mysterious and proprietary algorithms.

In addition, the back end of Twitter depends on something called an “eventually consistent” database. This is an information storage and retrieval strategy that is used by services where quick response is important, but where data need not be perfectly consistent and complete at all times. This would not be appropriate, for example, for your bank: when you log in and transfer funds from your savings to your checking account, clicking the “transfer” button must subtract the amount from savings and add it to checking in a way that appears simultaneous. If something goes wrong, the bank must abort the transaction; it can never subtract your money from your savings account and, maybe some time later, add it to your checking account. This kind of reliable behavior is so important for a bank that it will sacrifice speed, or take the risk of failing to complete the transaction, rather than leave the customer’s accounts in an inconsistent state.
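Here is a minimal sketch of that all-or-nothing behavior in Python, using an SQLite transaction. The account names and balances are made up, and a real bank’s systems are of course vastly more elaborate, but the principle is the same: either both updates take effect, or neither does.

    # Minimal illustration of the all-or-nothing (atomic) transfer described
    # above, using SQLite: either both balance updates happen, or neither does.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                     [("savings", 1000), ("checking", 100)])
    conn.commit()

    def transfer(conn, src, dst, amount):
        """Move `amount` from src to dst atomically; roll back on any failure."""
        try:
            with conn:   # commits on success, rolls back on any exception
                conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                             (amount, src))
                conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                             (amount, dst))
        except sqlite3.Error:
            pass   # the rollback has left both balances untouched

    transfer(conn, "savings", "checking", 250)
    print(dict(conn.execute("SELECT name, balance FROM accounts")))
    # {'savings': 750, 'checking': 350} -- never a half-finished state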

Twitter’s approach prioritizes speed over perfect consistency. The data resides in storage facilities spread around the world, and, at any moment, the Twitter universe represented by the data in different locations may reflect a somewhat different reality. The data eventually becomes consistent as it is copied back and forth; but, in the meantime, the tweeting never stops. Twitter handles several hundred million tweets per day, rising to almost a billion during times of maximum tweetage. The bottom line is that users whose information happens to be streaming from different facilities may see different data: different tweet-streams and different search results.
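A toy simulation makes the point. This is not Twitter’s actual architecture, just two pretend replicas that receive the same tweet at different times; until they converge, a search against one of them can come up empty while the other finds the tweet.

    # Toy model of eventual consistency (not Twitter's actual system): two
    # replicas of the same tweet store receive a new tweet at different times,
    # so for a while the answer depends on which replica serves your request.

    import random

    class Replica:
        def __init__(self, name):
            self.name = name
            self.tweets = []

        def search(self, phrase):
            return [t for t in self.tweets if phrase in t]

    replicas = [Replica("us-east"), Replica("eu-west")]

    def post_tweet(text):
        """The write lands on one replica right away; the rest catch up later."""
        random.choice(replicas).tweets.append(text)

    def replicate(text):
        """Eventually, every replica holds the same data."""
        for r in replicas:
            if text not in r.tweets:
                r.tweets.append(text)

    tweet = "an unusual sequence of words"
    post_tweet(tweet)
    for r in replicas:
        print(r.name, r.search(tweet))   # one replica may return [], the other a hit
    replicate(tweet)
    for r in replicas:
        print(r.name, r.search(tweet))   # after convergence, both agree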

Of course it is still possible that Twitter is manipulating its data for political purposes. However, they have a strong disincentive for doing so. The law in the U.S., the E.U., and other regions provides some immunity from liability for services that deal in user-generated content. Without these protections, services like Twitter would not survive for very long, as they would be co-defendants in every case of defamation or copyright violation brought by anyone claiming harm caused by content uploaded by a user. These protections come with conditions, naturally. They are still being ironed out, are subject to some interpretation, and vary by region, but the general thrust is that the more editorial control a service exerts over its content, moving it further from being a mere carrier of data created by its users, the greater the chance that it will be held responsible for that content. Shadowbanning and similar shenanigans would be a perfect example of editorial interference in content; it would be risky behavior, something that Twitter’s lawyers would probably advise against.

The opaqueness of Twitter’s algorithms, and the natural results of its data storage technology, may lead to behavior that a user, primed to expect the dreaded shadowban (and, perhaps, eager to see it as evidence of his or her importance), will be sure is clear evidence of just that. Most likely, it’s just the Twitter machinery humming along in its usual manner. Either way, try not to get too excited. This is not a free speech issue. Twitter is not the government, and there is no plot to keep your 23 followers from seeing how brilliantly you retweeted the latest Hillary meme.

