Twitter Shadowbanning: Real or Imaginary?
Lee Phillips
January 10th, 2017
Note added April 15th, 2017: Scott Adams now says, in a comment about his own belief that there is evidence that he is shadowbanned, “anecdotal evidence isn’t real evidence because it can look identical to confirmation bias.” You don’t say?

Note added January 11th, 12:08pm: Since Scott Adams has reacted, rather petulantly, to the below, and since his fans are behaving normally (trying to break into my account, etc.), here is a summary for those without the time to read a few paragraphs: (1) shadowbanning might be real; (2) most suspected cases of shadowbanning are probably imaginary; (3) Mr. Adams is sure that he was shadowbanned, but offers no evidence; (4) I checked, during the supposed ban, and it appeared not to be happening; (5) it’s possible, nevertheless, that Twitter was suppressing Mr. Adams’ tweets somehow, but without real evidence, there’s no good reason for thinking this; (6) I like Scott Adams’ work and writings, even if he’s become something of a tiresome illustration of the Dunning-Kruger effect lately.

“I was shadowbanned on Twitter!”

Once the complaint of a few far-right crackpots, this is now the refrain of a persistent chorus, one that seems to be growing in volume.

It recently gained some dubious credibility when Scott Adams, the cartoonist responsible for Dilbert, joined the party. At first he was skeptical that shadowbanning was even a thing, even as his readers, he said, alerted him that he was an intermittent victim. A few days later, he was confidently claiming that Twitter was indeed shadowbanning him, and that this was “well documented,” without displaying or linking to any of this documentation.

Shadowbanning, whether real or imaginary, comes in several flavors. It is more subtle than just plain banning, and is supposed to blunt the influence of those placed in the shadow. The most severe version would be to make the subject’s tweets simply fail to appear in anyone’s timeline, except the subject’s own. This way, the subject of the shadowbanning keeps on tweeting, blissfully unaware that he or she is shouting into the void. This version is where shadowbanning gets its name: it is a venerable way to deal with trolls and other abusers of comment boxes on forums and websites. If the troll isn’t actually banned outright, he won’t have any reason to create a new account. The bad netizen will only discover the shadowban (or “hell ban,” as it’s also called) if she or he notices that no one’s taken the bait for a while, and visits the page incognito, to discover an infuriatingly peaceful conversation proceeding without his or her input.

Even the true believers rarely accuse the Twitter Gods of this level of suppression, at least in recent times. More common is the accusation that Twitter is excluding the victim’s tweets from search results, dropping some tweets from the timelines of followers, or some combination. The idea is that Twitter’s draconian Liberal ownership is conspiring to reduce the overall influence of users who either are, or are perceived to be, conservative, alt-right, Trump supporters, etc. This is the Dilbert Man’s theory: that when his vacillating and ambiguous articles and tweets became outright endorsements of the Republican candidate, the Twitter ban-hammer came down. This theory requires one to believe, against all odds, that Twitter management: (1) reads his stuff; (2) can figure out what he’s trying to say; and (3) cares.

It’s easy to check whether an account is subject to all but the most subtle forms of theorized shadowbanning. Open a private browser tab and navigate to Twitter.com. Find the user’s timeline. If you can see it (if not, there is some more serious ban action going on), look through the user’s tweets for an unusual string. This would be a sequence of characters that occurs with low frequency; i.e., not “of the” but something like a misspelling or an unusual sequence of words. Copy that string and paste it into the Twitter search box. Put it within double quotation marks, and do not include the user’s handle: we don’t want to make this too easy. If the search results contain the tweets from whence you extracted the test string, the account is not shadowbanned in any of the usual purported ways. If you try this a few times and the results never appear, the account may be shadowbanned.
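The logic of this manual test can be sketched in a few lines of code. This is a toy model operating on lists of strings, not anything that talks to Twitter; the function names and the three-word-phrase heuristic are my own invention, standing in for eyeballing an unusual string and pasting it into the search box.

```python
import re
from collections import Counter

def normalize(text):
    """Lowercase and strip punctuation, so phrase matching is forgiving."""
    return " ".join(re.findall(r"\w+", text.lower()))

def rarest_phrase(users_tweets, n=3):
    """Pick the least common n-word sequence from the user's tweets --
    a stand-in for spotting an 'unusual string' like a misspelling."""
    counts = Counter()
    for t in users_tweets:
        words = normalize(t).split()
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return min(counts, key=counts.get)

def visible_in_search(phrase, search_results):
    """Quoted-phrase search over whatever tweets search actually returns.
    If the user's own tweets never match, suspect a shadowban."""
    return any(phrase in normalize(t) for t in search_results)

suspect = ["Covfefe is my favourite morning beverage", "Good morning, everyone"]
phrase = visible_test = rarest_phrase(suspect)
visible_in_search(phrase, suspect + ["unrelated chatter"])  # True: not shadowbanned
visible_in_search(phrase, ["unrelated chatter"])            # False: hidden from search
```

As in the manual version, one negative result proves little; only repeated failures over several distinctive phrases would justify suspicion.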

I have tried this experiment on a couple dozen accounts whose owners were absolutely sure that they were currently shadowbanned, and in each case the supposedly banned user’s tweets popped up at the top of the search results. I tried it on many of the accounts listed on a website that supposedly tracks shadowbanned accounts, one frequently linked to by alt-right paranoid types: not a single one was shadowbanned. To date I have not found a single shadowbanned account. I probably don’t need to point out that Scott Adams’ account is definitely not shadowbanned (sorry, Mr. Adams).

When I confront users who fervently believe in their shadowban-idness with the simple facts, they either say “Oh, good, I’m not banned any more,” (no matter how recent the claim of shadowbanning) or simply direct a stream of abuse in my general direction.

At this point it’s easy to suspect that there’s never a rational reason to believe in one’s shadowbanning; that everyone making such a claim is paranoid, or just dumb. While one of these explanations certainly fits the majority of cases, someone who doesn’t understand the way Twitter works, behind the scenes, may at times make observations that would provide a rational reason to at least suspect shadowbanning.

First, Twitter never claims that your timeline is simply a transcript of tweets from accounts that you follow, as they are generated. They are quite clear that their algorithms attempt to present a more “relevant” flow of information, in some way that they don’t specify. You can opt out of some of this, but not all of it. So you may know that someone you follow has just spewed out a tweet, and notice that it did not appear in your timeline, at least immediately. This does not necessarily mean that your friend is shadowbanned. Twitter also points out that they very definitely sculpt search results to make them more useful, again with mysterious and proprietary algorithms.

In addition, the back end of Twitter depends on something called an “eventually consistent” database. This is an information storage and retrieval strategy that is used by services where quick response is important, but where data need not be perfectly consistent and complete at all times. This would not be appropriate, for example, for your bank: when you log in and transfer funds from your savings to your checking account, clicking the “transfer” button must subtract the amount from savings and add it to checking in a way that appears simultaneous. If something goes wrong, the bank must abort the transaction; it can never subtract your money from your savings account and, maybe some time later, add it to your checking account. This kind of reliable behavior is so important for a bank that it will sacrifice speed, or take the risk of failing to complete the transaction, rather than leave the customer’s accounts in an inconsistent state.
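The bank’s all-or-nothing requirement is exactly what database transactions provide. Here is a minimal sketch using Python’s built-in sqlite3 module (the account names and amounts are invented for illustration): if anything goes wrong mid-transfer, the transaction is rolled back and the accounts are never left in a half-updated state.

```python
import sqlite3

def transfer(conn, src, dst, amount):
    """Move funds atomically: either both updates happen, or neither."""
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        (bal,) = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                              (src,)).fetchone()
        if bal < 0:
            # raising inside the 'with' block undoes the subtraction above
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("savings", 100), ("checking", 0)])

transfer(conn, "savings", "checking", 60)       # succeeds: 40 / 60
try:
    transfer(conn, "savings", "checking", 60)   # would overdraw savings
except ValueError:
    pass  # the half-done subtraction was rolled back; accounts stay consistent
```

This is the guarantee a bank pays for with speed; an eventually consistent store deliberately relaxes it.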

Twitter’s approach prioritizes speed over perfect consistency. The data resides in storage facilities spread over the world, and, at any moment, the Twitter universe represented by the data in different locations may reflect a somewhat different reality. The data is copied back and forth and eventually becomes consistent; but, in the meantime, the tweeting never stops. Twitter handles several hundred million tweets per day, rising to almost a billion during times of maximum tweetage. The bottom line is that users whose information happens to be streaming from different facilities may see different data: different tweet-streams and different search results.
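A toy simulation makes the point, assuming nothing about Twitter’s real architecture (the class and method names here are invented): a write lands on one replica immediately and reaches the others only when replication catches up, so a search served by a lagging replica comes up empty, which looks exactly like a shadowban to the user on the wrong end of it.

```python
class Cluster:
    """Eventually consistent toy store: a write lands on one replica
    immediately and reaches the others only when replicate() runs."""
    def __init__(self, n_replicas):
        self.replicas = [[] for _ in range(n_replicas)]
        self.pending = []  # (tweet, indices of replicas still missing it)

    def post(self, tweet, via=0):
        """Accept the write immediately on one replica; sync the rest later."""
        self.replicas[via].append(tweet)
        others = [i for i in range(len(self.replicas)) if i != via]
        self.pending.append((tweet, others))

    def replicate(self):
        """Propagate pending writes; afterwards all replicas agree."""
        for tweet, missing in self.pending:
            for i in missing:
                self.replicas[i].append(tweet)
        self.pending.clear()

    def search(self, term, via):
        """A search served by whichever replica happens to be closest."""
        return [t for t in self.replicas[via] if term in t]

cluster = Cluster(2)
cluster.post("my unusual covfefe tweet", via=0)
cluster.search("covfefe", via=1)   # []: looks just like a shadowban
cluster.replicate()
cluster.search("covfefe", via=1)   # the tweet appears after all
```

Real systems replicate continuously rather than on demand, but the observable symptom is the same: two users, two replicas, two temporarily different answers.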

Of course it is still possible that Twitter is manipulating its data for political purposes. However, they have a strong disincentive for doing so. The law in the U.S., the E.U., and other regions provides some immunity from liability for services that deal in user-generated content. Without these protections, services like Twitter would not survive for very long, as they would be co-defendants in every case of defamation or copyright violation brought by anyone claiming harm caused by the contents uploaded by a user. These protections come with some conditions, naturally. They are still being ironed out, are subject to some interpretation, and vary by region, but the general thrust is that the more editorial control a service exerts over its content, the further it moves from being a mere carrier of data created by its users, and the greater the chance that it will be held responsible for that content. Shadowbanning and similar shenanigans would be a perfect example of editorial interference in content; it would be risky behavior, something that Twitter’s lawyers would probably advise against.

The opaqueness of Twitter’s algorithms, and the natural results of its data storage technology, may lead to behavior that a user, primed to expect the dreaded shadowban (and, perhaps, eager to see it as evidence of his or her importance), will be sure is clear evidence of just that. Most likely, it’s just the Twitter machinery humming along in its usual manner. Either way, try not to get too excited. This is not a free speech issue. Twitter is not the government, and there is no plot to keep your 23 followers from seeing how brilliantly you retweeted the latest Hillary meme.


Comments are handled through email. Please send mail to _tsb__comment@lee-phillips.org if you would like me to include it here. I will never expose your email address. Let me know if you want me to hide your name, as well.

The below is typical of the comments I’ve been getting on this article. I’ve not wasted space with them here, but I thought I should include one so my readers get the idea.

This commenter berates me because he knows that shadowbanning is real, because he believes it happened to him. He thinks, therefore, that I should delete the article. He also accuses me of receiving payment, from some outfit that I’ve never heard of, to destroy freedom.

This is all in response to an article that explicitly states that, for all I know, shadowbanning may be real and could be happening to people all the time.

Finally, for the record, I like free speech and do not want to destroy freedom. Also, I’m not a shill.

Date: Mon, 27 Mar 2017 10:05:26

From: Dalamar

Subject: You seriously believe the crap you write?

You should delete that article. Shadow bans are real. I had to make a new account because my tweets replies and especially hashtags were being hidden unless you explicitly visited my profile or the tweet was direct. All I did was point out Hillary's reputation and click like on a bunch of right-leaning posts.

I made a new account, and now people can actually see my posts. I can search for my posts while logged out of the new account, and now they actually appear.

Then again who am I kidding - you're probably one of the paid shills coming from ShareBlue/ShariaBlue who want to destroy freedom and free speech.

Date: Mon, 8 May 2017

From: Maarten Schenk

Subject: Shadowbanning

I can confirm it is real and I've seen it in action but always as a consequence of spam or bot like behaviour on an account.

Look up the currently trending hashtags on Twitter. Make a few tweets on several unconnected ones in a short period of time.

Or pick one and tweet the same URL at it repeatedly in a few minutes.

Suddenly your tweets won't show up in the search results anymore.

By experience I know it seems to take between 24 and 48 hours for the ban to go away. Twitter rarely speaks about it (the only mention I know of is at the end of this article: https://support.twitter.com/articles/18311) because if more spammers knew about it the measures wouldn't be effective anymore.

Wouldn't surprise me if many shadowbanned Trump supporters just happened to be live tweeting a Trump event in a spammy way, used too many hashtags to #MAGA or unwittingly engaged in other behaviour that triggered the spam filter.

Kind regards, Maarten Schenk

Interesting theory. I'm certainly not going to try that with my own account. It might be interesting to make a test account just to experiment, though.

I know that Twitter sometimes tells a user that his account will be turned off for a limited time due to a violation of terms, and that period is something like 24 to 48 hours.
