Facebook “Supreme Court” overrules company in 4 of its first 5 decisions | Ars Technica

As you can see, Facebook has to make decisions on a wide range of topics, from ethnic conflict to health information. Often, Facebook is forced to choose sides between deeply antagonistic groups—Democrats and Republicans, Armenians and Azerbaijanis, public health advocates and anti-vaxxers. One benefit of creating the Oversight Board is to give Facebook an external scapegoat for controversial decisions. Facebook likely referred its suspension of Donald Trump to the Oversight Board for exactly this reason.

Source: Facebook “Supreme Court” overrules company in 4 of its first 5 decisions | Ars Technica

This paints a picture of Facebook being very involved in picking what people can and cannot say about politics, and that’s a very disturbing picture to me. Before this article, I would have thought that they only stepped in on really egregious problems. I’m just not clear on why Facebook should get involved in any of the censorship cases listed here. Let the software automatically block the boobs, and then let people say whatever they want about politics.

The boobs thing really shows why they’re always complaining about needing more moderators, and how they couldn’t possibly staff up to handle the load. Software has been able to identify nudity effectively for many years now. There’s only a problem because they want to allow some nudity. On a platform shared by, effectively, everyone with internet access, there really doesn’t need to be any. Lord knows there’s enough elsewhere. So I don’t think this is something that they need to waste time and energy on.
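To be concrete about the “software can do this” claim, here’s a minimal sketch of what an automated nudity filter might look like using an off-the-shelf image classifier. The specific model name, its labels, and the 0.9 threshold are my assumptions for illustration, not anything Facebook has described; a real moderation pipeline would tune all of them.

```python
# Minimal sketch of an automated nudity filter, assuming a publicly available
# NSFW image classifier from Hugging Face. Model name, labels, and threshold
# are illustrative assumptions, not Facebook's actual pipeline.

from transformers import pipeline
from PIL import Image

# Load a pretrained classifier that scores images as "normal" vs. "nsfw".
classifier = pipeline("image-classification", model="Falconsai/nsfw_image_detection")

def should_block(image_path: str, threshold: float = 0.9) -> bool:
    """Return True if the model is confident the image contains nudity."""
    image = Image.open(image_path)
    results = classifier(image)  # e.g. [{"label": "nsfw", "score": 0.97}, ...]
    scores = {r["label"]: r["score"] for r in results}
    return scores.get("nsfw", 0.0) >= threshold

if __name__ == "__main__":
    # Hypothetical uploads; any image files would do.
    for path in ["upload_001.jpg", "upload_002.jpg"]:
        print(path, "blocked" if should_block(path) else "allowed")
```

The point isn’t that this exact script is production-ready; it’s that a blanket “no nudity” rule reduces to a threshold on a classifier, which is cheap. It’s the exceptions and judgment calls that require armies of human moderators.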

The problem extrapolates. They don’t want people to quote Nazis, but they want people to be able to criticize Donald Trump, which often means drawing parallels to exactly that kind of speech. They don’t want people to post videos of animal cruelty, but they want PETA to be able to post their sensational, graphic protests, which look real. Facebook hires thousands of people in impoverished countries to filter out the gore and the porn, but none of that needs to happen if you just let it all go. The software can do that automatically. The problem comes from trying to find some happy midpoint, as if that needed to happen. And there are countless stories about how degrading and depressing the job of being one of Facebook’s moderators is, which I won’t rehash here.

Things get real simple if you just pick one point of view. Instead, they’re playing the middle, and selecting what speech is “free,” what nudity is “tasteful,” and what gore is “fake.” So, yeah, if you’re going to employ people to censor things, you’re going to need a lot of people. I have trouble finding sympathy.

As if on cue:

Source: Content Moderation Case Study: Twitter Removes Account Of Human Rights Activist (2018) | Techdirt

Manzoor Ahmed Pashteen is a human rights activist in Pakistan, calling attention to unfair treatment of the Pashtun ethnic group, of which he is a member. In 2018, days after he led a rally in support of the Pashtun people in front of the Mochi Gate in Lahore, his Twitter account was suspended.

Decisions to be made by Twitter:

  • How do you distinguish human rights activists organizing protests from users trying to foment violence?
  • How do you weigh reports from governments against activists who criticize the governments making the reports?
  • How responsive should you be to users calling out suspensions they feel were unfair or mistaken?

We’re constantly being told that Section 230 of the Communications Decency Act is the one gold standard by which all speech on the internet is “allowed,” and how we can’t ever touch it. It says that companies cannot be held liable for the things that people post to their platforms. So why are Facebook and Twitter bothering to pick and choose what people can say at all? They are legally shielded from any problems. Just let people say whatever they want to say! If it turns out to be illegal, or defamatory, the people who posted those things can be sued by the affected parties. If people don’t like what’s being said, it can be ignored and routed around.

I find the whole thing completely disingenuous. Either you have protection, and are for “free speech,” or you don’t, and need to police your platforms. Facebook and Twitter are acting like they need to kick people off their platforms to avoid being sued, but they are not at risk of that. They’re throwing people off their platforms because enough people make noise about them. It’s become a popularity contest, and mob rule. There’s nothing genuine, legally binding, or ethical about it. That’s it. If some topic or person becomes untenable, they’re going to get the boot.

In the old days, the mob would boycott advertisers, like, say, the ones on Rush Limbaugh’s show. But you can’t do that on a platform like Facebook or Twitter, which use giant, shadowy advertising exchanges and closely guarded algorithms to show ads to people, so everyone gets a different view according to their profile. Even the advertisers have a hard time knowing how their ads are being served or whether they’re working! The people who would protest an advertiser would never know which ads show up most often on the pages of the people they object to, and Facebook and Twitter sure aren’t going to tell them. That’s the secret sauce, baby. They can’t know who to go after.

So these platforms are proactively de-platforming people, but I can’t see why. They have legal protection. They can’t be blackmailed by boycotts of advertisers. What’s the mechanism here? What’s the feedback loop? I suspect the answer would make me even more cynical than I already am.
