Section 230 of the Communications Decency Act | Electronic Frontier Foundation

Tucked inside the Communications Decency Act (CDA) of 1996 is one of the most valuable tools for protecting freedom of expression and innovation on the Internet: Section 230. This comes somewhat as a surprise, since the original purpose of the legislation was to restrict free speech on the Internet. The Internet community as a whole objected strongly to the Communications Decency Act, and with EFF’s help, the anti-free speech provisions were struck down by the Supreme Court. But thankfully, CDA 230 remains and in the years since has far outshone the rest of the law.

Source: Section 230 of the Communications Decency Act | Electronic Frontier Foundation

I just read a TechDirt article condemning CBS’ 60 Minutes for disinformation regarding Section 230, which led me to the EFF’s page and infographic.

I respect the EFF immensely, but I remain unconvinced.

The EFF claims that if we didn’t have Section 230, places like Reddit, Facebook, and Twitter would effectively be sued out of existence. Or, even if they don’t get sued out of existence, they’ll have to hire an army of people to police the content on their site, the costs of which will drive them out of existence, or which they will pass on to users.

I don’t see what’s so valuable about Reddit, Facebook, or Twitter that these places should be protected like a national treasure. All three are proof positive that allowing every person to virtually open their window and shout their opinions into the virtual street is worth exactly what everyone is paying for the privilege: nothing. It’s just a lot of noise, invective, and ad hominem. And if that were the extent of the societal damage, that would be enough. But all of this noise has fundamentally changed how news organizations like 60 Minutes work. Proper journalism is all but gone. In order to compete, it’s ALL just noise now.

The EFF compares a repeal of Section 230 to government-protecting laws in Thailand or Turkey, but this is every bit as much disinformation as TechDirt claims 60 Minutes is promulgating. Repealing Section 230 would not repeal the First Amendment. People in this country could still say whatever they wanted to about the government, or anything else. Repealing 230 would just hold them personally accountable for it. And I struggle to understand how anyone — given 20 years of ubiquitous internet access and free platforms — can conclude that anonymity and places to scrawl what is effectively digital graffiti have led to some sort of new social utopia. The fabric of society has never been more threadbare, and people shouting at each other, pushing disinformation, and mistreating others online 24×7 is continuing to make the situation worse.

Platforms are being used against us by a variety of bad actors. The companies themselves are using our information against us to manipulate at least our buying behavior, and selling our activity to anyone who wants to buy it. There was some amount of alarm raised when it was discovered that AT&T tapped the overseas fiber optic cables for the NSA, in gross and blatant violation of the Fourth Amendment, but once discovered, Congress just passed a law to make it legal, retroactively. Now the NSA and FBI don’t need to track us anymore. Literally every company in America that has a web site is helping to collate literally everything we do into a dossier that gets amalgamated and traded by 3rd-party information brokers. Our cell companies and ISPs merge location tracking into the mix, and the government picks this information up for pennies on the dollar compared to what it would cost to collect it themselves.

I don’t like this situation. I think it should stop. I think anything that would put a dent in Facebook, Twitter, and Reddit being able to collate and track everything anyone does on the internet, and sell it to anyone with a checkbook, needs to go away. If repealing Section 230 forces these companies out of business, I say, “Good.” They want to tell me that the costs to deal with content moderation in a Section 230-less world would put them out of business. I call BS.

If Facebook and YouTube can implement real-time scanning of all video being uploaded to their sites, and block or de-monetize anything containing a copyrighted song within seconds, they can write software to scan uploaded content for offensive content too. Will it catch everything? Of course not, but it will get the load down to the point where humans can deal with it.
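The winnowing described above — automation first, humans only for borderline cases — can be sketched as a simple triage function. Everything here is hypothetical: the term list, the thresholds, and the toy keyword scorer are placeholders standing in for the trained classifiers a real platform would use, not any company’s actual pipeline.

```python
from dataclasses import dataclass

# Placeholder term list; a real system would use an ML classifier.
FLAGGED_TERMS = {"threat", "gore"}

@dataclass
class Decision:
    post_id: int
    action: str  # "block", "review", or "allow"

def score(text: str) -> float:
    """Toy stand-in for a trained model: fraction of flagged terms present."""
    words = set(text.lower().split())
    return len(words & FLAGGED_TERMS) / len(FLAGGED_TERMS)

def triage(post_id: int, text: str,
           block_at: float = 0.9, review_at: float = 0.4) -> Decision:
    """Auto-handle the clear cases; queue only borderline ones for humans."""
    s = score(text)
    if s >= block_at:
        return Decision(post_id, "block")
    if s >= review_at:
        return Decision(post_id, "review")
    return Decision(post_id, "allow")
```

The point of the sketch is the shape of the pipeline, not the scoring: most content is dispatched automatically at the two thresholds, so the human moderation load shrinks to the "review" bucket.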

There are countless stories of how Facebook employs a small army of content moderators to look into uploaded content, how it pays them very little, and how scanning the lower bounds of human depravity is about as grinding a job as there is. But if they can create filters for pornographic content, they can create filters for gore and violence, and, again, stop 90% of it before it ever gets posted.

Don’t tell me it’s impossible. That’s simply not true. It would just cost more. And, again, if it costs so much that it puts them out of business? Well, too bad. If the holy religion of Capitalism says they can’t sustain the business while they make the effort to keep the garbage off their platforms, then I guess the all-powerful force of The Market will have spoken. The world would be better off without those platforms.

I remember an internet that was made of more than 5 web sites, which all just repost content from each other. It was pretty great. People would still be free to host a site, and put whatever they wanted to on it. It couldn’t be any easier, these days, to rent a WordPress site and post whatever nonsense you want, like I’m doing right here. You could even still be anonymous if you want. But your site would be responsible for what gets posted. And, if it’s garbage, or it breaks the law, you’re going to get blocked or taken down. As so many people want to point out in discussions of being downvoted for unpopular opinions, the First Amendment doesn’t protect you from being a jerk.

Facebook, Twitter, Reddit, Imgur, and Google are all being gamed. As the last two Presidential elections have shown, world powers are influencing the content on these sites, and manipulating our national political discourse. This needs to stop. It seems to me that repealing Section 230 would cause those platforms to get serious about being transparent about where that content comes from, and be held accountable for it. Again, don’t tell me that they can’t. They just don’t want to spend the money to do so. In fact, they’re making money on the spread of such propaganda. Tell me why Americans should put up with these mega-companies making billions providing a platform to be used against us politically? Not just allowing it, but being financially incentivized into providing it? It doesn’t make any sense to me.

In summary, I don’t see how repealing Section 230 hurts any of the scenarios that folks like the EFF say that it does, and it would seem to hold all the right people accountable for the absolute disgrace that social media has become.

Liberals and Conservatives Are Both Totally Wrong about Platform Immunity | by Tim Wu | Medium

Everyone is, in short, currently asking for the wrong thing. Which makes it worth asking: Why?

One reason is that this area is confusing, and the idea of making tech “responsible” does sound good. There are, as I discuss below, ways in which they should be. Also, as described below, the mere threat of 230 repeal serves its own purposes. But I think, at its most cynical, the repeal 230 campaign may just be about inflicting damage. Repealing 230 would inflict pain, through private litigation, not just on big tech, but the entire tech sector.

We don’t like you; we want you to suffer. Very 2020.

Source: Liberals and Conservatives Are Both Totally Wrong about Platform Immunity | by Tim Wu | Medium

I’m not convinced by his arguments, but I can’t say his final conclusion doesn’t have a big part in my thinking about the issue.

Facebook “Supreme Court” overrules company in 4 of its first 5 decisions | Ars Technica

As you can see, Facebook has to make decisions on a wide range of topics, from ethnic conflict to health information. Often, Facebook is forced to choose sides between deeply antagonistic groups—Democrats and Republicans, Armenians and Azerbaijanis, public health advocates and anti-vaxxers. One benefit of creating the Oversight Board is to give Facebook an external scapegoat for controversial decisions. Facebook likely referred its suspension of Donald Trump to the Oversight Board for exactly this reason.

Source: Facebook “Supreme Court” overrules company in 4 of its first 5 decisions | Ars Technica

This paints a picture of Facebook being very involved in picking what people can and cannot say about politics, and that’s a very disturbing picture to me. Before this article, I would have thought that they only stepped in on really egregious problems. I’m just not clear why Facebook should get involved in any of the censorings listed here. Let the software automatically block the boobs, and then let people say whatever they want about politics.

The boobs thing really shows why they’re always complaining about needing moderators, and why they couldn’t possibly staff up to handle the load. Software has been able to effectively identify nudity for many years now. There’s only a problem because they want to allow some nudity. On a platform shared by, effectively, everyone with internet access, there really doesn’t need to be any. Lord knows there’s enough elsewhere. So I don’t think this is something that they need to waste time and energy on.

The problem extrapolates. They don’t want people to quote Nazis, but they want people to be able to criticize Donald Trump, which oftentimes warrants drawing parallels to exactly that kind of speech. They don’t want people to post videos of animal cruelty, but they want PETA to be able to post their sensational, graphical protests, which look real. Facebook hires thousands of people in impoverished countries to filter out the gore and the porn, but none of that needs to happen if you just let it all go. The software can do that automatically. The problem is trying to find some happy mid-point, as if that needed to happen. And there are countless stories about how degrading and depressing the job of being one of Facebook’s moderators is, and I won’t rehash them here.

Things get real simple if you just pick one point of view. Instead, they’re playing the middle, and selecting what speech is “free,” what nudity is “tasteful,” and what gore is “fake.” So, yeah, if you’re going to employ people to censor things, you’re going to need a lot of people. I have trouble finding sympathy.

As if on cue:

Source: Content Moderation Case Study: Twitter Removes Account Of Human Rights Activist (2018) | Techdirt

Manzoor Ahmed Pashteen is a human rights activist in Pakistan, calling attention to unfair treatment of the Pashtun ethnic group, of which he is a member. In 2018, days after he led a rally in support of the Pashtun people in front of the Mochi Gate in Lahore, his Twitter account was suspended.

Decisions to be made by Twitter:

  • How do you distinguish human rights activists organizing protests from users trying to foment violence?
  • How do you weigh reports from governments against activists who criticize the governments making the reports?
  • How responsive should you be to users calling out suspensions they feel were unfair or mistaken?

We’re constantly being told that Section 230 of the Communications Decency Act is the one, gold standard by which all speech on the internet is “allowed,” and how we can’t ever touch it. It says that companies cannot be held liable for the things that people post to their platforms. So why are Facebook and Twitter bothering to pick and choose what people can say at all? They are legally shielded from any problems. Just let people say whatever they want to say! If it turns out to be illegal, or slanderous, the people who posted those things can be sued by the affected parties. If people don’t like what’s being said, it can be ignored and routed around.

I find the whole thing completely disingenuous. Either you have protection, and are for “free speech,” or you don’t, and need to police your platforms. Facebook and Twitter are acting like they need to kick people off their platforms to avoid being sued, but they are not at risk of that. They’re throwing people off their platforms because enough people make noise about them. It’s become a popularity contest, and mob rule. There’s nothing genuine, legally-binding, or ethical about it. That’s it. If some topic or person becomes untenable, they’re going to get the boot.

In the old days, the mob would boycott advertisers, like, say, the ones on Rush Limbaugh’s show. But you can’t do that on a platform like Facebook or Twitter, which use giant, shadowy advertising exchanges and closely-guarded algorithms to show ads to people, and everyone gets a different view, according to their profile. Even the advertisers have a hard time knowing how their ads are served or whether they’re working! The people who would protest an advertiser would never know which ads show up most often on the pages of the people they don’t like, and Facebook and Twitter sure aren’t going to tell them. That’s the secret sauce, baby. They can’t know who to go after.

So these platforms are proactively de-platforming people, but I can’t see why. They have legal protection. They can’t be blackmailed by boycotts of advertisers. What’s the mechanism here? What’s the feedback loop? I suspect the answer would make me even more cynical than I already am.

Social justice groups warn Biden against throwing out Section 230 – The Verge

“Section 230 is a foundational law for free expression and human rights when it comes to digital speech,” the letter says. The law protects websites and apps from being sued over user-generated content — making it safer to operate social networks, comment sections, or hosting services. “Overly broad changes to Section 230 could disproportionately harm and silence marginalized people, whose voices have been historically ignored by mainstream press outlets.”

Source: Social justice groups warn Biden against throwing out Section 230 – The Verge

First of all, no it isn’t. It’s a “foundational” law protecting corporations and their precious profits. Don’t pretend that it’s about anything other than the almighty dollar, and Capitalism. Any benefit to people and free speech is accidental. In fact, I recently read that many people argued exactly the opposite of this when it was being debated.

You can only be sued (credibly) for saying something illegal. Why should removing Section 230 “disproportionately harm and silence marginalized people”? Are marginalized people more prone to saying illegal things? If so, is that why they’re being marginalized?

Do the people writing this accept that the most prominent example of people being marginalized, cancelled, and deplatformed are QAnon right-wing nut jobs? Are they advocating that they should NOT be banned on social media platforms? Are they standing up for their rights? No, of course not. We all know this kind of language, and what terms it’s being used as a proxy for.

As I’ve been saying, I want Section 230 revoked. You can play what-ifs about this all day long, but we, as a society, need social media platforms to be accountable — under threat of a lawsuit — for the things they allow on their services. They’d get serious about throwing the child porn and direct threats of violence off their sites in a New York minute.

Will they have to hire more people to do it? No, not even that. They’d have to buy a bunch more servers, and run them to block that junk. I have people tell me all the time that this is beyond current computer science. Bologna. If YouTube and Facebook can scan all uploads in real time for any copyrighted music, social networks can scan for nudes and threatening language in real time, and at least winnow down the posts that need further review. They’d have to spend a bunch of money on servers instead of throwing a pittance at a bunch of contractors in impoverished nations. Cry me a river. Spend some of those billions you’re already making every year.

Worry that it will affect you? Don’t say or post anything illegal. It’s that simple. What’s illegal? That’s up to the government, not Zuckerberg or Dorsey, or the boards of Facebook or Twitter.

I have literally no sympathy on this point. We’ve tried the internet for 25 years with Section 230, and Facebook and Twitter are literal existential threats to society which have been allowed to develop. Let’s get rid of it, make corporations finally take responsibility for these monsters they’ve created, be accountable for their algorithms and funding sources, spend some of their hoarded blood money, and just see what happens. I find it impossible to believe that this could make the situation any worse for society.