How to Force 8Chan, Reddit and Others to Clean Up

Aug 07, 2019 · 290 comments
Steven (nyc)
How many of the people here predicting a censorship apocalypse if Taplin's ideas are implemented realize that they are posting on a *moderated* comments thread?
Stephan (N.M.)
AAaaah! The irony of it. I remember when the country was having a nearly identical argument over pornography, and the positions were reversed, with the right arguing for standards and rules and the left screaming censorship. Now the shoe is on the other foot. Does it pinch? You can't have it both ways, folks! You can either have censorship or you can have freedom of speech. A couple of notes I will throw in for the irony of it: 1) Would this censorship apply to, say, Louis Farrakhan, or does it only apply to some groups? 2) 8Chan is incorporated in the Philippines; US law is irrelevant there!
Edward Allen (Spokane Valley)
Huh? No. The problem with speech online is the anonymity. Rather than control speech, make it public. Ask 8chan and Reddit and YouTube and even the comments section of the New York Times to stop allowing anonymous accounts, and reveal who says what vile thing.
Glenn Baldwin (Bella Vista, AR)
Wow, read through the replies here, and the number of people advocating wholesale suppression of speech through exposure to litigation or legislation is insane. And people think Trump is an existential threat to our democracy? There's no conceivable firebreak here; one person's pornography is another person's hate speech. This is a very slippery slope indeed.
Monique (Topeka)
I would be very surprised if anyone could force 8chan to do anything at this point. The site never truly went down, only the easy-to-access front end. 8chan propagates itself in real time to a peer network, which basically means that so long as even a handful of users have the site, it will be constantly re-established.
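For readers wondering how that kind of peer propagation defeats a takedown in principle, here is a minimal, purely illustrative Python sketch; the Peer class and its methods are invented for the example and are not a description of 8chan's actual software. After a few rounds of gossip, removing the original host leaves every other copy intact.

```python
# Toy gossip-replication model: once content has spread to peers,
# removing one node (e.g., a site's front end) removes nothing.
# All names here are hypothetical, not 8chan's real architecture.

class Peer:
    def __init__(self, name):
        self.name = name
        self.store = {}  # content_id -> content held by this peer

    def publish(self, content_id, content):
        self.store[content_id] = content

    def sync_from(self, other):
        # Pull any content this peer is missing from a neighbor.
        for cid, content in other.store.items():
            self.store.setdefault(cid, content)

peers = [Peer(f"peer{i}") for i in range(5)]
peers[0].publish("post-1", "some content")

# A few rounds of gossip and every peer holds a copy...
for _ in range(3):
    for a in peers:
        for b in peers:
            a.sync_from(b)

# ...so taking down the original host changes nothing:
peers.pop(0)
print(all("post-1" in p.store for p in peers))  # True
```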
Colin (France)
Ultimately, the problem with desperate and uneducated people sharing their toxic thoughts online is not their ability to share thoughts online. The problem lies with desperation and lack of education.
Susan Anderson (Boston)
I found this description very helpful. It's a deep dive into the amoral computer experts of the internet: "The site’s now characteristic tone of performative erudition—hyperrational, dispassionate, contrarian, authoritative—often masks a deeper recklessness. Ill-advised citations proliferate; thought experiments abound; humane arguments are dismissed as emotional or irrational. Logic, applied narrowly, is used to justify broad moral positions. The most admired arguments are made with data, but the origins, veracity, and malleability of those data tend to be ancillary concerns." https://www.newyorker.com/news/letter-from-silicon-valley/the-lonely-work-of-moderating-hacker-news Lotta jawbreaking words in there, but the point is that supposedly moral arguments are being made without regard to the victims of this supposed "freedom" of speech. I often wonder what "freedoms" victims have, when allies of hostile provocation argue for their supposedly "constitutional" right to maim, kill, and discredit ordinary people who are just trying to get through the day.
whowhatwhere (atlanta)
Sincerely asking, not a lawyer. What do legal minds say about imposing provenance laws? I mean, how can a platform legally make it so our civil liberties are protected while things do not magically appear on a platform like FB without it being clear to all the users who first posted it? My sense of the complications goes first to how anonymity is useful in social uprisings such as the Arab Spring. The authorities, if I remember correctly, just unplugged the services. But what are the other problems, and what are the solutions toward having it so that, basically, if you post something first, it can be known that you posted it first? Because my impression is this is not what we have now. Thank you.
ph1 (New York)
The way to solve this problem is not to hold social media companies liable for what people post, but rather to require social media companies to know who posted the offensive message. That is, pass a law stating that social media cannot allow anyone to post on their sites without knowing for sure the identity of that person. Why should this great country be destroyed so that Facebook can remain one of the most profitable companies in the world?
julie kreutzer (boulder, co)
Sounds great but it’s absolutely impossible to do. Whether you’re talking about Facebook, Reddit or any of these other groups, there are thousands of people posting, constantly. It would be absolutely impossible financially to have the kind of staff necessary to be blocking, in real-time, those who are posting incendiary messages even assuming you could train people to do so to the degree necessary to avoid liability.
ph1 (New York)
@julie kreutzer Did you read the article? These companies were able to stop porn and Jihadist messages from being posted. If they can stop those things, then they can stop incendiary messages. Why do we have to allow our great country to be destroyed in order for Facebook to make a big profit?
Henry Edward Hardy (Somerville, Mass.)
The idea that Facebook and Google can easily, transparently, and automatically computer-censor "bad" content without censoring "good" content is chimerical and risible. Government regulation of radio content, and then television content, led pretty quickly from the early days of freedom and innovation to a duopoly of two corporations: RCA and CBS. You might want to take a look at and consider Frank Waldrop and Joseph Borkin's 1938 book, "Television, a Struggle for Power." The power to control and censor internet content is the power to control the agenda and framing of issues in society. No single government nor corporation should have such power. That's the issue here.
Jomo (San Diego)
A couple of years ago, the US identified a number of Facebook accounts that were actually fronts for the Russian government, spreading disinformation. Slowly and begrudgingly, FB shut down those accounts, and then held it up as evidence of their good corporate citizenship. But they could have done so much more. With all their wealth and technology, they could easily have determined which users received the fake posts, and notified them. Anyone who reposted that info should have been made aware that they had unwittingly spread Russian propaganda, a valuable teaching opportunity. At every turn, the tech giants come up a day late and a (billion) dollar short. Concerns about censorship are valid, but so are concerns that we're allowing a handful of companies and individuals to control a huge segment of our commerce and communications. History has shown that concentrated power leads to bad outcomes.
BeenThere (USA)
Based on very real events going on right now: A mentally disturbed person has a long enemies list. To revenge himself he posts complete fabrications against not only those people, but their relatives he has never met, accusing them of lurid crimes. These results show up when the targeted people are googled. A court has determined the material to be defamatory & ordered it taken down. But the author has gone underground -- taking his computer with him, of course. The legal precedent is from the California Supreme Court, in a recent case involving Yelp. Yelp fought for & won the right, under Section 230, to keep up the material which the court had ruled defamatory. So if you can't find & compel the original poster to take it down, you may be completely out of luck. It's the tech companies which make this material available for the world to see. How is it a just and reasonable result that they can't be required to take down defamatory material? Why should Congress give these companies an immunity from compliance with court orders that no one else has?
Pete K (Austin, TX)
As of this writing, there are 285 comments on this article, all of which fall within Section 230. If the author's suggestion is implemented, the NYT would be liable for the content of any of these comments - which means its staff would have to spend all day reviewing comments and couldn't do their jobs. The only way to avoid liability would be to shut down the comment section. The internet lives and breathes through Section 230, in ways the author does not understand.
Michael Murray (San Mateo)
Yeah, this is laughable. The author of the article is effectively claiming he would like to go without the services that promote communication: there is no realistic version of the world where those services exist while carrying this immense liability.
El (Chicago)
@Pete K The NYT comments are already moderated, aren't they? NYT has voluntarily allocated resources toward this already. However, I'm willing to bet that the bar would be raised a lot higher (all over the internet) for moderated comments, and meaningful discussion would become very much threatened, if the filter were made of liability considerations instead of just common decency considerations.
Sam Browning (Beacon, NY)
@Pete K Losing some of the comment sections on the internet would probably be a net positive to our society.
Susan Anderson (Boston)
This article has surely opened up a hornet's nest of comments about "freedom of speech". But what about the freedoms of the victims of hate? I remember in Boston right after Charlottesville, "freedom of speech" was the cry of the demonstrators, who required police protection. 40,000 people showed up to protest them - all 40 of them. They've done better since then, but the waste of humanity and goodwill is staggering. It reminds me of those who promote universal easy access to machines for killing a lot of people in a hurry. Whatever motivates these skilled computer users to enable the lowest forms of communication, I wish they would put their skills to better use. They have all their arguments pat, but as with guns, I prefer working with other people for good and to solve problems, not going to extremes to protect those whose destructive influence is dissolving the civil society and the country I know and love. They also appear to be making claims of patriotism and calling those who condemn violence un-American. Really? It's time for people of goodwill to come together to solve problems such as climate change, the criminalization of poverty, and the concentration of wealth at the top through replacing jobs-with-benefits with "gigs". And, in a class by itself, the theft of elections and the rule of the majority by a minority who don't have any conscience about accepting help from some extremely dubious allies (Putin, MBS, and Kim Jong Un come to mind). How much "freedom" is that?
SteveRR (CA)
@Susan Anderson "Those who have long enjoyed such privileges as we enjoy forget in time that men have died to win them." ~ Franklin D. Roosevelt And let me make an addendum: "...and are so free to bargain them away"
Lynn Taylor (Utah)
Yes. Great idea. Change that law and sue the heck out of them all for allowing such hatred to permeate our society and, ultimately, causing the death and injury of so many. Sue them all out of existence.
Oof (PA)
We live in a world where the president conflates resisting racism with racism. It's a common practice to claim that pointing out abuse by the police or other state actors is "hate speech." In that world, I'd prefer to keep 230 in place and continue to use social pressure and decency as the primary tools for upholding community standards for speech. I understand that's not sufficient in the face of sites like 8chan and in the wake of the recent acts of domestic terrorism. But that's why I differ from the author. The problem isn't that "people are saying bad things online." The problem is that people are becoming terrorists online. We don't have to argue about what can or cannot be said. We don't need to get into debates about what constitutes a "thought crime." We have juries, judges, warrants and intelligence agencies; we just need to put them to good use when it comes to identifying domestic terrorist organizations and recognizing the difference between trying to give people a chance to talk vs. aiding and abetting. One of those activities should be broadly protected and held dear. The other should be prosecuted to the maximum extent possible without becoming cruel or unusual.
JBK007 (USA)
The president doesn't care, and will gladly forward any racist or hateful tweet that comes across his screen, if it serves his political and personal agenda.
randomxyz (Syrinx)
I read this article hoping for a thoughtful discussion of new approaches to address this issue. I did not get that. What part of “free speech” does the author not understand?
ph1 (New York)
@randomxyz You are right that "free speech" is protected by the constitution. However, there is no constitutional protection for "anonymous free speech." The way to solve this problem is to require Social Media Companies to know who is posting on their site. That way, when someone posts Jihadist messages, they can then be prosecuted.
Joel (Oregon)
If your goal is to completely remove freedom of speech online, this will help achieve that. The only reason these platforms exist is because they were protected from constant frivolous litigation from upset users. I think the author and those supporting him would be more comfortable in China, where everything you post online is monitored by the State and having the wrong opinions is punishable in a social credit system that forces you gradually out of society. If you think it will not happen, just look at the "do not fly" lists created in the wake of 9/11. How simple it is to arbitrarily restrict the freedoms of people who have committed no crimes in the name of national security. How simple it is to expand the definition of a dangerous or suspicious individual who has to be monitored and forbidden from using certain forms of transport. So you say you're only going to censor and deplatform the terrorists and white supremacists? How long until anybody who criticizes the government is deemed a potential terrorist? How long until merely exercising your first amendment right to protest puts you on a list that gets you shadowbanned on the internet? Anyone supporting this is disgusting and un-American.
Susan Anderson (Boston)
@Joel But murder, mayhem, child sex trafficking - those are fine. Foreign interference in our elections - that's fine. Lies - those are fine. Making billions, no conscience required. In fact, quite the reverse ... job decreation for fun and profit ...
Kalidan (NY)
Make them pay damages, and jail the owners. That should take care of it.
Clotario (NYC)
Yes, I agree completely. When someone says something you don't like, silence them by removing their ability to communicate. Better, why not issue imprimaturs so we know only approved messages are allowed to circulate? Only nice, safe and bland communications reflecting the popular thoughts of the day should be permitted. Signed, your friendly neighborhood fascist. Seriously, did no one read Fahrenheit 451? The notion of using offense as the social imperative behind mass censorship is an old idea. And it is just as bad now as ever! And precisely what is the philosophical underpinning behind assuming "we can all agree" specified content should not be allowed on major nodes of communication? No, we don't all agree. Nor can we agree that even objectively horrible content creates a social emergency requiring censorship. Beware of moral panics!
Bill (Midwest US)
Verizon owns Yahoo. Business divisions should not shield the provider in this case. Additionally, the public has the same inherent right to establish terms of service and guidelines for these businesses to operate under. Congress needs to assert that right and the will of the people.
michaelscody (Niagara Falls NY)
@Bill What the public has is the inherent right to agree or disagree with the terms of service by accepting them or not. It is the inherent and complete right of these businesses to set whatever terms they want, as long as they do not discriminate against any of the protected classes. If I set up a social media outlet and demand that every post start with "Hail the great Mike Cody," I have the right to do so. I will not get many subscribers, but I can still do so.
David (Minnesota)
There's a massive difference between CBS, Fox, NBC, and ABC, and social media platforms. By definition TV channels choose what they broadcast - they create the content. Conversely, social media platforms rarely actually create any of the content that you see on them - they merely function as intermediaries for different members to post. As a result, this analogy is completely incorrect: in one case you punish an entity for content they created, in the other you punish the entity for content they have never even condoned. This is an extreme example, but if I write hate speech inciting violence with pencil and paper, is it the fault of the pencil manufacturer?
Chris Kox (San Francisco)
Correct me if wrong. Right now the major players are private spaces. They may function as public accommodations, however; I am not sure. As private spaces, the major players already have the power to remove content posted by users. Of course, users have no "right" to speak in any private space -- they have a tentative invitation which may be revoked at any time. (E.g., The Times may choose not to publish this note, and after a succession of unacceptable notes, they may prevent my further access altogether.) Users may claim the shield of journalism and scream foul over "free speech," but without government intervention, the private space is within its rights to remove them. We should be wary of declaring the major players to be publishers, for if so, they would suddenly inherit rights which they currently may not have, those derived from the first amendment. As to the issue of liability, Times v. Sullivan is a good place to start if Facebook is suddenly declared a publisher.
Dan (VA)
I agree that something must be done, but disagree that censorship is the answer. One of the catalyzing forces of the internet is anonymity. This not only lets people say things they might not normally say, but allows bots to assume anonymous identities to spread disinformation. If we required some form of positive registration through which someone could be identified, a lot of the trash on the net would dry up and blow away.
Nick (Portland, OR)
I'm trying to *reduce* the ability of the largest companies in the world to track me.
ph1 (New York)
@Nick "I'm trying to *reduce* the ability of the largest companies in the world to track me." That is easy to do: do not join social media sites.
ph1 (New York)
@Matt: You are right that "free speech" is protected by the constitution. However, there is no constitutional protection for "anonymous free speech." The way to solve this problem is to require social media companies to know who is posting on their sites. That way, when someone posts jihadist or white supremacist messages, or posts defamatory remarks against an innocent citizen, they can then be prosecuted by the judicial system.
marsha zellner (new haven)
Who will do the policing? I believe here in the NYT there was an extensive article on the people Facebook hired to scan for content, and the terrible effects this had on them. As another commenter suggested: no post should be anonymous. Anonymity means no repercussions for your actions. Let the hatred be clearly assigned to those who spew it. Yes, some people will use pseudonyms, etc., but it will be much harder. Maybe it will cut down on online bullying too....
Ted (Nantucket)
It's ABSURD to equate poisoning people with bad medication to allowing people to read things on the internet. How can the NYT print that sentence in good faith? Words are not poison, they are not violence. Good luck running a paper if this becomes the climate around language.
JBK007 (USA)
@Ted However, hate speech is poison, particularly when it purposefully incites violence, and cannot be tolerated, even when the free speech card is played.
michaelscody (Niagara Falls NY)
@JBK007 Even if you are right about "hate speech", and I do not believe you are, it still leaves open the question of defining "hate speech". Is "All Hispanics deserve to die" functionally equivalent to "All bigots deserve to die"? Both are offering the threat of violence to a specific group, correct?
Scott (Atlanta)
The author of this piece misidentifies Verizon as a passive platform. It's an ISP. Comparing it to Facebook and YouTube is an apples-to-oranges comparison, even though all of the companies mentioned, including Verizon, scrape your data and sell it to the highest bidder. If these companies can know where you are, what you do, and what you like (enough that advertisers want to buy it), they can certainly find a way to limit this type of content.
Roarke (CA)
To the folks that say changing this law would drive social media companies out of business: Facebook has been fined $5 billion recently, and their stock rose because it was such a piddling amount compared to their revenue. What's to say they can't spend another $5 billion improving their system, or fighting off lawsuits if they don't improve their system? Regulation doesn't run giants out of business. It just forces them to spend money they'd prefer to spend buying yachts, mansions, and politicians.
sjs (Bridgeport, CT)
@Roarke Truer words were never spoken
Dave (MN)
What about an alternative approach: if a social media service didn't want to moderate its content, then it would be required to determine the true identity of every social media user/account, in the same way that banks are required to "know your customer" (https://en.wikipedia.org/wiki/Know_your_customer) to help prevent illegal transactions. One way or another, somebody is responsible for every post. In the case of social media, the purpose would be to prevent trolls, foreign governments and other bad actors from using social media, and to provide information regarding the real people involved with criminal posts. Ideally the public social media identifier would include a person's real name; I'm not sure if that is absolutely required, but it would take away the ability for people to say things anonymously. Social media accounts that are owned by foreigners would be identified as such: foreign and not authenticated. Foreign governments are welcome to provide authentication services, but the accounts would still be visibly marked as foreign or marked with their country of ownership.
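A rough sketch of the account record Dave's "know your customer" rule seems to imply, with unverified and foreign accounts visibly marked; the Account fields and the display_badge helper are hypothetical, not any platform's real data model.

```python
# Hypothetical KYC-style account record of the kind proposed above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Account:
    handle: str
    verified_legal_name: Optional[str]  # None until identity is authenticated
    country: str                        # country of verified residence
    domestic: bool                      # False -> account is visibly marked foreign

def display_badge(account: Account) -> str:
    # How a post byline might be annotated under this proposal.
    if account.verified_legal_name is None:
        return f"{account.handle} [NOT AUTHENTICATED]"
    if not account.domestic:
        return f"{account.handle} [FOREIGN: {account.country}]"
    return account.handle

print(display_badge(Account("troll123", None, "??", False)))   # troll123 [NOT AUTHENTICATED]
print(display_badge(Account("ivan", "Ivan I.", "RU", False)))  # ivan [FOREIGN: RU]
```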
michaelscody (Niagara Falls NY)
@Dave KYC is in place to prevent illegal transactions. Trolling is not illegal.
Edward Swing (Peoria, AZ)
Of course it's good for platforms like Facebook and Reddit to improve their moderation (both automated and human) but the issue is that removing their Safe Harbor protection is too much of a blunt instrument. They incur costs in developing moderating software or paying human moderators. Realistically, the social media, discussion, and other online platforms that people use can't have perfect moderation any time soon. The viability of those platforms could be threatened by opening them up to lawsuits about content posted on them. By all means, nudge them towards continually improving their moderation, but we don't want to drive them into bankruptcy over something that's to some extent beyond their control.
Nick (Portland, OR)
Any law that holds social media platforms accountable would have to be created in a manner that doesn't increase our current "monopoly" problems in the tech industry. Tailoring the law so that it requires companies to have the tools that only a few companies can create would have vast, negative consequences.
richard cheverton (Portland, OR)
Speech isn't free if every client-chasing lawyer in libelville can sue it out of existence. Once on the slippery slope, there's no stopping the descent into jihads against what someone gets to define as "hate" speech. We have seen previews of that coming attraction in universities. Oregon just made uttering "hate" speech into a felony. Prosecutions are sure to follow. These are tenured types who really, really like the idea that someone (guess who?) will be empowered to police speech--all in the name of protecting some superior virtue, which they get to define. They are parasites on our current fears. Their speech ("undermined democracy," indeed!) will not be hauled into court. The merciless protectors of the public's delicate sensibilities will enjoy the merry ride down the slope. Along the way, they will leave the folks (and it won't just be deep-pocketed corporations) they despise bankrupt, publicly shamed, probably unemployed in their wake.
Jeff (Boston)
@richard cheverton Laws against hate speech generally work in other countries, Germany for example. Maybe they just understand the consequences better than we do?
A D (Colorado)
Saying you can’t tell the difference between hate speech that becomes illegal content on 8chan and university debates about whom to invite to campus is no better than the slippery-slope argument that banning high-capacity automatic weapons would lead to banning all guns. Reasonable citizens know how to pass reasonable laws in the US, and although a full repeal of what this article is suggesting is too much (agree with you there), we can DEFINITELY FIND A WAY to hold these companies more accountable for cleaning up the filth they are subjecting young minds to.
Ken (Connecticut)
How long until the Trump controlled FCC begins to filter out any content supporting the Democratic Party or its views as "Dangerous Socialism"? Censorship laws in the hands of this particular government could backfire in a number of obvious ways.
P. Lamar (Atlanta, GA)
@Ken Censorship laws in the hand of ANY particular government WOULD backfire in a number of obvious ways.
AJPR (Chevy Chase MD)
Why wouldn't this work for gun manufacturers too? "Immunity statutes grant legal protection to gun manufacturers and dealers, shielding them from liability for a wide range of conduct. In 2005, after intense lobbying from the gun industry, Congress enacted and President Bush signed a law that gives gun manufacturers and sellers unprecedented nationwide immunity from lawsuits." - Giffords Law Center
Edward Swing (Peoria, AZ)
@AJPR I'm liberal and I support many types of gun control, but removing that protection is a bad idea. The fact is that we're talking about a product that is legal to sell and buy, there's no confusion about the risks associated with it, and it's functioning precisely as it's designed. The only reason one would consider civil lawsuits against the manufacturer in such a case is as an end run around Congress (where, unfortunately, little or nothing gets done around gun control). And, of course, those civil suits would not be successful - their sole purpose would be to drain money from gun manufacturers through legal fees. Nobody should feel comfortable with that approach - it could (and would) lead to increasingly using that tactic against liberal organizations in other contexts. As difficult and frustrating as it often is, we should focus our gun control efforts on Congress and state legislatures, not on weaponizing spurious civil suits against gun manufacturers. New laws (not civil suits) are the best and smartest way to address gun issues.
SweePea (Rural)
How do you stop a person from photo-shopping a photo to some heinous effect and distributing it as paper flyers in their town? The "problem" seems to be of scale, not of kind.
michele (syracuse)
Scale matters, as it should. If you poison your own well, that's stupid; if you poison a reservoir that serves 2 million people, that's a crime.
William Fang (Alhambra, CA)
Social media companies are embedding ads with content. It's gotten to the point that when I'm on the NY Times website using Google Chrome, I don't know if an ad is placed by NY Times or Google. And the ads are often placed so they look like content. Facebook makes ads look like newsfeed items. In addition, these companies try to place ads that suit my viewing pattern. I get too many ads for dog, financial, Chinese-language, and male care products for the ads to be random. Thus I think it's fair that platforms that extensively exploit content should be made responsible for the content they deliver.
Dan (Arlington, VA)
Maybe instead of shutting these sites down, law enforcement should be monitoring these sites. People who are primed to be radicalized will find a way. So why not let them congregate on these sites and have law enforcement monitor the sites to identify people who might act out on their impulses?
John F. (Pennsylvania)
I have been teaching and talking about Section 230 for more than 15 years. I am happy to see this Opinion. The original purpose of 230 was to protect internet companies as the internet developed. Today, 4 of the top 5 and 5 of the top 6 largest companies in the world are internet companies (if you include Apple). They can figure this out. I disagree, however, with one key part of this. If you are going to eliminate the Safe Harbor, eliminate it completely. If you leave it in an altered form, creative people will find ways to use it to continue publishing the same content. No other industry has this type of Safe Harbor, and there is no longer an argument that internet companies are fragile newborns that need extra protection. I am confident that if the Safe Harbor were eliminated tomorrow with a sunset on 9/1/2020, these companies would figure it out. Or new companies would figure it out and push the current ones aside.
michaelscody (Niagara Falls NY)
@John F. UPS cannot be sued or prosecuted if they deliver a letter bomb. My supermarket cannot be sued or prosecuted if their swap board has a racist rant pinned to it. Many other industries have the same sort of Safe Harbor protections, just not codified into law.
Chris R (Pittsburgh)
I cannot disagree with the author vehemently enough. The safe harbor rule does protect knuckle-dragging troglodytes, but it also protects every other person who isn't a passive consumer of content. Obviously, the idea behind withdrawing the safe harbor rules is to combat the spread of dangerous and destructive ideas - ideas that do not conform to our community standards and, as such, should be policed with the full force of the law. However, who decides what makes an idea dangerous and destructive? In my lifetime, advocating for LGBTQI rights was seen as a clear danger to society. Publishers, broadcasters, and the like were clearly and continually discouraged from providing realistic and nuanced portrayals of LGBTQI individuals. In other times, advocating for unions, social justice, family planning, and more was also seen as dangerous to society. While we may believe that what makes an idea dangerous and, thus, liable to censorship is straightforward, our zeitgeist is not immutable. The end result is that laws like this will be turned against those who fight in future struggles for equality and freedom. Obviously, this sounds like a slippery slope argument, but we have seen this happen in the past. If it has happened before, it will happen again as societal forces change and evolve in ways we cannot predict. It's best for us to leave open this space where ideas - even unpopular and dangerous ones - can be voiced and confronted.
Jill (Portland)
"to clean up" is to censor , and to censor expression, however repugnant, is oppression
Robert (Out west)
It’s not a question of popularity. It’s a question of not inciting to riot and murder.
Eric Black (Arizona)
@Jill do online pedophile rings need to be "cleaned up"? Or would that be oppression?
northlander (michigan)
They want a race war. Nothing less.
Glenn Baldwin (Bella Vista, AR)
Make the internet more like television? Like, a safe space for overweight, middlebrow grans to watch "The Great British Baking Show"? You all have these NYT sidebars; just leave Reddit alone (which, btw, is a highly moderated space, just that their idea of "offensive" isn't the same as yours).
Blaire Frei (Los Angeles, CA)
Social media sites more than anything else need to actually enforce their rules of conduct fairly and consistently. They have proven incapable of this time and again in the name of trying to be "neutral," even though neutrality is never possible for a for-profit company with as much power as Facebook. YouTube, for example, has been well known to demonetize queer creators or queer content because it isn't "advertiser friendly," while giving creators who repeatedly harass, target, or dehumanize queer people a free pass. Of course, getting these companies to care about addressing the violent ideologies that thrive and recruit on their sites would require them to care about equality or doing the right thing. But the only thing these companies care about is "engagement," and fascism, racism, xenophobia, and homophobia are very "engaging" topics.
PrairieFlax (Grand Island, NE)
Nothing stopped the Unabomber, Columbine, etc., long before the vile websites. What this will do is reduce mass shootings (hopefully - good) but probably not eliminate them (bad).
Lou (From a different computer)
Twitter suspended the account of one of the shooters for violating its terms of use. Of course, the suspension was for the off-site, real-life violence. One of his followers had posts with links to the Patriot Front and racist YouTube content. Those actually violated the terms of use before someone physically got hurt.
Bob (Minn.)
Included on the clean-up list should be websites like Tucker Carlson's "The Daily Caller," which regularly posts opinion pieces by white supremacists, conspiracy theories, and edited photos and videos that then go viral.
Scott (Illyria)
Wow, Mr. Taplin does not know what he's talking about. First, 8Chan is a Philippines-based site that is out of U.S. jurisdiction. Eliminating safe harbor won't affect it a bit. The only solution is to block access from the U.S. to the site (and even then, users will probably circumvent the block using VPNs and other tools, similar to how users get around the firewall in China). The very fact that Mr. Taplin lumps 8Chan in with Facebook and others shows that he doesn't know what it even is. Secondly, Mr. Taplin falls into the misconception that C.D.A. 230 absolves social media companies of any responsibility to moderate their sites. He needs to read Sarah Jeong's excellent column on what C.D.A. 230 really does: https://www.nytimes.com/2019/07/26/opinion/section-230-political-neutrality.html?searchResultPosition=1 As well as this Slate article: https://slate.com/technology/2019/02/cda-section-230-trump-congress.html The danger of just repealing C.D.A. 230 is that the only way Facebook et al. would then have a hope of avoiding lawsuits is to completely abandon any pretense of policing their websites. In other words, they would just become pass-through providers like the telecom companies. Of course, that WOULD turn them into 8chan. We need intelligent regulatory reforms for social media, but first people need to know what they're talking about.
SWLibrarian (Texas)
Take care. This is just the opening the neo-fascists who support Trump's campaign of dehumanizing hatred are looking for to silence their critics. Trump is their puppet and too foolish to understand the harm silencing media can do. Sensible gun control laws, mental health initiatives, and a president who stops spewing hatred are what we need.
WestCoastBestCoast (D.E.I.)
Oh boy, you are just in a lather to bring down corporate censorship on us. This sort of chilling effect is so much more damaging to a democratic society than "hatred, violence and QAnon conspiracies." How long do you think it will be before criticizing politicians, the wealthy, the elite, etc. will be considered "a toxic screed" and banned? Given history, my money is on "immediately."
Daedalus (Rochester NY)
Always fun to read calls for censorship from those who would be among the first casualties. And while we're here, why does every guest columnist seem to have a book in the works?
sjs (Bridgeport, CT)
Sounds good to me.
Jenna (Harrisburg, PA)
Why is pornography lumped in here with violence and hatred? Once again, why is sex considered as dangerous as violence? To be sure, there are bad actors in the production of porn that include violence. However, policing that can be done without banning porn from the Internet. Other than that, I think this piece has good ideas.
Sue (Ann Arbor)
I don't understand how reddit is getting lumped in with 8chan in this title.
SD (Maryland)
The Times doesn't seem to have a problem with censoring content, especially when it comes to any opposing views of progressive thought.
Jared (Toronto)
The author neglects to mention that the task of monitoring and censoring millions, if not billions, of photos, messages, and videos (!!!) each day is monumentally more difficult than filtering through and maintaining controls on one's own content (akin to TV networks). Not to mention the ridiculous double standard that is now being applied. Would Facebook, Instagram, and Snapchat censor "The Terror of War," one of the iconic photos of the Vietnam War? How about LGBT+ people's expressions of their sexuality? Who defines what is good and what is censored? It's irresponsible to publish an article that has been so poorly thought through. We should expect better. Then again, maybe we can all cash in on big tech class action lawsuits because of knee-jerk laws passed on this basis!
Robert (Out west)
I think what I’m tiredest of is the sheer gutlessness of the people who dote on 8Chan and the rest. You want freedom? Fine by me. Try taking some responsibility for it. And by the way, WE built that freedom and the technology you’re using, not you. You wouldn’t know how.
EB (New Mexico)
Sound reasoning, good solution.
Christopher (Van Diego, Wa)
Agree completely.
Dan Ari (Boston, MA)
You blithely ignore the vast influence of community censorship and the FCC. Perhaps you are too young to remember when blacks and gays did not have the place on broadcast television that they have now, even though the current representation falls short.
Mmm (Nyc)
What a terrible idea.
Paul (Austin)
I guess we need to hold the FCC liable too? I was just talking to my SO about a new pocket knife I have. She segued into how she now knows how to stab people from watching Outlander. She then went on to explain the best way from the front (under the rib cage and up) and from the back (go for the kidneys). So now do I have to be concerned that, with her new knowledge, she will go on a spree with my new knife? For clarity, to some: of course I do not have to be concerned.
Matt (New Jersey)
As a software developer with a degree in computer science, I am absolutely appalled by this opinion piece. Section 230 of the CDA is the most important law in technology to protect freedom of speech online. Software developers are not gods and the software we write cannot control the actions of another person. If you bash someone's head in with a hammer, do we hold the manufacturer of the hammer liable? No. You hold the individual responsible liable. Eliminating section 230 of the CDA would destroy the internet as we know it and make jobless tens of thousands of people who work for small internet companies. I highly urge everyone who reads this to visit https://www.eff.org/issues/cda230 to understand the very significant fallout of doing what Mr. Taplin suggests.
Peter Blau (NY Metro)
Taplin's argument is illogical and unpersuasive. Insofar as the internet's role in mass killings is concerned, all the examples he cites are of content posted AFTER the killings took place. Yes they are disgusting and offensive. No they are not causal. Taplin does not explain how the ability to sue an internet company would result in more consumer safety, as opposed to simply more nuisance lawsuits and legal fees. The comparison to Tylenol is ridiculous. A pill you swallow can kill you; a written article, video or photo cannot. The comparison to network TV is also specious: the public cannot upload content to a TV network, hence the TV industry does not face the same challenge in regulating content as do the internet companies.
Frunobulax (Chicago)
This is an exceptionally bad idea. Cyanide in a Tylenol capsule may actually kill you, while crazy, violent, and repugnant speech posted on a website is, on its own, inert, requiring a physical actor in the real world to load up the truck with an ammonia bomb or pull the trigger of the assault rifle. Private companies are of course free to monitor and censor content, as they do now, but it's absurd to extend potential liability in this way simply on the basis that content is hateful and grotesque.
Jim Linnane (Bar Harbor)
Could it be that big media like the Times and the Washington Post think they can convince politicians to destroy competitors like Google and Facebook by promoting ideas like this, without realizing that if Congress can punish YouTube for what its users do, Trump can punish the Times for what it chooses to publish?
RT (Colorado)
No, we can’t all agree that pornography is bad. Jihadist videos, yeah, I have a problem with that. White terrorism, or any kind of terrorism or calls for violence, that can go too.
Brian (Ohio)
This is impossible to do without infringing on free speech. If you think I'm wrong, just tell me where the line between speech and hate speech is. Maybe you think the 1st amendment is outdated because of the internet. Maybe people really can't handle it. Right now the USA is the only place with free speech; I think it's worthwhile even if people die because of it. It's also counterproductive. Jihadists now use encrypted forms of communication. Forbidding something always makes it more attractive. It also indicates a basic flaw in the censors' world view that can't be reasonably defended. Millions of young Muslim men really are in a position where suicide bomber seems like their best option. I think the use of force against ideas will be just as effective here as it has been in the war on terror.
Pat Choate (Tucson, AZ)
Hate sells. Social media companies need to be liable for what they publish. Democracy is at stake, and hate-for-profit media needs to be tightly regulated out of existence.
Chris Anderson (Chicago)
We don't need more censorship. I don't want to see the day I am governed by liberals and forced to their way of thinking. The internet should remain free from any controls.
paul gottlieb (East Brunswick, NJ)
Wait! We're going to give Donald Trump and his corrupt Attorney General William Barr legal tools to attack social media companies that they don't like? How do you think that's going to work out?
Matt (New Jersey)
@paul gottlieb - Very very poorly.
mpound (USA)
Too bad the author failed to include Twitter in his list of socially irresponsible platforms. Twitter does more damage to civil discourse and simple human decency than all the other platforms combined. Imagine how much more pleasant life would be if Donald Trump, the AOC "Squad" and all the other countless numbers of gassy psychos across the political spectrum no longer had that forum to spew their useless garbage back-and-forth.
Jacob (New York)
Putting Reddit in the headline next to 8chan is misleading and irresponsible. The article itself barely mentions Reddit, and Reddit admins have in recent months been more aggressively banning and quarantining toxic communities. Making it seem equivalent to 8chan is just wrong.
Herr Andersson (Grönköping)
Absolutely. Repeal section 230 now. Make the internet safe again.
M Clement Hall (Guelph Ontario Canada)
By all means let's have censorship -- by all means!
Steven Georges (Princeton, NJ)
How about a ban on "live streamed" video? The networks have a regulated 7-second delay and employ censors. Social media companies could have as short a delay on "live" streaming as they need to block content (such as the video streamed by the Christchurch shooter). If it takes them a day to monitor content, then so be it. Otherwise "live" will become a dangerous weapon of mass destruction. Common carriers/internet service providers should be held to the same requirement, with substantial liability if they fail to respond responsibly in the public interest.
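A minimal sketch of the delay idea, assuming a fixed review window during which a human or automated censor can drop frames before they go out; the names and structure are invented for illustration, not any platform's real streaming pipeline.

```python
# Toy broadcast delay: frames sit in a buffer for a fixed review window,
# so a censor can block them before they are ever shown "live".
from collections import deque
import time

DELAY_SECONDS = 7.0  # the review window; broadcast TV famously uses ~7 seconds

buffer = deque()  # (arrival_time, frame) pairs, oldest first

def ingest(frame):
    buffer.append((time.monotonic(), frame))

def release(is_blocked):
    # Emit only frames older than the delay that a reviewer has not blocked.
    out = []
    now = time.monotonic()
    while buffer and now - buffer[0][0] >= DELAY_SECONDS:
        _, frame = buffer.popleft()
        if not is_blocked(frame):
            out.append(frame)
    return out
```

Lengthening DELAY_SECONDS to a day, as the comment suggests, changes only the constant, not the structure.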
DJM (Colorado)
Censorship questions aside, AI is simply not up to the task, and likely won't be for quite a few years. Simple coded phrases or tangential references are enough to fool an algorithm. Ever try to keep up with the latest teenage slang? You get the idea. We'd be better off codifying existing restrictions on speech -- no libel, "fighting words", incitement to violence, false alarms, etc. -- for online content. Think of it as a base-level Terms of Service to which all providers are expected to adhere. Then allow readers to flag violations and require the hosting company to review flagged posts within, say, 24 hours, and remove any that violate the rules. If they fail to do so, the company then becomes liable for the content.
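DJM's flag-and-review rule is concrete enough to model; here is a toy version with the 24-hour deadline made explicit. The names are invented for illustration, and a real system would need persistence, deduplication, and appeal handling.

```python
# Toy model of the proposal: flagged posts must be reviewed within 24 hours,
# after which the hosting company becomes liable for them.
from datetime import datetime, timedelta

REVIEW_DEADLINE = timedelta(hours=24)

flag_queue = []  # (flagged_at, post_id) pairs

def flag_post(post_id, now):
    flag_queue.append((now, post_id))

def liable_posts(now):
    # Posts whose review window has lapsed without action.
    return [pid for ts, pid in flag_queue if now - ts > REVIEW_DEADLINE]

flag_post("post-42", datetime(2019, 8, 7, 9, 0))
print(liable_posts(datetime(2019, 8, 8, 10, 0)))  # ['post-42']
```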
Douglas McNeill (Chesapeake, VA)
Everyone got their knickers in a twist when CBS broadcast Janet Jackson's "wardrobe malfunction" at the Super Bowl and they paid a substantial fine. 8chan et al. foment violence and spread hate and no one is responsible and no one pays. Advocating violence IS pornography, right up there with videos of killing puppies or dismembering living children. And those who publish these words, these images, need to be excoriated. Freedom of speech, even hateful speech, is important but the First Amendment cannot be a shield and a suicide pact.
Tawny Frogmouth (Melbourne, Australia)
The article says "I believe we can all agree that mass murder, faked videos and pornography should not be broadcast...". No, we can't. Take pornography, for example. It's apparently common on German television. Sexually explicit movies, which would not be considered pornographic in Australia but very probably would be in the US, have been broadcast on a free-to-air government television station in Australia. You could even argue that the government should provide free wholesome (feminist, non-violent) pornography in order to drive out the deluge of violent and bizarre commercial pornography, which is widely held to be a major cause of sexual and relationship difficulties in young people. You could argue that suppressing mass murder videos insulates the population from the horror, a handy thing to do if your aim is to sell firearms or lead your country to war. Define "faked video". I contend that many mainstream "news" reports fall into that category because of bassy sound effects that are meant to make the viewer anxious and misleading editing. It's why I prefer the written form to get my news. What I think you meant to say is that you want to impose your views on everyone else. The trouble with that is, I have no idea who you are and you do not live in my country.
rhdelp (Monroe GA)
Those who wrote the Declaration of Independence and the Constitution could never have anticipated the Internet or assault rifles on the streets. Would they be protecting hate groups that actively promote violence towards others and make schools and public places unsafe for citizens? Would they reach the point, as Sean Hannity suggested, of surrounding schools with walls, with armed guards outside and on every floor? Would they place armed guards in every public place in order to protect the few carrying guns and weapons that only belong in war zones? I have no right to harass my neighbor, or to generate hate towards them and teach my children it's ok to terrorize others. Get off the free speech bandwagon; the law states that all men are equal. Trump should be removed from office. What he is doing is deliberate and vicious.
michaelscody (Niagara Falls NY)
@rhdelp Well, they did promote violent revolution against the legal, established government of the colonies, did they not?
Melting Pot Citizen (Olympia)
And if Zuckerberg thinks that AI can cure these problems in even 10 years, I have a bridge to sell him and anyone else who thinks so. My God, what hubris! (Having worked in the field of AI with some very smart people since before Zuckerberg was born.)
edtownes (kings co.)
Mr. Taplin is extremely persuasive ... up to a point. My 2 serious reservations about his piece are these: 1) I'm not sure that TV is "better" for having moved away from 7 unsayable words to where HBO et al have brought us, but trying to "put a lid on" the dark side of our humanity - be it lust or violence - is way harder (and way more likely to fail) than the author indicates. 2) Yes, the author grapples with "the slippery slope" re freedom of speech. But the closer you look, the less viable his recommendations appear. Take some of Bernie's campaign speeches in 2016. You and I may agree that railing against insurance companies is 100% different from railing against Muslims or Jews, but the first time somebody plants a bomb outside the Hancock tower or Transamerica's, you'll see what a perfectly PREDICTABLE "unintended consequence" looks like. Think of it this way. Wine may represent some things of enormous historical and sensory significance ... while it shortens some people's lives due to liver damage. Capitalism and democracy have many magnificent accomplishments to point to, ... but there are no small number of "exceptions," and the latter are absolutely inevitable. Facebook and Twitter (as much or more than the 2 in the headline) have monetized ALL of our emotions. To insist that they stick to "just good ones" is absurd. The Times had an instructive piece a week or 2 back weighing "Is BDS 'hateful'?" I sure don't trust Facebook to answer questions like that!!
Jay (Vermont)
Not this again. There's plenty of good cause to be mad at the social media giants' abject failure at moderating harmful content. The answer to that problem is absolutely not repealing Section 230. Should Section 230 be repealed, any website with user-submitted content could be liable for injurious speech posted by anyone. That would include defamatory speech. Thus, a site with user-submitted content would need to review every single submission for its truth or falsity, and its tendency to harm the reputation of the subject of the speech. Can anyone here envision an algorithm to do that? If someone posts a comment on a NYT article that their boss has worn the same diaper for 10 years, the boss may never be able to find the person who wrote the comment, but they could file suit against the NYT for defamation in the absence of 230. Whether they would survive a motion to dismiss is another question, but the suit would be much more plausible. The result would be either every site enlisting an army of lawyers to review every user submission, 24/7, or the end of user-submitted content. There must be better, less nuclear solutions proposed. Those advancing this solution lack legal knowledge, and it shows.
BeenThere (USA)
That wouldn't make it into the Times' comments because the comments are moderated. That's the whole point -- the Times is acting responsibly. Other tech companies could do the same, but they don't bother because Congress has given them an immunity no one else has.
TCP (MA)
As much as I hate white supremacy and bigotry, I hate censorship even more! Furthermore, it isn’t going to work, and it is going to backfire on us. Let’s imagine that the Internet was available in the 1950’s. I can envision scenarios where they would take down discussions of desegregation, feminism, homosexuality, and abortion as “offensive content.” Can you think of a law that wouldn’t also be used against these things? Furthermore, banning discussions of white supremacy isn’t going to make it go away anyway (unfortunately). It will just drive them underground to the Dark Web. Something needs to be done, but this isn’t the answer.
Max duPont (NYC)
Every private enterprise has the right to ban customers who violate its codes. Alas, these internet platform hucksters have no code of ethics; they care only about ad revenues. Any private enterprise can choose to ban hate speech. The first amendment does NOT apply; it only restricts the government, not private enterprises. Shame on these hucksters.
willt26 (Durham NC)
Please don't mess with the internet. Silencing bad ideas doesn't make them go away- it makes them stronger. We used to have the courage to challenge bad ideas with good ones now we are just cowards that are afraid of words.
Matthew (Nj)
Problem is, who decides what toxic content is? Certainly nothing can go wrong, right? We'll set up a board of the most trustworthy censors; they will be above politics, always with the very best intentions. My goodness, how naive. Plus it would never stand up to constitutional scrutiny. So back to the drawing board for you.
Southern Boy (CSA)
Ever heard of the first amendment?
MK (New York, New York)
I don't understand how anyone can possibly suggest this. The model with which companies like Facebook and YouTube remove content is reactive. It has to be. Anyone can upload anything on these sites, and the only way their moderators can tell if posts violate the sites' rules is by humans checking them after they've been uploaded. If YouTube could be sued for any content that's on its site, it would have to get rid of its open upload system. Since it can't possibly have enough moderators to check all videos being uploaded, the only way to guarantee that, for example, no white supremacist video is up for any length of time would be to make sure none of the videos being uploaded are white supremacist before they're uploaded, an impossible task. Even with the current restrictions, I very much doubt that none of these types of content make it through the filters. Does YouTube have enough moderators to check if a speech uploaded in an obscure Indian or Eastern European language qualifies as hate speech? All of this also doesn't even touch the slippery slope/vague definitions of hate speech problem. This is not an argument for free speech absolutism, by the way. It is absolutely a good thing that YouTube and Facebook remove ISIS and Nazi videos from their platforms. But allowing sites like Facebook, YouTube, and Reddit to be sued for any content on their websites basically means the end of the open internet, something people seem to be forgetting the benefits of.
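MK's point about reactive versus pre-screened moderation can be made concrete in a few lines; both functions below are invented for illustration and elide everything a real system would need (queues, appeals, ranking).

```python
# Today's reactive model: the post is visible immediately and only
# reviewed after the fact, typically once someone flags it.
def reactive_upload(post, site, review_queue):
    site.append(post)
    review_queue.append(post)

# What strict liability would force: nothing appears until a human
# clears it, a gate that open uploading at today's volumes cannot survive.
def prescreened_upload(post, site, moderator_approves):
    if moderator_approves(post):
        site.append(post)
```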
Barbara Franklin (Morristown NJ)
I’ll believe them when they suspend Trump for inciting hate and massacres. And the rest of the media when they stop carrying it on the front page EVERY day, numbing us to this and elevating their platform. Yes, that's you, too, NY Times. At least put it in context: "When Cummings initiated investigations into Jared and Ivanka, Trump threw yet another Twitter temper tantrum as a distraction" - give it context and minimize its import. It's only democracy that's at stake.
Xtine (Los Angeles)
The safe harbor provisions of the law were amended to implement SESTA-FOSTA, resulting in making thousands of consensual sex workers unsafe. Perhaps we need to look at our country's penchant for endorsing violence, while pearl clutching when it comes to sexuality. A sex worker cannot interact with clients online, while mass murderers and white supremacists are endorsed by those who embrace "free speech" on the Web. Oh, the irony.
David (New York City)
I agree with the idea of holding sites responsible. How about also having a statute that requires people who post to register with a verifiable address and identity? I imagine the level of hate and racism would drop dramatically if those people were no longer anonymous.
JosieMosch (NorCal)
LET'S FORGET ABOUT THE SMALL FRY and start pressuring Twitter to remove Drumpf. He has no 'right' to be there, and if we start pressuring them in a concerted way, the climate is right to make it happen. He is the POTUS and he can communicate with people in myriad other ways. The FIRST AMENDMENT does not guarantee access to Twitter. TWITTER should apply its own rules to him, just as it does to you and me.
jerry brown (cleveland oh)
Big Government solutions scare me. It is the single worst attribute of the Democratic Party. The solution to the problem has already been written in this very article: "In 2015, when I started researching my book on how Facebook and Google undermined democracy, there were over 40,000 jihadist videos on YouTube. At the time, YouTube seemed to be ignoring the problem. However, after Procter & Gamble - one of YouTube's largest advertisers - found one of its ads on an Islamic State propaganda video, YouTube started to clean up its act. In the same way that it used artificial intelligence to keep pornography off its network, it started to block the upload of Islamic State videos." Can we please just give the free market a chance to solve this problem? Unintended consequences lurk down the road of bureaucratic governmental overreach. Thanks.
Tony Smith (Staunton VA)
Brilliant suggestion! Removing liability protection traces its roots to the Supreme Court's observation that falsely yelling "Fire!" in a crowded theater is not protected by free speech. Yelling "Kill ______ (name your victim)" should likewise get no protection as free speech. Corporations like Facebook will nimbly find innovative ways to protect free speech while operating within the very broad confines of constitutional protection against its abuse, if it's in their corporate interest to do so. Let's lift the safe harbor protection and unleash corporate innovation to protect us against abuses of free speech.
Chris R (Pittsburgh)
@Tony Smith As an aside - saying 'Go and kill this person now!' is not protected speech.
Jason Smith (Houston)
"Some may argue that deciding what counts as toxic video content is a slippery slope toward censorship. ... I believe we can all agree that mass murder, faked videos and pornography should not be broadcast — not by cable news providers, and certainly not by Facebook and YouTube." Maybe we can and maybe not. Who gets decide what is and is not fake? Who decides what is and is not pornography? What about "toxic" content that does not fall into those categories? How is every piece of content going to be evaluated? Ultimately someone has to make the call, and that is a power that can be abused. You call out Facebook and Youtube as inappropriate forums for what you deem "toxic" content. But this is only because those companies chose to be family-friendly environments. What about 8chan and Reddit where users expect some of this content? How does this kind of censorship even address what you identify as the problem to be solved? Many years ago, I saw a "toxic" video on the internet. It was a brutal video of an Al Queada member decapitating an American soldier. Much like the mass murder videos you call toxic, it was a disturbing display of a violent reality. Not appropriate for all forums to be sure, but should it have been censored entirely? Banned from the internet with anyone who hosted it open to legal action? Who gets to decide?
TrumpTheStain (Boston)
These large social media / uber mass media companies are thumbing their noses at regulators, legislators, and public sentiment, and entrenching themselves in order to maximize revenue. Their goal is to addict users to their platforms by providing highly sophisticated curated content and advertising. People aren't bumping into either the content or the ads; it is pushed, published, and curated for them. The moral component aside, they need to be broken up and regulated, and those regulations need to be enforced. We need privacy and security laws like the "right to be forgotten." User agreements need to be simplified and pared down. One group of university researchers printed out an entire user agreement, and it took them 30 hours just to READ it. Double acceptance clicks don't change the deceptive and practical reality that NO ONE reads those agreements except university researchers and the smarmy corporate attorneys who draft them. Our capitalism-out-of-control culture acts more like a casino that rationalizes client losses as "they knew the risk." No, most people don't. In addition, the feckless, corrupt administration in DC encourages this, at least by default.
Rupert (Alabama)
I view these platforms as an existential threat to our democracy. Therefore, I do not care that some small internet companies may go out of business as a result of the author's suggested revisions to the governing law. If we have learned anything in the internet age it is that there may indeed be such a thing as too much free speech.
Camilla (New York City)
I disagree with the author on several levels. First, the author claims that to uphold our democracy, we must violate the freedom of speech and freedom of assembly of minorities within our nation. Second, as horrific and unjustifiable as these acts of terror have been, we have been neglecting the root cause of domestic terrorism. It's easy to chalk it up by calling these countrymen, who need our help, racists and white supremacists. It is hard to treat the symptoms, so what the author is suggesting is that we kick the can down the road to corporations and let them, instead of treating the disease, silence the patients who exhibit symptoms. What an absurd proposal, with no regard for human dignity and equality. Third, I'm greatly disturbed that after all this time, not a single mainstream media outlet has directly addressed the El Paso lunatic's manifesto. They are all misinterpreting the young man who killed dozens of people to suit their own political ideology. It's amazing to me that we look down on authoritarian regimes for censorship, yet Google censors the full text of the manifesto. This is an opportunity for people in power to publicly address the brewing negative sentiments that half of our country currently possesses. Silencing these voices serves short-term benefits but is disastrous for the long-term stability and harmony of our nation.
Peter Blau (NY Metro)
Is Mr. Taplin editing the comments here? The selection of comments seems lopsided in his favor. In any case... I find it quite unusual and distressing that someone who has grown wealthy and influential under the special freedoms we grant to the entertainment industry would now attack those same freedoms when they apply to new media industries. From its very start, the LA film colony came under attack from reactionaries who, with scant evidence, accused the movies of spreading Communism, immorality and crime. More recently, those attacks extended to pop music and video games. Today, simply because they believe in their hearts that the internet companies are evil, people are making similarly unsubstantiated claims against social media. This time the attack does not come from the Bible Belt, but from folks who call themselves progressives.
Easy Goer (Louisiana)
@Peter Blau I challenge that assertion, absolutely. I watched the "Evening News" on CBS one night. I counted 14 advertisements for drugs, both OTC and Rx (over the counter and prescription), in a 30-minute period. It is now the "Evening Pharmaceutical Report," with sound bites of "news" in between. It reminds me of the guy who said "I went to a fight and a hockey game broke out." Super PACs have near-identical M.O.s.
Peter Blau (NY Metro)
@Easy Goer What assertion are you challenging? Re: network TV, they are — and always have been — free to accept ads from anyone who doesn’t promote an illegal or otherwise restricted product. At least those network drug ads are approved by the FDA. You might notice that the local stations, TV and radio, are currently running lots of ads that are far sleazier: supplements claiming to cure Alzheimer’s, investment schemes that promise 12% annual returns, cancer doctors who hawk cures for terminal patients, etc. It’s like the Wild West in local broadcasting these days - and I’ve never heard of a station being sued.
Easy Goer (Louisiana)
@Peter Blau I meant more generally. I agree with you, especially about the sleazier TV ads which claim to cure _____ "fill in the blank". My thing is, I grew up in an era when there were no pharmaceutical advertisements on TV, and it was so much better than today. Look where Big Pharma is now. Those ads are to make money, not "help" people. I have had the best and worst of both extremes: a brilliant health insurance plan (which cost a fortune) that covered almost 100% of everything, which I had for over 25 years; then no health insurance for 3 years; and (as of 3 months ago) Medicare, which is the other extreme. Having experienced all that, I have to believe in a more socialistic medical plan for people, like Canada and Great Britain have. None are perfect; they have good and bad points. What we have in the US is certainly not the best for everyone; it is best for a select few (which I used to be).
A. Stanton (Dallas, TX)
If I were a user of Facebook -- which I am not -- and came across something objectionable up there, I'd be tempted to mail it or e-mail it to Mr. Zuckerberg's parents.
Easy Goer (Louisiana)
@A. Stanton Wow. I believe in freedom of speech, absolutely. I don't agree with everyone, but this is exactly what sets America apart from the rest of the world: diversity and tolerance, outstanding ideals. In practice, however, they are almost gone since Trump came into office. He is tearing apart almost everything good we used to do and be. This solipsistic lump of meat and bones is a far cry from the man who preceded him (and every other president). Trump is closer to, well, take your pick: Kim Jong-un, Vladimir Putin, Rodrigo Duterte, Nicolas Maduro or Idi Amin.
Khal Spencer (Los Alamos, NM)
There is no clear definition of "toxic video content" here, leaving it to the imagination of whoever is offended. The courts have clearly held what speech is unprotected. Beyond that, we have a vexing problem of upholding the 1A while trying to keep the worst offenders from going over the line. The FCC analogy to TV is invalid. The internet is, and always was, a freewheeling platform unlike TV. It should stay that way. Unfortunately, that leaves us with tougher ways of fighting hate and ignorance than putting your or my thought police on the various platforms.
David Salahi (Laguna Niguel, CA)
@Khal Spencer Mr. Taplin did indeed provide some examples of what he would deem toxic content: "I believe we can all agree that mass murder, faked videos and pornography should not be broadcast — not by cable news providers, and certainly not by Facebook and YouTube. " I don't see why the TV analogy is invalid. You say: "The internet is, and always was, a freewheeling platform unlike TV. " Just because something is a certain way that doesn't mean, in and of itself, that it should stay that way.
Max Davies (Irvine, CA)
If Facebook or Google had to spend 50% of their profits policing content to avoid being sued, they wouldn't go out of business. Their value would be diminished, but what remained would not be tossed away. Incentivized by that loss of value, they'd find cheaper ways to protect themselves and gradually recover. Small internet content distributors might not be able to adapt, but the Safe Harbor laws could allow exemptions for them. Safe Harbor laws were passed to create the environment in which a young and fragile industry could develop, and they have done that superbly. But businesses with market capitalizations in the hundreds of billions don't need those protections and they must be withdrawn.
Joel (Oregon)
@Max Davies If it requires spending half of Google's or Facebook's profits to comply with a government mandate, you have effectively mandated monopolies that nobody can ever break. You have killed what little competition these sites have, ensured everyone is locked into using them forever, and made it that much easier to censor and monitor people for dissent because you've gathered them all into one place. In short, you've turned the US into China.
michaelscody (Niagara Falls NY)
My local supermarket has a bulletin board where people post ads, notes, and the like. If the manager sees something offensive, he removes it without notifying the poster. By this author's reasoning, if someone posts a notice for a Klan meeting and the manager does not see, recognize, and remove it the supermarket is liable for any harm coming from that meeting. I think not.
alex (dc)
sorry but revoking that provision would likely mean the death of the internet as we know it. platforms like facebook and youtube might survive but smaller ones certainly won’t; being liable for the content that is posted is too much of a legal burden for most. be very careful of what you wish for.
TrumpTheStain (Boston)
And that's a loss how?
Andreas (Atlanta, GA)
If you can't or won't remove content that is clearly illegal then you have no right to exist as a service provider. None! We'll talk about the borderline content when we get to that point.
Stephen Merritt (Gainesville)
Although Mr. Taplin is right that there is content that needs to be blocked as an issue of safety, he's too sanguine over censorship. There are already countries where internet safety/decency laws are being used to attack opponents of the government. Does Mr. Taplin think that the current administration would fail to do likewise if they had the means? They're already sending signals that they're thinking in that direction, even without a change in the law; Donald Trump seems to imagine that he can act purely through existing presidential power. I can't think of a good solution to this problem. I hope someone else can.
Margaret (Oakland)
The author is not advocating government censorship. Broadcast and cable TV have reporters, commentators, pundits, experts and Joe and Jane on the street — these are analogous in some ways to user-generated content. These traditional media companies are able to review the content and avoid content that could get them sued (e.g. libel, slander, invasion of privacy, false light, intentional infliction of emotional distress or any other basis for a lawsuit under existing law). That's what social media companies have been given a pass from doing, and that's what the author is saying they should no longer be given a pass from. The content review process isn't rocket science; traditional media companies review content as part of what they do every day.
WestCoastBestCoast (D.E.I.)
@Margaret "The author is not advocating government censorship. " They're advocating changing the Communications Decency Act to allow the government to punish online platforms for what their users say. That is *exactly* government censorship!
michaelscody (Niagara Falls NY)
@Margaret "Congress must revisit the Safe Harbor statutes so that active intermediaries are held legally responsible for the content on their sites." That sounds like government censorship to me, how else do you interpret it?
Robert M (Washington, DC)
The author equates the FCC's ability to regulate offensive content on broadcast television, which it has had for 75 years, with policing social media platforms where hundreds of millions of videos and text posts are uploaded every day. So he says we need to change safe harbor laws so that social media platforms are held accountable the same way broadcast television networks are. It's easy to say we need to stop these videos now, and that banning toxic content must be the highest priority at social media platforms, but the logistics of enforcing this and holding platforms accountable for the hundreds of millions of videos and text posts their users submit are insane. Unless the U.S. wants the same kind of closed internet China has, I'm afraid we're stuck with accepting that there will be dark places on the internet, however disgusting they may be, if we still want to enjoy our First Amendment rights to free speech. The author conveniently does not offer any insight into how we would actually do this, because there is no easy answer.
Margaret (Oakland)
The author is not advocating government censorship. Broadcast and cable TV have reporters, commentators, pundits, experts and Joe and Jane on the street — these are analogous in some ways to user-generated content. These traditional media companies are able to review the content and avoid content that could get them sued (e.g. libel, slander, invasion of privacy, false light, intentional infliction of emotional distress or any other basis for a lawsuit under existing law). That's what social media companies have been given a pass from doing, and that's what the author is saying they should no longer be given a pass from. The content review process isn't rocket science; traditional media companies review content as part of what they do every day.
WestCoastBestCoast (D.E.I.)
@Margaret Since you are posting the same thing over and over, I will do likewise with my reply. ------------------------- "The author is not advocating government censorship. " They're advocating changing the Communications Decency Act to allow the government to punish online platforms for what their users say. That is *exactly* government censorship!
BeenThere (USA)
No, it's not. It's not about government enforcement. It's about injured private parties being able to sue. The way they have always been able to sue everyone, for example for defamation, except for these tech companies to which Congress granted immunity.
Mac (Boston, MA)
The idea that social media platforms should be liable for what people post is absurd. They provide a tool, people use that tool in whatever way they see fit. If someone is beaten to death with a hammer, do we blame the hammer company? Of course not, we blame the person misusing the tool. The same should be true online.
Todd (Providence RI)
@Mac What if the hammer company specifically designed their hammers to not accrue fingerprints and allow perpetrators of "hammer murders" to act anonymously and with impunity? Would the hammer company bear any responsibility then?
Mac (Boston, MA)
@Todd Anonymity when using sites like Facebook and Reddit is a myth. As we have seen from all the data scandals lately, everything you do is tracked. Now don't get me wrong, there are tools that one can use to obscure their identity (VPNs, TOR browsers, etc). Using those tools is akin to the hammer-murderer buying a pair of gloves and a ski mask. Do we blame the hammer company for that? Again, the answer is no.
C. Bernard (Florida)
@Todd We already have radar detectors, bump stocks and bongs.
RG (NY)
It's worth noting that there is a potential basis in constitutional law for barring white supremacist postings on Facebook etc. as a matter of law. There's a strong argument that the clear and present danger standard, established by Supreme Court Justice Oliver Wendell Holmes in a seminal decision, is applicable. The demonstrated power of internet postings to inspire right-wing extremist violence arguably constitutes a clear and present danger, as much as shouting fire in a crowded theater, the example Holmes gave of such a danger.
Dave (MA)
@RG The "clear and present danger" standard was overruled in Brandenburg v. Ohio (1969) in favor of the standard of "imminent lawless action". Wikipedia has a good article on it. This makes prosecuting anything other than direct incitement to violence illegal.
Jagdar (Florida)
@Dave Yes, SCOTUS receded from the WWI cases, like Schenck's "clear and present danger." But even the Brandenburg test permits some restrictions, if (1) the speech is directed to inciting or producing imminent lawless action; and (2) the speech is "likely to incite or produce such action." Granted, this is a high bar to meet, which is good. It will safeguard against overbroad censorship. I suspect some of the speech on 8chan would meet the Brandenburg test. We need to amend the Safe Harbor provision to penalize providers who do not take action against speech that is dangerous because it meets the strict Brandenburg test. That is a narrowly tailored test, and it should not encumber providers or chill speech that is protected, even hate speech.
Chris (Austin, TX)
I don't know why you seek to give 8chan more credit than it deserves by lumping it together with reputable sites like Reddit, Facebook, and Youtube. Mainstream media has made the El Paso shooting into the best form of marketing possible for the site. On top of that you suggest dismantling a law that is fundamental to the protection of the first amendment in the modern era. Here's the thing, these ideas - as hateful and repugnant as they might be - are out there. Better to have them out in the open where we can combat them freely than to force them underground where there is no hope of escaping the echo chamber.
Cathy (Hopewell Jct NY)
We have unleashed a dragon, and we are not going to be able to stuff it back into the magic cage that once held it. The central theme of the tale of Pandora is that once you open the box, it stays open. That means that we are going to have to fight the toxicity of the instantaneous spread of propaganda, psy-ops, ideological screeds, on some other scale beyond trying to squash each post as soon as it happens. That is like trying to play whack a mole with one mallet on a table with millions of moles. And of course, we have the problem of determining what gets eliminated - will it be Islamic screeds, and anything Rush Limbaugh utters? Will it be articles on anti-vaccination? Will it be scholarly screeds that deny climate change or promote the idea? Sites that promote obvious violence are easy to identify; sites which promote hatred, but leave the solution to the lone wolf are not. Let's face it - there will be no single, easy solution.
Jay Orchard (Miami Beach)
It's been said before and needs to be said again: There needs to be a law that makes it a crime to publicly threaten violence or urge violence against individuals or groups, on social media, or otherwise, in the same way that it is a crime for an aircraft passenger to make a comment about a bomb on board or for someone to threaten the life of the President.
Flemming (Vienna)
@Jay Orchard I'm not sure if you are being sarcastic. Threatening and inciting violence are already crimes.
BeenThere (USA)
They're already crimes, but 230 makes the tech companies immune.
Tom (Reality)
Wow, this article is certainly an eye opener. I guess the author has no idea what "user generated content" is and wants a walled garden for the internet. Apparently the world should also pose no moral risk to anyone, since the choices will have been made for them.
Margo (Atlanta)
Freedom of speech needs to remain protected. However, that should not exempt the writer or speaker from being put on a "list" maintained by NSA or other government agencies. I've seen some submissions advocating somewhat uncivil behavior and rather than engage or attempt to dissuade, I hope their authors have been placed on a watch list. These are the ones who should have their anonymity revoked and answer to a judge in a courtroom.
Kev (Sundiego)
It’s ironic that a media outlet that is perhaps the most ardent supporter of its constitutional right to freedom of the press, and which constantly tests those freedoms by blurring the lines between press and propaganda, says that we should limit others' First Amendment rights of free speech by banning those they disagree with online.
D.F. Koelling (CT)
@Kev I agree that the safe harbor laws need to be there and that websites should not have to police their sites for content, especially sites like Reddit and YouTube that are largely user-driven in so many unique ways. So this guy's opinion is misguided at best and self-interested at worst, but it is an Opinion piece, not an editorial position.
edtownes (kings co.)
@D.F. Koelling I think you and the O.P. miss the point. I'm sure for religious people, something like "There is only one God and we must worship Him" is "all she wrote." For most of us, NO ONE VALUE or virtue commands that kind of obeisance! Methinks you would change your tune were the person you most loved killed by the kind of unhinged individuals who a) have far easier access to firearms than common sense would suggest, and b) are far more likely these days to find a community of individuals who share their manias (whatever they are), amplify them, and, on a lengthening list of occasions, drive them to mass shootings. There is, I agree with you, no one solution, but just as the government "encourages" people to bring their own bags when they go shopping, I believe the government SHOULD put its oar in when it comes to curbing calls to murder some of one's fellow human beings. Yes, it IS a very tricky business, but "doing nothing" has the most awful consequences: the occasional child being murdered or maimed. We're not talking about trying to render life without ANY danger, or limits on discourse akin to "Big Brother prescribing one's every thought." We simply need some checks and balances that do not now exist. Yes, the world WAS a better place before it became possible to know that a million people will read WHY YOU SHOT 100 PEOPLE within 24 hours of your having done such an odious thing. It's hard to put any genie back in the bottle, but we MUST TRY!
Anglican (Chicago)
Think it's censorship? It's merely holding these info-delivering companies to the same standard we've long held TV, radio, and newspapers. They had to hire editors; why should Facebook get to reap the profits without bearing similar responsibility? Companies like FB have enormous wealth and can well afford to both hire eyes and write code. They just don't want to. Much like Uber wants to compete with taxis without being regulated the same way.
Margaret (Oakland)
Totally agreed - great comment, thanks.
SWLibrarian (Texas)
@Anglican, You watch. In the next couple of days the agitator-in-chief will accuse every media outlet other than Fox of manipulating information and worthy of being silenced. We must take care in doing anything related to media to ensure we do not provide a backdoor to neo-cons and neo-fascists to silence critics of their hatred and racism. Trump himself should be silenced on Twitter where many of his comments violate the rules of the platform. I'm not expecting that to happen any time soon.
Ian (Netherlands)
I find the comparison of a tech company to a broadcast company absurd and ignorant. Policing a few TV channels' content (much of which is repeated and already vetted) is not comparable in the slightest to policing the hurricane/tsunami/flood of content which is generated every. single. second. by the world's social media users. It would force these companies into a highly defensive posture that results in a lot of accidental censorship. I loathe most of these tech companies, but in holding them accountable, we don't want to make them the gatekeepers of the world's communication any more than they already are. It also raises the question of what future issues we are setting ourselves up for. Not that long ago, the topic of homosexuality (to give one example) was considered as taboo as pornography. Are we really willing to let tech companies be the ones who make this type of decision for our society? And this doesn't even begin to touch the issues of privacy and security which content scanning would severely undermine. WhatsApp was used extensively to spread misinformation before the Brazilian election; if we hold WhatsApp responsible for every message sent (again, hardly feasible), are we OK with them reading and processing those messages? Because they will; they are tech companies. Personally, I'm not. I agree we need some solution to help prevent these massacres, but this solution is an awful one.
Michele (Cleveland, OH)
Won't regulation always lag behind terrible practices by corporations? A corporation's goal is to maximize its value to shareholders. Social good is not part of that equation. It is the responsibility of governments to regulate corporations and curb the worst abuses. Clearly, our Congress is an utter failure at this currently. The laws that worked when the internet was in its infancy are no longer sufficient. I suspect we will have to leave it to the EU to make good decisions. And I suspect, no, I know, that as long as Donald Trump occupies the White House no improvements will be made. Rational discussion and leadership are needed, starting immediately, to get the nation moving. Trump can't provide either.
Margaret (Oakland)
Agreed - the regulation has lagged here and needs the update the author has identified.
Josh Hill (New London)
Such is the impracticality of identifying all violent speech that sites would become ultraconservative and vacuum up everything that runs the risk of being controversial. Comparing the internet to the broadcast media is disingenuous (and you have to be the first person I've ever seen who has called Fox responsible). As the internet is now our primary medium of communication, what you are suggesting is the equivalent of imposing censorship on printing presses and meetings, something we moved beyond several hundred years ago. And while Facebook may have decided that pornography isn't desirable on its platform, there is abundant freedom, thanks to the law, for sites that do provide pornography for those who wish to see it. Liberty has a cost, but the cost of losing it is higher.
Margaret (Oakland)
The author is not advocating government censorship.
RT (Colorado)
Google makes money from hate, violence and worse on YouTube and in search. It also acts as a censor for any industry or website that threatens its market share or gets an appealing amount of traffic. How many niches has Google blundered into because it saw some easy money there? Too many! Why does Google suppress the results of single-word websites, like candy.com and millions of others? Because showing them at the top of the search results would reduce clicks on paid results and lower its income. Google isn't public enemy number one, but it's near the top. Break it up.
John K (New York City)
The problem with the current state of affairs is that there is no accountability for publishing falsehoods or hate. There is no accountability because authorship is not clear. Authorship is not clear because it is too easy to be anonymous. If everything were traceable, a big chunk of this problem would be solved.
Christian (San Francisco)
A truly awful idea and horrifying concept. Freedom of speech means freedom of speech. Period. With extremely narrow exceptions for child pornography, yelling fire in a theater, defamation, and obscenity (an exception which is largely passé), our country is founded on the principle that censorship, prior restraint and the chilling of free speech are antithetical to our values. The writer's proposal, if adopted, will extinguish that concept. Once we label certain speech off limits (which would be the practical result of platforms being open to litigation of all sorts), all speech is fair game. Is criticism of Christianity "hate speech"? Perhaps someone who does not like Christians flips out and commits a crime, and Richard Dawkins or Bill Maher and any platform that hosts them get sued. Maybe they win, but only after spending millions. So what does any smart platform do? Bar any speech that could be remotely controversial. Crazy people will always do crazy things. We had massacres and mass slaughter long before the internet, and the key to defeating it is more speech, not less.
Xiong Chiamiov (USA)
@Christian One parallel we can look at is the banking industry. Banks are heavily regulated in the way the author suggests, and accordingly they (usually) avoid any business that might be a problem for them; it's just not worth the cost. We see this play out with legal marijuana businesses that can't get bank accounts, which is one of the many hurdles driving them back into the black market.
BeenThere (USA)
Defamation is illegal in theory but in practice 230 can make it impossible for the defamed to get any legal relief. Unless you can identify, find & compel the poster to take down the defamation, the tech companies (platforms & search engines) have no obligation to remove it. It's through them that the defamation is available to the world. The poster may quite literally be in hiding -- or dead, or out of the country. Should the law then take from the defamed person any possibility of relief?
Porter (Sarasota, Florida)
Just as the act of "yelling fire in a crowded theatre" is not in any way protected by the First Amendment, the existence of sewers like 8Chan and Reddit (so glad you mention that disaster of a site) that encourage hateful and destructive words that lead to hateful and destructive actions is not, should not, be covered by the First Amendment. My personal view is that Reddit and 8Chan should be shut down as dangers to the civil order and to the intentions of our Founders and that regulation must finally come to Twitter and Facebook. Self-regulation clearly doesn't work; it's beyond time to take action against the purveyors of hate.
As-I-Seeit (Albuquerque)
Let's do this! Change the law and require responsibility of these online PUBLISHERS!
Chris Manjaro (Ny Ny)
This opinion piece represents an extremely dangerous threat to free speech, and its basic premise is so wrong as to be laughable: "Some may argue that deciding what counts as toxic video content is a slippery slope toward censorship. However, for the past 75 years, since the first television broadcasts, the Federal Communications Commission has been able to regulate offensive content on television. I believe we can all agree that mass murder, faked videos and pornography should not be broadcast — not by cable news providers, and certainly not by Facebook and YouTube. Since broadcasters do not have the protection of “safe harbor,” they engage in a certain level of self-regulation, to avoid being sued." There is no comparison between social media and TV networks, for one obvious reason: TV networks don't air content provided by TV viewers, and there is no system for viewers to upload content onto the networks. Eliminating these safe harbor rules would effectively end social media and basically any form of interpersonal communication which occurs over the internet, such as texting and email. What if two people plan a murder while sitting in a public park? Should victims be allowed to sue the city which ran the park? This whole idea is a crazy, dangerously reactionary proposal.
Margaret (Oakland)
Broadcast and cable TV have reporters, commentators, pundits, experts and Joe and Jane on the street — these are analogous in some ways to user-generated content. These traditional media companies are able to review the content and avoid content that could get them sued (e.g. libel, slander, invasion of privacy, false light, intentional infliction of emotional distress or any other basis for a lawsuit under existing law). That's what social media companies have been given a pass from doing, and that's what the author is saying they should no longer be given a pass from. The content review process isn't rocket science; traditional media companies review content as part of what they do every day.
Michael Sheeran (Albany, NY)
@Chris Manjaro I would bet the effective end of social media would be seen by the author as a good thing.
Patrick (Ithaca, NY)
With net neutrality already thrown into the gutter, should we trade the relatively free internet we still have to become as censored as China's? Oh sure, the motivation now may be to prevent the promotion of "extremist" views and ideas, but once down this slope, will legitimate criticism of the government be next? I don't think so. We must accept the world as it is, with all the good, the bad and the ugly that it contains. Artificially sanitizing content may feel good, but it stinks as much, if not more, than that which we're trying to protect ourselves against. No thank you. Keep the safe harbor law exactly as it is.
Margaret (Oakland)
Traditional media companies are held legally responsible (they can be sued) for what they publish. When the internet was an infant, Congress passed an exception to this for internet companies, which were viewed as tiny little babies in need of protection. (The exemption is in Section 230 of the Communications Decency Act.) Fast forward to today: Facebook, Google, Instagram (Facebook), Twitter... not tiny little babies in need of protection. No, they are gargantuan behemoths in need of serious regulation. Solution: get rid of the exemption in the Communications Decency Act; hold social media to the same standards of responsibility for what is published on their platforms as traditional media. The playing field has long since tilted to social media's great advantage. Level it back out: repeal the exemption in the Communications Decency Act. Looking at you, Congress!
Tim H (California)
If we banned anonymous commentary on social media sites, a lot of this would go away. Does the First Amendment provide the right to remain anonymous?
Matt (New Jersey)
@Tim H - Yes, it does. Anonymous communications have an important place in our political and social discourse. The Supreme Court has ruled repeatedly that the right to anonymous free speech is protected by the First Amendment. A frequently cited 1995 Supreme Court ruling in McIntyre v. Ohio Elections Commission reads: Anonymity is a shield from the tyranny of the majority. . . . It thus exemplifies the purpose behind the Bill of Rights and of the First Amendment in particular: to protect unpopular individuals from retaliation . . . at the hand of an intolerant society.
Michael Sheeran (Albany, NY)
@Tim H I would say close to it. When we cast our votes in an election they are all but anonymous.
Paula (Chicago)
Hate is not something easily regulated. Speech is not something easily regulated. Venting emotions/beliefs through speech may even prevent action at times although the current isolation in our society makes it more likely that hate speech froths people up into action. I would rather know who is espousing the hate speech and intervene than try to prevent it from being shared "out in the open." How do we use these sites to identify potential risks instead of attempting the near impossible of preventing expression (and liability).
Margaret (Oakland)
Changing online discourse is not impossible. When online media companies are held accountable for the content they publish, they find ways to effectively filter out the things that can get them sued or cause them to lose advertisers. This is not impossible. Repeal the exemption from being sued and watch: social media companies will clear out a great deal of the problematic content.
Margaret (Oakland)
It’s not “near impossible” to review content and remove things that could get you sued (libel, slander, invasion of privacy, false light, intentional infliction of emotional distress or any other basis for legal action under existing law). Traditional media companies review content as part of what they do every day; it’s not rocket science. Repeal the exemption for social media companies and watch: online discourse will improve. It’s tucking social media into an existing framework. This is doable.
Seanathan (NY)
this article's argument is a bit ironic when taken with Taplin's book title. If we pass onerous regulation allowing the government to fine and people to sue internet companies for user-generated content, how are other web start ups ever going to compete with Facebook and Google? Their positions would be even further fortified and markets cornered. The dominance of US tech companies is thanks to the forgiving regulatory landscape they faced through the 80's personal computing revolution, the 90's internet boom and the last decade's smartphone explosion. Why is there no Airbus to Silicon Valley's "Boeings"? Should we make our industry less competitive because offensive content was hosted on a website for a few hours?
Margaret (Oakland)
The market is already cornered. A change in the landscape might actually shake the market back up and reinvigorate competition. Here, the change in the landscape is to stop protecting online social media companies from responsibility for the content published on their platforms. This change will create more jobs while it improves online discourse. The content review process isn’t rocket science; it’s what traditional media publishers have to do (newspapers, broadcast tv, cable tv). This is a situation that can be improved. This is a solution known to us and one with precedent in the traditional publishing context. To me, it’s a no-brainer. Repeal the exemption in section 230 of the Communications Decency Act. There will be sound and fury... and then social media companies will get it done under the new landscape. Think of when Europe changed its data privacy rules: sound, fury... then companies got down to adapting to the new landscape.
Margaret (Oakland)
It’s pretty simple, actually. The content review process isn’t rocket science; traditional media companies review content as part of what they do every day. No longer exempting social media companies from having to do the same would create jobs and shake up the market, making room for new players.
mike (Massachusetts)
"The safe harbor laws were created for what is known as passive (or neutral) intermediaries. Verizon, for example, is a passive intermediary platform: It makes no attempt to edit or alter the bits flowing through its fiber optic cables" So if reddit chooses to not block/alter any content, they won't be held accountable, but if they do block certain content, they will be punished if they don't block enough content? This logic makes no sense.
Margaret (Oakland)
Currently, Reddit can’t be sued for the content of what’s published on its site. This is because (unlike traditional media companies) federal law gave social media companies an exemption from being sued. If this exemption is now repealed—because social media companies are now huge, powerful and rich and no longer in need of special protection—then social media can be sued (e.g. for libel, slander, false light, invasion of privacy, intentional infliction of emotional distress... or any other basis for a lawsuit under existing law) just like traditional media companies can be sued. So social media companies will be incentivized to remove content from their platforms that could get them sued. It’s pretty simple, actually. The review process isn’t rocket science; traditional media companies review content as part of what they do. Making social media companies do so would likely create jobs and might even shake up the market and make room for new players.
mike (Massachusetts)
@Margaret If a social media site gets sued every time a user slanders a public figure, Twitter, Facebook, etc wouldn't be able to exist. Traditional media is held accountable because their content is created by people they hired, you can't expect a twitter user to be held to the same standard, it will just result in extremely over-aggressive censorship.
Michael Sheeran (Albany, NY)
@mike Thank you for pointing that out. Common sense tells us that if you close down the public forums then these defectives will gravitate to more private and insulated messaging platforms where they will never hear a dissenting opinion and we will have no warning until it's too late.
Jonathan (Oronoque)
This is simply not feasible. Faced with liability like this, these web sites would have to shut down. Anyone who wished to say something on the internet would have to buy a server, install Apache, and hook it up to the internet.
Margaret (Oakland)
It’s pretty simple, actually. The content review process isn’t rocket science; traditional media companies review content as part of what they do. Making social media companies do so would likely create jobs and might even shake up the market and make room for new players.
Jonathan (Oronoque)
@Margaret - The traditional media companies do not receive millions of submissions per hour. The cost of manually reviewing that volume of submissions would be incredibly high.
Gideon (NC, USA)
@Margaret "The content review process isn’t rocket science..." No, it isn't. It's much more difficult. With rockets, you know the weight, the distance, the target, and you can plan those things out. Social media content review involves a lot of subjective evaluations: This does not offend me, but will it offend others? Someone posts: "Boys are stupid, throw rocks at them." Is that a joke or a call for violence? Rocket scientists have it easy.
Larry (Union)
It would be nice to hold people accountable for a change. Seems like we have not had that in a long time.
MEM (Los Angeles)
Facebook and the other social platforms want it both ways: all the revenue from promulgating content and placing ads along with the content but no responsibility for the content. They weep about the 1st amendment and whine about the technical difficulties of keeping certain content off their platforms, but not because of principles but because of profits.
N. Cunningham (Canada)
@MEM.....and they profit obscenely. When FB can be fined $5 billion and shareholders actually buy in, pushing stock value higher, the firm can afford to hire more people for oversight; pay for better algorithms. They can, they just don’t want to.
Jason (TX)
In the real world, principles, no matter how altruistic, aren't going to keep the lights on. It seems strange to be upset that for-profit corporations seek profit. Certainly there should be a balance between profit and risk and safety, but if anyone knew exactly where that balance lies, they would probably be running Facebook by now.
Brian (New England)
@MEM There is already an incentive for platforms to regulate their content. Advertisers are terrified of appearing next to offensive material. If YouTube's or Reddit's engineers could write algorithms to purge their websites of everything 'objectionable,' they would have done so years ago. In addition, the left's new urge to censor speech is very troubling. Today's political blasphemy may very well be tomorrow's progressive cause. If the internet had been invented 300 years ago, think of all the things people would have wanted to censor: support for gays, abolition of slavery, rebelling against King George...
JMC (Lost and confused)
This is an excellent start. There is no reason to protect the enablers of hate. To really get to the bottom of the problem, though, we have to re-think the whole concept of anonymity on the internet. The idea that you are anonymous on the internet is a farce to begin with. Law enforcement can easily know who you are, as can Facebook, Google and the hundreds of trackers on the internet. The only people we are anonymous to are private citizens, who have no way to find or identify their stalkers and attackers. Who benefits from the illusion of anonymity? Trolls and stalkers. Would they spew their hate if there were real-world consequences? Would they act the same way if their family, neighbors and employers were aware of their sickness? Would they continue to harass if they could be sued and prosecuted? Just as the big companies should be liable for what they enable, so too should individuals. Free speech is not the same as anonymous speech. Anonymity is where the trolls hide. Every advertiser on the planet can find them; they are only hidden from their victims. Holding big companies liable, combined with getting rid of anonymity, will solve 95% of the internet's harassment and trolling problems.
Xiong Chiamiov (USA)
@JMC > Free speech is not the same as anonymous speech. True anonymity is required for free speech. As you note, most of the systems we're talking about here are far from anonymous, since the companies track you and so can the government easily. That's not to say we should give up on the idea though - there are systems like Tor and Freenet that provide more substantial anonymity and are therefore critical tools for fighting repressive regimes. A desire for there to be truly free speech havens online does not mean every place needs or should meet that criteria, but not wanting large popular sites to allow "undesirable content" also doesn't necessitate shutting down the ability for it everywhere, either.
Josh Hill (New London)
@JMC While I understand your concerns, it is actually fairly easy for someone to remain genuinely anonymous on the internet if they choose to do so, using readily available tools like the Tor network. What you're describing is the knowledge advertisers gain about the casual user, not the user who is serious about keeping his identity secret. In my experience, anyway, the problem is not so much anonymity as the failure of some website operators to engage in effective moderation. When I reported a number of offensive YouTube comments, for example, they were back up a few hours later. If a site doesn't care, its comments section will turn into a pigsty.
Jason (TX)
This is somewhat false. It can still be fairly easy to remain anonymous if you are careful and use the TOR network. The issue with that is you have to never make any mistakes, but certainly there are people that successfully use it everyday to remain anonymous. I remember reading that our CIA spies use TOR to maintain cover on clandestine assignments.
Steve (NY, NY)
Mr. Taplin has been an insightful and informed voice, fighting for these issues for many years. He is 100% correct that the safe-harbor status is a terrible law that causes real harm, and removes incentive for sites to care about anything but profits. Newspapers and TV networks must be legally responsible, and it has never hurt free speech. Reading the other comments here is incredibly depressing, because it is clear that the tech companies have done an amazing job of brainwashing the public on this issue.
Mobocracy (Minneapolis)
So we basically need to protect the ignorant from themselves? I don't think most social media platforms themselves are so much the problem as it is the users who use them. By and large the American public via social media demonstrates itself to be ignorant, gullible and willing to believe nearly anything that fits its biases. To the extent that social media platforms are to blame, it's to the extent that they weaponize people's ignorance and gullibility. I'm not sure whether this is a devious choice by the likes of Mark Zuckerberg or just the logical outcome of algorithms that measure what people want and give them more of it. I also think that lumping Reddit in with Facebook, Twitter or niche websites like 4/8chan is a mistake. Reddit is incredibly diverse -- there are many highly moderated subforums on Reddit with a near-academic level of contribution required to even post, and the moderation is generally subforum-specific. There are a ton of niche forums dedicated solely to the topic at hand, and even if nearly unmoderated there's often no political content of any kind. Facebook and Twitter are megaphone sites, with the former highly manipulated to magnify controversy for engagement as well as commercial manipulation. 4/8chan is kind of hard to take seriously, and I doubt that there's any way to stop people with fringe ideas from congregating as long as the internet itself exists.
Margo (Atlanta)
The problem is 4/8chan is being taken seriously by a number of people.
willie currie (johannesburg)
For Matt’s position to be consistent he would need to support the free speech of jihadis and pornographers. Otherwise what he argues is contradictory.
Matt (New Jersey)
@willie currie - "I disapprove of what you say, but I will defend to the death your right to say it" -Evelyn Beatrice Hall Government censorship of speech is a very slippery slope. Be careful what you wish for.
willie currie (johannesburg)
@Matt Try telling that to someone as automatic gunfire passes through them. That’s what it really comes down to. Unfortunately.
Matt (New Jersey)
@willie currie - I got news for you, destroying the internet isn't going to stop gun violence. Quite the opposite in fact when you have tens of thousands that will be out of work.
Xiong Chiamiov (USA)
Without the safe harbor clause there would've never been a web 2.0: no MySpace, no Facebook, no Twitter, no Flickr, no Deviant Art, no Last.fm, no Reddit, and on and on. Any website that allows users to post things (including this comment section) could not financially exist. There are plenty of problems we've seen come from the increased democratization of the internet, sure, but getting rid of it entirely seems an awful poor idea.
tj (Boston)
@Xiong Chiamiov Is that necessarily a bad thing?
Texan (USA)
Tech companies do not create the animosity or vile thoughts people have towards each other. Counterintuitively the sites in question can be important in stopping tragedies before they happen. Undercover work has always been important to government agencies. Now undercover is on-line. The source of these terrible messages can be tracked down, or their messages exposed as a fool's trope.
Lost I America (Illinois)
We already know everybody online. We are all stars in our own ‘Truman Show’. Soon we all will be put on trial and locked up. Release all data and lets get it on.
Barbara Franklin (Morristown NJ)
“I know it when I see it.” Supreme Court Justice Potter Stewart The problem quite simply is Facebook/YouTube and Google are too big to monitor the billions of clips coming in each day. The NY Times and other media can. I agree, they are not simply funnels - they either staff it properly - AI ain’t good enough today - or cut it down to a size that can be managed.
Ann (Michigan)
This piece was convincing to me until faked videos of Nancy Pelosi were put alongside Christchurch shooting videos. These are not nearly on a par, and this seems an intractable problem with tech companies arbitrating speech. Are they going to decide upon and regulate parody now? I don't have any kind of an answer, but I do think we need to be very cautious in our zeal to rid the world of dangerous speech. Even the FCC, as cited here, does not regulate parody. Its mandate is very constricted. And can tech company algorithms, or even a small staff, tell the difference between parody and outright lies? The difference turns on a high-level recognition of tone, impossible for a machine to detect.
merc (east amherst, ny)
How about throwing Twitter in there as well? How about at least a 12-hour delay before a Twitter post can be seen in public, thus giving Twitter the time to fact-check what was just posted? There is no difference between someone stating something on Twitter, all the while knowing it will get instant credence, and someone yelling out "FIRE" in a movie theatre.
Xiong Chiamiov (USA)
@merc A 12-hour delay would make Twitter useless, as the entire point of it is for breaking news and ideas to emerge in a way that could not happen with such a delay. Even aside from that, requiring human review of every tweet would immediately bankrupt the company, and any other you applied it to.
merc (east amherst, ny)
@Xiong Chiamiov Then so be it. Who is Twitter to have put in place a platform that allows data to flood our information highways that could be, and in many instances has been proven to be, false? Twitter was designed for the transmission of "Breaking News"? Please. We have thousands upon thousands of e-screens in place throughout the world operating within guidelines established for responsible truth-telling, for the dissemination of news in a way so much more responsible than the unidentifiable, anonymous keystrokes you're touting. Twitter's paradigm needs to be redesigned to include a system of checks and balances to prevent our information from proceeding down a disinformation highway. Twitter's become a vehicle whose wheels have gone square, making the ride once envisioned impossible. Twitter as it stands requires a 'perfect world' scenario, and that will just never be. The End.
Zachary Robinson (Portland, OR)
A “12 hour delay so Twitter can fact-check the post” is an utterly laughable concept for a number of reasons. The logistics don’t make any sense. No less than five hundred million tweets are sent on Twitter every day. Let’s assume that it takes one minute to fact-check each tweet. If Twitter filled an entire skyscraper with an army of 10,000 experienced fact-checkers working 24/7 with no breaks, it would take over a month to fact-check one day’s worth of tweets. You’d need an entire small city of fact-checkers—at least 300,000—to even keep up. Additionally, what about tweets about people’s personal lives, which form a good chunk of Twitter? If someone tweets “My flight just got delayed,” should Twitter require an airline confirmation email to be attached to the tweet and a photo of the airport departures board? We can’t have people spreading fake news about JetBlue’s reliability, after all. And what about personal opinion? If someone tweets “my ex-girlfriend is a horrible person,” but said ex-girlfriend actually volunteers at animal shelters and often helps elderly people cross the road, should Twitter remove the tweet? What about satire or jokes? What about tweets from emergency services regarding a natural disaster, or even just from a company about their website being down, that might not be able to wait twelve hours for fact-checking that there is indeed a wildfire and people should be evacuating their homes? It’s ridiculous.
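For what it's worth, the staffing arithmetic above checks out. Here is a quick back-of-the-envelope calculation in Python, using the same figures the comment assumes (500 million tweets a day, one minute per check, 10,000 reviewers working around the clock):

```python
# Back-of-the-envelope check of the fact-checking numbers above.
# The inputs are the comment's assumptions, not measured figures.
tweets_per_day = 500_000_000
minutes_per_check = 1
review_minutes_needed = tweets_per_day * minutes_per_check    # 500M minutes/day

# 10,000 fact-checkers working 24/7 with no breaks:
minutes_available_per_day = 10_000 * 24 * 60                  # 14.4M minutes/day
days_to_clear_one_day = review_minutes_needed / minutes_available_per_day
print(f"{days_to_clear_one_day:.1f} days per day of tweets")  # ~34.7 days

# Headcount needed just to keep pace (round-the-clock equivalents):
checkers_needed = review_minutes_needed / (24 * 60)
print(f"{checkers_needed:,.0f} fact-checkers")                # ~347,222
```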
Elizabeth (Indiana)
It is high time that Section 230 be clarified to apply only to passive intermediaries and not to content providers like social media. However, it should be done in a way that still allows posting without prior censorship. Here are ideas for how to do this that could help bring us back to the days before trolls and scammers ruined the internet. Put a threshold on content shares/likes/retweets/etc. under which the social media company has no liability: once a post has more than 500 views or whatever, it is considered "published" by that company. Maybe make an exception for posts that are flagged early. This would reduce the burden of having to monitor so much content. Another idea: take away immunity for anonymous, untraceable speech. Verified accounts, or even pseudonymous accounts that the company can trace to a specific person, are responsible for their own posts and liable like any other publisher; anonymous posts are the responsibility of the social media company.
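Mechanically, these rules are easy to state. A minimal sketch in Python, assuming one reading of the ideas above; the 500-view cutoff, the field names, and the handling of the "flagged early" exception are all illustrative, not part of any actual proposal:

```python
from dataclasses import dataclass

# Illustrative cutoff from the idea above; the number is arbitrary.
PUBLISH_THRESHOLD = 500  # views

@dataclass
class Post:
    views: int
    flagged_early: bool     # reported before it spread widely
    author_traceable: bool  # verified, or a pseudonym the platform can trace

def platform_is_liable(post: Post) -> bool:
    """One reading of the proposal: anonymous, untraceable posts are the
    platform's responsibility outright; traceable authors carry their own
    liability until a post spreads widely enough to count as 'published'
    by the platform, with an exception for posts flagged early."""
    if not post.author_traceable:
        return True
    if post.flagged_early:
        return False
    return post.views > PUBLISH_THRESHOLD

# An anonymous post is the platform's problem from view one:
print(platform_is_liable(Post(views=3, flagged_early=False, author_traceable=False)))      # True
# A traceable author's viral post crosses the 'published' threshold:
print(platform_is_liable(Post(views=10_000, flagged_early=False, author_traceable=True)))  # True
```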
BeenThere (USA)
@Elizabeth -- Those are two really good ideas. I'm curious if they're original to you, & wondering if there is somewhere (besides these comments!) where serious proposals to amend the Communications Decency Act are discussed. I know Senator Mark Warner has put out a white paper. This issue is personal to me. We are being defamed by someone we have a court order against who is quite literally in hiding. He is defaming not only the large number of people he is mad at, but their children & other relations, so a google search of the children will turn up these vile fabrications. The California Supreme Court recently interpreted the Communications Decency Act in a case involving a Yelp review which was held to be defamatory (not a difference-of-opinion situation, but entirely false facts). The court held that only the poster could be held responsible & made to take it down. If you can't find that person or compel them to comply -- well, Yelp fought for & won the right to keep the defamatory review up. The US Supreme Court declined to review. Tech companies don't even cooperate with identifying anonymous posters. We know who is defaming us, but without the cooperation of tech companies it can be hard for victims of defamation to obtain evidence which will stand up in court. We don't want money from the tech companies, but at a bare minimum they should have to take down defamatory material & remove it from search results.
AnObserver (Upstate NY)
We need to realize that websites like 4chan and 8chan have been able to monetize hate on the internet. These hate-filled websites generate millions of clicks, and each of those clicks represents advertising revenue for the owners. That's not speech, that's commerce.
Spook (Left Coast)
@AnObserver And those institutions and opinions are every bit as valid as any opposing ones. That's something the pro-censorship crowd can never understand, and why they should be fought at every turn.
Patrick (Ithaca, NY)
@AnObserver Given that ad clicks are the funding model for websites from the NY Times, on which we post this, to, yes, even porn sites and other dark elements on the underside of the 'net, your point is noted; but since it does not differentiate those sites from any other website, changing the rules in this regard would affect a much larger group far beyond the sites you mention.
arthur b (new york)
Censorship won't work because it would be a double-edged sword. What will work to some degree is financial pressure on advertisers. But there is only one way to ensure that our better angels are encouraged and haters' attitudes are kept out of the mainstream, and that starts at the top.
Speakin4Myself (OxfordPA)
Internet companies have claimed that they are not broadcasters, that they are merely a pipeline. In that case a better analogy would be to compare them with the postal service, which is regulated. Postal inspectors have enormous power under the law, from content restraint to enforcement against mail fraud and similar crimes. First-class mail is generally intended as a private communication between two or more people, and as such has 4th Amendment protection. That said, postcards and bulk mail that can be read without opening are not protected from postal regulation. The trial over the ban of James Joyce's novel Ulysses was based on the postal codes. It is now all too clear, partly because of this newspaper's coverage of privacy issues, that the Internet is being used not simply as a pipeline for private communication, but as a worldwide broadcast medium for all sorts of content, good and bad. Further, the so-called pipeline companies are reading our mail and saving the contents for their own purposes, which are often at odds with our privacy and our best interests. To pretend that this is the same as the context for broadcast networks through the years, from CBS to Fox, is absurd. There has never been a public-interest licensing set of requirements for either Internet providers or Internet users. They do business in and with our lives. 'A well regulated militia' and well regulated businesses, please.
Melquiades (Athens, GA)
There is a much larger accountability problem here, for which the internet itself (and therefore the laws about it) is responsible: online means anonymous. Many will react to that statement dismissively, but hold on --> online means anonymous. Before the internet (telecommunication, more broadly), communication was limited to contexts where the parties KNOW SOMETHING ABOUT THE PHYSICAL LOCATION OF WHERE THE OTHER PARTY CAN BE FOUND. Sure, a person could snail-mail a letter with no return address, though generally that meant the recipient took it a lot less seriously. But think of it this way: Mr Smith has a business that produces some good or service. In the OLD days, that business had an address, and if a truck from that company ran over your dog deliberately and the driver flipped you off while fleeing, you could find that address and, if nothing else worked, go there and punch someone in the nose. Now a site (not just 8Chan, but also CloudFlare) can be virtually anonymous: you have a beef with some part of the system, and it's 'virtually' impossible to find out WHO you wish to punch in the nose. I am not suggesting fisticuffs galore would solve this, but the POSSIBILITY of fisticuffs would definitely make the sources a little more circumspect about what they say. The media's reporters, and even more so the police, can penetrate this anonymity somewhat. But for the average joe, the information age means relative anonymity for very powerful media, and that means they can ignore the repercussions.
Gendun (Berlin)
Such arguments proceed from a rosy picture of the end result, conjured in general terms, but the devil, as they say, is in the details. There are obvious First Amendment issues at stake that are not trivial, and blithely suggesting that online platforms like Facebook can deal with them with technology ignores two critical points: 1) Facebook and YouTube are under constant intense criticism for their lapses, and have repeatedly stated how difficult and expensive this problem is to solve; and 2) what do you recommend for content providers who do not have many millions of dollars to invest in AI and armies of paid content screeners? "I believe we can all agree that mass murder, faked videos and pornography should not be broadcast — not by cable news providers, and certainly not by Facebook and YouTube." No, I do not agree with that at all. There are times when violent events can and should be allowed because of their historical value or their importance as news, and "pornography" is notoriously difficult to define. "Faked videos" are as problematic or worse: how do you allow for nuance in identifying works of art or satire?
David (NJ)
Financial services firms are accountable to the government for having numerous anti-money-laundering procedures in place when bringing "content" on board and for monitoring that content; not to stop criminality, but to assist law enforcement. Failure to have such procedures in place, with active supervision, has resulted in substantial monetary fines. No safe-harbor protection there. Societies decide what benefits them and what threatens them. Perhaps a political platform on how the citizenry feels about safe harbor laws, specifically on the subject of active content intermediaries, is called for.
Medes (San Francisco)
This would just turn the internet into traditional media like TV and Magazines. I don't know anybody who would want that except those in old media who dislike it when the rabble talk to each other instead of tuning in (and paying for the "privilege"). Timothy McVeigh didn't need an image board to put him on the wrong path.
David (NJ)
@Medes Perhaps, but if online media had existed in his time, it's a fair conclusion that we'd have had many Timothy McVeighs. Societies decide their tolerance of threats. And if an individual feels differently about those parameters, he or she has a choice to stay or leave.
Gary (Australia)
To repeal or amend S.230 as it is would just make online services more akin to newspapers, which, of course, is what the 'old' media want. To make internet providers responsible to the fullest extent would require them to have an army of editors or sub-editors to review everything before it was posted, which in effect would remove the benefits of speed of interaction. Having said that, I'm not sure of a better way to let users enjoy the speed of interaction on the web while still preventing or removing 'hate speech'. So full marks to Mr. Taplin for making the effort!
Henry Edward Hardy (Somerville, Mass.)
Repealing Section 230 to make internet service providers responsible for the content of posts which traverse or reside on their networks *would* make Verizon and other common carriers vulnerable. The result would be universal, government-mandated censorship and prior restraint on all modes of electronic communication, including the almost-all-digital phone system. Is that what we want?
spiritplumber (san rafael)
@Henry Edward Hardy Hosting content is different from routing content, though. I don't think lawmakers would have understood that in the 1990s, but today.... Ah, who am I kidding, they've gone back to "the cyber" and blaming Mortal Kombat for shootings.
Ferdy (Earth)
@Henry Edward Hardy The article makes a distinction between information conduits (ISPs) and curators like Youtube and Facebook. If an entity decides, for profit, to give up neutrality in the way it makes information available, it shouldn't be protected by Section 230.
Rebecca (US)
@Matt I know from 40 years of experience that software developers can do much more to stop the extreme negative applications of these software systems. And the companies that make and sell this software can put in place policies that would limit the huge flow of hate running through their "tools". It's not like a car or a hammer, and I expect you know this. Yours is an old, familiar attempt by people in tech companies to avoid taking any responsibility for powerful products that are having a profound negative effect on human communication.
Jeff Rose (Colorado)
No, actually we cannot all agree. Your proposal displays a naive lack of understanding of how difficult it is to filter such vile content without severely hindering valid discourse, and it opens the door to controlling speech that I would not be comfortable with. (See recent examples of bias in machine learning.) It would also create an undue burden for startups and new platforms that won’t have the resources of a Facebook or YouTube to either deploy machine learning or hire office parks of cheap labor. Let’s focus on restricting access to firearms and broadening children’s worldview via education before we start blaming platforms and limiting speech.
Alex (DC)
@Jeff Rose How difficult is it to filter "more dead kikes" or "more dead spics"? Both of these are strewn all over 8chan, and they seem pretty egregious to me... yet, no filter can find these? REALLY?
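To be concrete: a verbatim-phrase filter is a few lines of code, and any first-year programmer could write one. A minimal sketch in Python, with placeholder terms standing in for the actual slurs (the blocklist contents here are hypothetical):

```python
import re

# Hypothetical blocklist; a real deployment would hold the actual phrases.
BLOCKLIST = ["slur_a", "slur_b"]

# One case-insensitive pattern with word boundaries, so that innocent words
# containing a blocked string (the classic "Scunthorpe problem") don't match.
pattern = re.compile(
    r"\b(?:" + "|".join(re.escape(term) for term in BLOCKLIST) + r")\b",
    re.IGNORECASE,
)

def is_flagged(post: str) -> bool:
    """True if the post contains a blocklisted phrase verbatim."""
    return bool(pattern.search(post))

print(is_flagged("Nothing objectionable here."))  # False
print(is_flagged("... SLUR_A ..."))               # True
```

The genuinely hard part is everything past verbatim matching: misspellings, coded language, text inside images, and context such as quotation or news reporting, which is where the disagreements elsewhere in this thread begin.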
NeilG (Berkeley)
The First Amendment rules for defamation by newspapers provide a good model for content rules for online sites. Newspapers are liable for defamation only when they publish something with "actual malice", which covers not only publishing something they have reason to know is false, but also publishing without exercising appropriate diligence. Section 230 of the CDA has relieved web sites of due diligence. Repeal or replacement of that law would be no further intrusion into legitimate free speech than print media suffer now. In the meanwhile, I have to laugh that we, the people, who may be subject to attacks provoked by domestic terrorist websites, currently have no way to limit those websites, but Johnson & Johnson does. I have to laugh because otherwise I would cringe in fear of what some lone wolf with an assault rifle can do, and even more what an organized group like the Proud Boys could do if they can continue to recruit and arouse hatred without limit on the internet.
Ann (Michigan)
@NeilG. It would be great if we could treat these social media sites as newspapers, but they are simply too vast. That is an argument for breaking them up and making them more human scale so that human judgment might prevail.
michele (syracuse)
One of my friends is a photographer who creates art/fantasy-based digital photos. She got banned from Facebook when she advertised unicorn photos, on the grounds that FB doesn't allow selling animals. So yeah, the algorithms need work...
AnObserver (Upstate NY)
@michele To an extent it's to Facebook's advantage to take this to absurd levels: once you demonstrate that an environment is impossible to regulate, people stop asking. It's similar to the old web-filtering software that schools were required to use, which blocked websites on breast cancer along with porn sites. None of these companies actually WANT to police this content. Their main desire is to create a facilitated-exchange model with no boundaries whatsoever, because boundaries and management cost money. It is the Internet, too; there is no reason all the components can't be placed offshore, out of reach.
expat (Japan)
Given that the US has already begun extradition proceedings against Kim Dotcom for running a website that allowed users to upload and download files of dubious provenance that threatened the finances of Hollywood studios, it should be possible to do something to protect the lives of innocent people. Theoretically.
Matt (New Jersey)
As a software developer with a degree in computer science, I think this is one of the most dangerous ideas I have ever read. Section 230 of the CDA is the most important law in technology for ensuring freedom of speech online; it is far more important than Net Neutrality, which would become a moot point if CDA 230 were repealed. Instead of "holding internet companies accountable", you would essentially bankrupt all of them, as every site that allowed user interaction would be sued into oblivion... and not just Facebook and Reddit and all those sites you don't like, but thousands of blogs and small companies too. You would make jobless tens of thousands of software developers and cripple the information-age economy. Furthermore, software developers are not gods. The software we develop cannot control the actions of another person. If someone drives drunk and kills someone, do we hold Ford responsible? No. You hold the drunk driver responsible. It's unbelievably unfair to hold technologists responsible for someone else's words and actions over which we have little or no control. For more information on the vital importance of CDA 230, I urge readers to visit and support the Electronic Frontier Foundation.
Nicola (Switzerland)
@Matt Following the comparison with the drunk driver, the solution would be easy: a) no more anonymous content on social media sites, b) a limit on internet consumption for every user, and c) an exam and registration before people can use certain parts of the internet. Because that's what we do with drivers: they have a license, they are registered with a license plate for their cars, and the amount of alcohol they are allowed to consume is limited. I have heard that comparison many times, and generally I agree: the USA does not have a gun problem, and the millions of gun owners are not the problem. The USA seems to have a culture that actually admires violence, at least when it is for a good cause, which translates for some into real violent actions.
Matt (New Jersey)
@Nicola - First off, the drunk-driver analogy was just that: an analogy. If someone bashes someone else's skull in with a hammer, do we hold the manufacturer of the hammer responsible because its tools were misused? No. Any tool can be misused by those with bad intentions, and that includes social media sites. Those people need to be held accountable, not the toolmaker whose work has been misappropriated. Secondly, the First Amendment protects anonymous speech. Censorship is the tool of the autocrat. In a free society, the best way to counter bad ideas is to engage with better ideas... which is exactly why I made this post: to bring a more informed perspective to Mr. Taplin's very misguided (but protected) ideas.
Eric May (Beaulieu-sur-Mer, France)
@Matt - You are absolutely correct. The business model of the tech giants will change when they are recognized as publishers, not platforms, and can be held liable for the damage they cause. The argument that platforms are neutral providers with nothing to do with the content they deliver gets weaker as the number of people profoundly hurt, their lives destroyed by that content, keeps growing. Who, after all, designs and optimizes the robots (algorithms) that identify and accelerate the most shareable content, no matter how dangerous or perverse? Do tech companies have any responsibilities at all when the content they enable causes harm?
Patrick (Washington)
Eliminating the safe harbor will hurt newspapers, untold numbers of bloggers, and others. It will force these publishers to shut down commenting to guard against the risk of lawsuits, and it will create an entirely new litigation industry. This is a horrifying idea. Why not attack the problem at its root: the widespread availability of high-powered weapons that have no purpose but to kill? The hate messages and forums will still get out; despite severe laws against bootlegging movies and music, the practice is widespread and unstoppable. If you get rid of the safe harbor, you might as well shut down community news sites. I can see the Republicans championing this.
N. Cunningham (Canada)
@Patrick Not at all horrifying. You're panicking. Try instead to look at the past, the pre-internet era not that long ago, and at the worst excesses of the internet era, and you'll see the problem. It's real, and it's what's really horrifying. The tech community began with unfettered idealism and far too little genuine, profound thought. Now the smart ones are rectifying the damage done. The excesses of the industrial revolution were eventually curbed and the world became a better place; it's early days yet, but the excesses of the digital revolution will be curbed too. It's a good thing, not horrifying. May you live to see and recognize it.
michele (syracuse)
@Patrick How could eliminating it hurt newspapers, when it already doesn't apply to them?
Patrick (Washington)
@N. Cunningham I've been running hobby blogs for the past 15 years. If you are operating a small community news site, the safe harbor protects you from people who say libelous things, make wild accusations, knock a business for lousy service. Rare, but they happen. These types of comments may violate your comment policies, and you can clean them up after. Eliminate safe harbor and my best option is to shut down comments and direct people to Facebook (which has already gobbled up most of the local ad money). Otherwise, I can be sued along with the commenter. That's what will happen if 230 goes. You shut down free discussion.
Lilly (New Hampshire)
Any other Neal Stephenson fans? (Spoiler alert for Fall.) I agree with a premise of his most recent book: the internet, which in its chaotic, unverified state 'Dodge' refers to as 'the Miasma', must begin to shift toward thoroughly verified identities and edit out the inaccuracies that currently overwhelm society.
tro -nyc (NYC)
Two quick notes. There is a line where one's thoughts cross over from free speech to hate speech; it may be hard to define, but we know it when we see it. The other point: unless the author has information he did not share, it is possible that Johnson & Johnson's decision to pull tainted, as well as untainted, product from the market was not driven only by liability issues.
David (Los Angeles)
This is the first generation of young, radicalized hatred and angst, fueled by online anonymity, where a bubble of like-minded thinkers can validate each other's opinions with no challenge or debate. What do you expect to happen? It's no coincidence these terrorists are 20-25. They grew up in a Wild West version of a new technology left to run amok, partly because most of the world didn't understand it, and partly because those responsible for controlling it couldn't catch up in time. The damage of these online communities cannot be overstated. I say this as someone who grew up alongside them (I'm 27). I fear we won't see real change until my generation is placed in positions of power. We understand it on a level others simply cannot.
Brian (Ohio)
@David What would you replace the First Amendment with? A European model is obviously doable, but do you think anything would be lost? What ideas, words, or expressions should be forbidden?
XY (NYC)
This essay basically argues that freedom and ideas are dangerous and that absolutely nothing is more important than safety; hence we should support censorship of unpopular ideas, except we shouldn't call it that. In the real world today, ideas are not spread much by the press (which is dying) or "the news" (does anyone actually watch TV news anymore?) or even books (I haven't seen anyone read a book in years), but online, on various websites and, perhaps most importantly, on social networks. I guess these authors believe in the wisdom of our political leaders and government to always make the right calls when it comes to censorship. I guess they forget our government originally felt blacks should be slaves, women shouldn't vote, and being gay was criminal. Not a good track record, if you ask me.
Ron Landsman (Garrett Park, Maryland)
@XY Hmmm, haven't seen anyone read a book in years? You're revealing more about yourself than you intended. The issue isn't censorship; it's imposing liability on publishers. The question is: are Facebook et al. more like a newspaper, which chooses what to print, or a telephone common carrier? With their algorithms and selection, they lose any claim to be simple common carriers. Imposing liability is value-neutral in the best First Amendment sense. A great idea that cuts through a lot of noise.
Matt (New Jersey)
@Ron Landsman - The issue is exactly censorship. If websites became liable for anything their users posted, they wouldn't allow users to post anything. And by the way, phone calls are routed by custom algorithms. The days of switchboard operators are long gone.
Peter Dale (Detroit)
@Matt Your claims are plainly wrong. Websites would not become liable for "anything": they would need to show a good-faith effort to curtail certain things, which is certainly doable. And the routing of telephone calls does not call for algorithms; it is done by automatic binary calculation, a sped-up version of what switchboard operators did long ago.
James Ribe (Los Angeles)
Television is regulated by a government agency whose mandate is specifically defined by statute and whose governance is subject to accountability through the political process. What you're proposing, though, Mr. Taplin, is tort liability. TV is immune from tort liability for its content because it is supervised by a government agency. Under your proposal, YouTube would not be immune from tort liability; in fact, exposing it to tort liability is your proposal. But, as the Supreme Court noted in New York Times v. Sullivan, tort liability is open-ended, and as such becomes an effective means of political censorship. I strongly suspect, Mr. Taplin, that that is your real purpose.
DavidK (Philadelphia)
Opening social media providers to lawsuits is the worst possible way to regulate them. It puts enforcement in the hands of anyone who feels aggrieved about a snide comment on Facebook or a YouTube video they think invades their privacy or incites violence (even if no one else would agree). As the internet shows us every day, there are a lot of nuts out there, and even the most frivolous lawsuits cost tens of thousands of dollars to defend. Social media would be forced to post only content that couldn't possibly offend anyone; in other words, happy birthday greetings and cat videos. Even then, PeTA would probably sue for invasion of the cat's privacy.
N. Cunningham (Canada)
@DavidK and how would you regulate them? Or do you prefer anarchy and human misery?
Mike McGuire (San Leandro, CA)
If the tech companies can target advertising and get rich off doing so, they can have human beings read what they, yes, publish. This is how you make the market -- in this case, the fear of being sued -- work in society's interest, not against it. I'm sure The Times and other newspapers could argue (though they haven't) that they can't possibly read and be responsible for the millions of words that go into a newspaper every day, but somehow they get it done, because it is very much in their financial interest to do so. It's time for the Internet companies to at least stop compounding the problems they introduced to society in the first place.
TL (CT)
The thought police are out in full force. They loved these platforms when the Arab Spring kicked off, and for all the positive calls to action. Now they want to shut them all down because Hillary lost an election. They act as if questionable content was nowhere to be found pre-Internet, as if there were no Penthouse, Faces of Death videos, or Turner Diaries before the Internet. In fact, those existed and were legal. The Internet was built on content you couldn't find in a library. Blaming Facebook or Google for society's ills may make life easy for politicians, but it's not fair or accurate. The economic incentive increasingly compels the clean-up of content on these platforms. It may be an imperfect balancing act, but it is one that has been underway for years. And the sad truth is that the people looking for the Christchurch massacre video didn't have it foisted upon them; they sought it out. Vulnerable minds will always find a trigger somewhere. By all means, let's have the platforms do better, but perfection is impossible. Shutting down these platforms for indirect roles in these incidents ignores the overwhelming social good they do. They deserve better than to be treated as political punching bags and shakedown targets - and I don't even agree with their political bias.
JMC (Lost and confused)
@TL Just because people seek something out doesn't mean that society should provide it. People seek out rockets, bombs, crack, WMD; that doesn't mean society should enable them. Because something exists in some dark corner, society doesn't need to give it a stage or a place of shelter. Freedom without responsibility leads to chaos and mayhem; freedom is not a return to barbarism. The sad truth is that Big Tech can do a lot better when it has financial liability. Try posting a copyrighted song on YouTube and the algorithm catches it in less than a second. Same with prohibited items on eBay; they get caught immediately. Why? Because laws force them to act and there are significant penalties.
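The copyright example also shows why that problem is easier: matching a known, fingerprinted work against an upload is a lookup, not a judgment call. A toy sketch of the idea (the fingerprint function and registry here are stand-ins; real systems such as YouTube's Content ID use perceptual fingerprints that survive re-encoding, not raw hashes):

```python
import hashlib
from typing import Optional

# Toy stand-in for a content fingerprint. Real matching systems derive
# perceptual fingerprints that survive re-encoding and edits; a raw SHA-256
# hash matches only byte-identical files.
def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical registry of fingerprints submitted by rights holders.
REGISTRY = {
    fingerprint(b"<bytes of a registered song>"): "Song X / Label Y",
}

def check_upload(data: bytes) -> Optional[str]:
    """Return the rights-holder record for a matching upload, else None."""
    return REGISTRY.get(fingerprint(data))

print(check_upload(b"<bytes of a registered song>"))  # Song X / Label Y
print(check_upload(b"an original home video"))        # None
```

Deciding whether a never-before-seen post is hate speech offers no such registry to match against, which is why the comparison only goes so far, even if the point about financial incentives stands.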
WV (WV)
How would, say, something like art museums or art galleries fare under a safe harbor law? They only provide a location at which an artist can display art, so galleries and museums would not be responsible for the content of the artworks they allow to be displayed. And yet we as a society DO hold them responsible for artwork we deem inappropriate or offensive, and as a result many museums and galleries scrutinize potential exhibitors more thoroughly now. Some are willing to take risks (as a means of protecting speech) while others are not.
michele (syracuse)
@WV A gallery is not an online social media site. It would be totally unaffected by Taplin's proposed change.
Brian Harvey (Berkeley)
"Verizon, for example, is a passive intermediary platform: It makes no attempt to edit or alter the bits flowing through its fiber optic cables." This is not true. Verizon wants to be allowed to determine the speed at which it transmits packets based on their content. It reads the content to enable targeted advertising. The point is *NOT* that Verizon is the same as 8chan! The point is that this op-ed draws the wrong line. Predigital communications law had this right: It established two categories of information processor. One was the "common carrier." Common carriers may not open messages, or if (like telegraphy) the medium requires their employees to read the messages, they may not use that information for any purpose other than the transmission of the message. They must offer their services to all customers on an equal basis. And, in return, they are protected from liability for the messages they transmit. The other category is a publisher, which decides what to transmit, and does not have that protection. This is an easy distinction to draw. The predigital phone company, the post office, FedEx, Western Union -- those are common carriers. TV or radio stations, the New York Times -- those are publishers, with liability for their content. Digital media wanted to have it both ways, and their lobbyists convinced Congress to let them. If you give Verizon safe harbor *without requiring it to follow the common carrier rules*, it's hard not to allow 8chan too.
michele (syracuse)
@Brian Harvey By your definition ("they may not use that information for any purpose other than the transmission of the message. They must offer their services to all customers on an equal basis") Google, FB, 8chan, etc are *not* common carriers. They do not simply transmit the information; rather, they use the information for many other purposes, such as advertising, data mining, and customizing content.
Jim (Moffet)
YouTube is not hard to replicate from a technical standpoint; you could spin up a rival with a couple million bucks. The reason YouTube effectively has a permanent monopoly on video sharing is that it costs a couple hundred million dollars to develop automated copyright filters, without which you effectively cannot operate. Moderation of billion-person communication networks is the hardest technological problem humanity has ever attempted to solve. It will take orders of magnitude more money and hours of labor than a manned mission to Mars, and it literally may not even be possible to strike a balance between free speech and hate speech using machine learning. If you have evidence to the contrary, I and the rest of the ML community would love to see it. What gives you cause to think a company like Facebook, which literally hasn't changed its product interface in six years, is capable of solving this problem internally... money? We shouldn't need to discuss how problematic that assumption is. If you think these platforms are doing a bad job now, try locking in their monopolies until the end of time by raising the barrier to entry to the insane degree this article suggests. This is an incredibly hard problem with very little existing market incentive to solve it. It is going to require massive government investment in fundamental research and the construction of a competitive market for large-scale moderation tools. You can't trust FANG to do this well.
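To make the free-speech/hate-speech balance concrete: a moderation model outputs a score per post, and someone has to pick a removal threshold. A toy sketch (every number here is invented) of how any threshold trades wrongly removed speech against wrongly kept hate:

```python
# Toy illustration of the moderation-threshold tradeoff. Each post gets a
# model score in [0, 1]; the second field is ground truth (1 = hate,
# 0 = legitimate). All numbers are invented for illustration.
posts = [
    (0.95, 1), (0.80, 1), (0.70, 0),  # 0.70 is heated but legitimate speech
    (0.60, 1), (0.40, 0), (0.10, 0),
]

for threshold in (0.50, 0.75, 0.90):
    removed_legit = sum(1 for score, label in posts
                        if score >= threshold and label == 0)
    kept_hate = sum(1 for score, label in posts
                    if score < threshold and label == 1)
    print(f"threshold {threshold:.2f}: {removed_legit} legitimate post(s) "
          f"removed, {kept_hate} hateful post(s) kept")
```

No threshold zeroes out both columns, and at a billion posts a day even a small error rate on either side means millions of mistakes daily.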
Randall (Portland, OR)
@Jim Given the choice between letting Alphabet stockholders make money hand over fist and not getting murdered by a white supremacist, I think most of us will choose not getting murdered. YouTube is not an essential part of life; it's an entertainment channel. If it goes away because we forced it to be responsible: oh well.
The Observer (In fair Verona, where we lay our scene)
No, don't ban ANY speech at all. Require the platforms to have absolute certainty about who posted what, and if people are wronged by posted speech, we already have a tort system under which bad actors can be sued for damages or, rarely, busted for existing crimes. Attempts by our hard-Left technology class to police individuals' speech will NEVER work and are the joke of the internet: the exact same things, typed about our general culture or the GOP, get a pass, while the same things said about the client groups patronized by progressives get writers banned. All that has to be done away with. And all the big tech companies need to be cut up into at least 5 parts running east-to-west across the country, with California ending up in at least three parts of each of these four: Google, Facebook, Twitter, and Amazon. Amazon can't even tell if it's selling real or bootlegged merch, and the speech police at Facebook and Twitter are so stressed they are falling ill on the job.
N. Cunningham (Canada)
@The Observer yes, do that. But where’s the will? Tech community has no stomach for it.
BeenThere (USA)
I can tell you that the tort system doesn't work for internet defamation, because we're living that right now. Before the Internet, how could someone have managed to publish defamations to the entire world? It would have taken enormous resources. The defamer would have been findable. We have a judgment against someone who is quite literally in hiding. He is still publishing vile fabrications not only against the people he is mad at, but their children & other relations, whom he has never met. Defamations which will turn up on a google search of those children. The California Supreme Court recently interpreted the Communications Decency Act & the US Supreme Court declined to review. A Yelp review was found to be defamatory (& this wasn't a "difference of opinion" type situation, these were false facts). Unless the person defamed can find the poster & force them to take the review down, Yelp has no obligation to remove it. So it stays up. Nor do tech companies cooperate in helping identify anonymous posters. We know who is defaming us, but in many cases without the cooperation of the tech companies it is difficult to get the kind of proof which will hold up in court. Law enforcement rarely steps in, the scope of the problem is too overwhelming. Which is, again, related to the way the internet has made it so easy to do enormous harm. We don't want money from the tech companies, we just want them to have to take the defamation down & remove it from search results.
DL (Berkeley, CA)
The problem with banning is the same as with other illegal stuff: it goes underground and festers there in a much worse form. Do you really want these people to move to the Dark Net?
Lilly (New Hampshire)
There is a level or two of technical facility required to access the dark net, and one must use Tor(?), which is an identifiable red flag for parents to monitor. That may make it less accessible to the average teenager who might be vulnerable to such negative influences, at least. Maybe?
DL (Berkeley, CA)
@Lilly Unfortunately these people derive such huge value from being a part of the conspiracy and acting like the "underground" that, I am afraid, the benefits for them will outweigh the costs of access and adaptation.
Iridiumred (Lake City, Iowa)
@DL Yes, actually, I do.