The huge search and social media platforms of the internet are reaching an inflection point. For decades they have been able to deflect attention from their role as content providers. The issue is now front and center in our national debate, and the long-reigning status quo is likely to give way to a new mode of operation, if not a new consensus.
Current political polarization and the murderous result of an ocean of bald-faced, unsupported, easily refutable lies have turned an otherwise dry topic – where do big companies draw the line when deciding which third-party information to host on their systems – into a crisis for our democracy. We have finally reached the juncture where the promulgation and repetition of lies have created such an obvious and attributable result that we can no longer ignore the causes. As in the production of sauerkraut, failure to scrape all the scum off the top can render the entire concoction sickeningly poisonous.
Make no mistake, the large search and social media companies have always moderated content. The easiest place to see this is their censoring of heavily sexualized content. Google's and Bing's search algorithms and the Facebook and Instagram rules restrict pornography and other content they believe many people would find objectionable or inappropriate for children. If they didn't, their systems would be swamped by sex advertisements and solicitations for the deeper debasements of the human id. How do I know this? For one, Google and Facebook both tell us so. For another, I was on the content restriction team at CompuServe – a digital media company that contracted with third parties for content and provided digital spaces for people to congregate. I saw firsthand that if sexual content isn't moderated or cut entirely, demand for it will overwhelm the rest of the content. People's desires may be uncomfortable to discuss, but they are predictable.
In fact, the kinds of statements inciting violence that were recently banned have violated Twitter's rules for ages, and Twitter has previously dropped accounts that advocate hatred and violence. However, the stated community standards have not been enforced consistently, and social media sites have financial incentives to prioritize controversial and incendiary content on their services – experience has demonstrated that people spend more time and energy on the sites when they are angry or upset.
Manipulating people's emotions and polarizing entire populations has been great business for Facebook and Twitter, and it was only in the aftermath of the obvious Russian manipulations leading up to the 2016 elections that a significant percentage of the general public in the U.S. considered calling social media companies to account for the results of their policies of incitement. If you remember Zuckerberg's reactions at the time, he seemed as surprised as anyone that his networking company, which had been started to let college students share thoughts with each other, could sway elections and be manipulated at serious social cost, though he later acknowledged that naivety.
So the coming changes in content moderation are a matter of prioritizing social responsibility over the platforms' economic interest in polarization and emotional manipulation. There have always been rules here, and the social media companies have always given lip service to enforcing community standards, but now the community may coerce the companies into taking those standards – and their place in our society – seriously. Facebook has acknowledged that its platform has been used to incite and encourage violence in some instances.
Europe, with different laws and social priorities regarding freedom of speech, started this discussion in earnest with the big tech companies. When Google executives were criminally charged in Italy for not removing illegal content and a CompuServe executive was arrested in Germany for allowing illegal goods to be sold online, U.S. companies learned that they needed to consider the differing community standards of the countries where their customers resided. Instagram and Twitter operate with greater content limitations in majority-Islamic countries and other more restrictive societies. They made these adjustments overseas, so why can't they make appropriate adjustments to their content moderation in the U.S. and Canada? Google has made content accommodations for the "Right to be Forgotten" in the European Union, so we know that such accommodations can be made to meet the standards of important communities.
We now know that encouraging conflict and distress creates problems not only on a micro scale for individuals – bullied teens, victimized women, sufferers of depression and anxiety – but also on a macro scale for our society as a whole. So we need a responsible discussion of how digital content management can be adjusted for the benefit of our communities, even if the adjustments harm the profits of big tech.
It is time for a reckoning with the power and incentives behind digital content control, but this should not be driven by the grievances of one political party or the other. It should be driven by a desire to promote the best in our society while reducing manipulation, division, and hate.