Twitter’s decision to take down several posts related to the farm protests, and the criticism surrounding them, has brought the spotlight back to a crucial debate over how social media companies are classified: they claim to be technology platforms, but the way they moderate their content – and curate it by way of algorithmic prioritisation – resembles the editorial decisions exercised by media companies.
On Thursday, for instance, the social media company took down tweets by Bollywood actor Kangana Ranaut. Earlier in the week, it temporarily blocked some accounts in India before unblocking them on the grounds that the company believes in “protecting public conversation and transparency”, according to a person familiar with the matter who asked not to be named. “We have taken action on tweets that were in violation of the Twitter rules in line with our range of enforcement options,” a Twitter spokesperson said in a statement.
Growing intervention by tech ‘platforms’
The classification of social media companies is crucial because they currently fall outside the regulatory framework that applies to media companies, which requires the latter to follow guidelines on speech and expression – a structure that experts have long held is crucial because mass media exerts a large influence on politics and culture.
Not only do these companies decide who gets to post what, they also influence and regulate which content is amplified and which is not.
The companies – Facebook, Twitter, YouTube and others – base their characterisation as technology platforms on two main pillars: that the content they host is not generated by them, and that code – not human intervention – determines how it is displayed. Their moderation is based on self-drawn guidelines meant to combat hate speech and unlawful media.
At the core of this characterisation is Section 230 of the Communications Decency Act in the United States, the country that has jurisdiction over the world’s biggest tech companies. It lays down that internet companies cannot be held liable for the content they host, nor for good-faith efforts to hide, remove or filter user-generated content. This law has been regarded as the bedrock of an open, free internet.
Too much or not enough
But increasingly, their moderation has been debated. One such example is the decision by Facebook and Twitter in 2020 to block access to a news article about then presidential candidate (and current President) Joe Biden’s son Hunter Biden. Critics condemned the companies for blocking a report that a news organisation deemed fit to publish, while the companies cited rules against carrying material obtained through hacking or illegal sourcing.
At the same time, the platforms – particularly Facebook – have been accused of not doing enough in other circumstances: they drew criticism for failing to curtail conspiracy theory groups such as QAnon and hate collectives such as the Proud Boys, which were among those that laid siege to the US Capitol on January 6.
In a 2017 paper published in First Monday, a peer-reviewed open access journal for research on the Internet, public and tech policy experts Philip Napoli and Robyn Caplan write that treating social media companies as tech firms ignores the social aspects and influence of their reach, as well as the fact that their main revenue source is content, the same as media companies.
“The framing of social media platforms and digital content curators purely as technology companies marginalises the increasingly prominent political and cultural dimensions of their operation, which grow more pronounced as these platforms become central gatekeepers of news and information in the contemporary media ecosystem,” they write.
In the paper, Napoli and Caplan argue that traditional media roles of “1) production (exemplified by content creators such as news outlets and television studios); 2) distribution (the process of moving content from producers towards consumers); and, 3) exhibition (the process of providing content directly to audiences)” have now merged and evolved due to the digitisation of media.
They also target the defence that computer code is responsible for editorial functions, and that there is no human intervention. “The asserted absence of direct human editorial involvement helps to further this perception of distance from, and/or neutrality in, the content selection process — a model that is presumably fundamentally different from the kind of direct (and human) editorial discretion that has been a defining characteristic of traditional media companies.”
But, they add, “simply because the mechanisms for exercising editorial discretion — for gatekeeping — have changed doesn’t mean that the fundamental institutional identity of the gatekeepers should be recast”.
A crucial similarity between these social media companies and media companies is the centrality of advertising as a revenue source, Napoli and Caplan write.
Changing nature of media, technology
The debate, however, also needs to account for the changing nature of media and communications and the centrality of user-generated content – a factor the authors of the paper also recognise. “One could just as easily argue that it is necessary and appropriate for our understanding of media companies to evolve to fully encompass the structure and operation of these platforms”.
But regulatory parity may be necessary, and the case for it appears to have been acknowledged in some manner by big tech leaders themselves. In December 2016, Facebook CEO Mark Zuckerberg described Facebook as “not a traditional media company”. And in a 2005 regulatory filing, Google stated, “We began as a technology company and have evolved into a software, technology, Internet, advertising, media company”.