Social media giants juggle freedom and responsibility

Violence at the United States Capitol last month has social media companies scrambling to respond. They are debating how to moderate user activity while striking a delicate balance with freedom of speech.

Twitter’s answer to the Washington riot that left five people dead was to ban former US President Donald Trump from its platform. It announced the permanent suspension of @realDonaldTrump, the handle he used to share his thoughts with more than 88 million followers. The company explained that it arrived at the decision “due to the risk of further incitement of violence.”

Twitter reviewed two messages that Trump posted days after the riot and determined they were highly likely to encourage and inspire people to replicate the criminal acts that took place at the U.S. Capitol. The company also found multiple indicators that the posts were being received and understood as encouragement to engage in such behavior.

Until his ban, Trump and Twitter had built a mutually beneficial relationship. During his presidency, Trump drew enormous attention to the platform by using it to announce new policies and staff changes.

Twitter’s decision to suspend Trump’s account has sparked a global debate about whether it was the right thing to do. Other social media platforms – including Facebook – are also being scrutinized for banning the former president.

Twitter announced the permanent suspension of @realDonaldTrump on January 8.

German Chancellor Angela Merkel considers the Twitter decision “problematic”. She suggests that any kind of interference with people’s freedom of speech should be part of a legal framework – not the sole decision of a private firm. A Washington Post editorial stated that “Trump’s removal from social media was warranted — but arbitrary. It’s time for clearer rules.”

RonNell Andersen Jones is a law professor at the University of Utah and a First Amendment scholar whose research focuses on media law issues.

“I don't have concerns necessarily about them removing the president of the United States because he has many, many other ways to communicate his thoughts,” she says. “But there's a question about if Twitter or Facebook has this much power, what if the next person they remove is someone who doesn't have that much control, or doesn't have other avenues for communicating their ideas? Then we start to have some concern about the impact that these companies might have on our free speech environment.”

She adds, “We are going to have to have hard conversations about both self-regulation by these companies and the extent of governmental regulation that we can have.”

The rapid growth of social media, along with concerns about fake news and disinformation, has national governments considering regulation. In Japan, a study group at the Ministry of Internal Affairs and Communications published a comprehensive report last year. The group recommended that the voluntary measures social media operators put in place themselves should be respected – but closely monitored by the government.

Ikegai Naoto, Associate Professor of the Faculty of Economics at Toyo University and a member of the government study group, says Trump’s removal from multiple platforms will spark a global debate about social media regulation.

“American tech companies tend to be willing to innovate first and then address the problem, but the risks have become apparent in many ways over the last few years,” he says. “Impacts on social order must be considered more carefully than ever in their future services.”

“We should always keep in mind that rules in cyberspace are set and enforced by a small number of firms, not just by the law. We need to reconsider democratic control over these companies and how to improve their transparency.”

Social media firms are experimenting. Twitter has introduced Birdwatch, a pilot program that incorporates a community-driven approach to address misleading posts. Facebook's Oversight Board, a group of journalists, politicians and judges from around the world, issued its first round of decisions last month – and found that the company had acted inappropriately in the removal of some content. The board was established last year and claims total independence from Facebook.

The big players are showing they recognize a need to regulate content. But ongoing concerns surround the transparency of that process, and whether the companies themselves are best placed to police their own standards.