Chevening Debate: Should social media firms be held accountable for harmful content shared on their platforms?
Debating combines critical thinking, planning, research, strategising and effective presentation skills, while encouraging a healthy questioning attitude.
It calls into play the ability to articulate your thoughts while thinking on your feet and keeping your personal emotions in check. This makes it a powerful skill to have in both personal and professional situations, across fields.
Last week I had a chance to attend a debate organised by Chevening at Durham University on a very topical subject: ‘Should social media firms be held accountable for harmful content shared on their platforms?’ The experience was heightened by the venue – the Debating Chamber at the majestic Durham University building, just by the Palace Green.
[Photo: Scholar Feifan Li]
There were three members on each side, for and against the motion. The team arguing for the motion explained the difference between illegal and harmful content. Sharing something like fake news is not illegal, but it can cause serious harm, especially given the speed and reach with which it spreads through social media.
The algorithms used by social media firms for targeting audiences for certain content could cause more harm to those who are already vulnerable, for instance teenagers facing mental health issues. Given that social media firms are already moderating content, it is possible for them to do more.
[Photo: Tony Koutsoumbos, founder and director of the Great Debaters Club]
Governments need to lay down rules with clear definitions, so that social media firms have a reason to prioritise the public interest over commercial interests. The team also proposed involving an independent regulator, to uphold democratic principles during moderation.
The team also emphasised the importance of moderation in content curation, and argued that this cannot be disregarded for social media, simply because it is an online mobile platform. The idea of reworking targeting algorithms to ‘de-target’ users of certain demographics was proposed as a workable modification to existing systems.
The team arguing against the motion opened by accepting that harmful content on social media is a major issue, but proposed addressing it by encouraging users to be more responsible instead. They brought to light the lack of a single agreed definition of ‘harmful content’. Additionally, they noted that some of the consequences attributed to harmful content on social media lack a clearly proven link to it, emphasising that correlation does not imply causation.
The role of users is important in the regulation of harmful content, for instance Facebook allows users to flag certain posts which are then reviewed and moderated or taken down. The sheer volume of content, lack of human resources and inefficiency of current algorithms were cited as reasons why it is impractical to expect social media firms to be accountable for the content shared.
The solutions they proposed included holding users who post or re-share content responsible for it, and helping viewers build resilience to the content they see on these platforms through training in digital literacy. They also warned of the unintended consequences of a regulatory policy that penalises social media firms for harmful content, such as disproportionate harm to a single platform simply for being the most widely used. Even the legitimate benefits of such a policy, they argued, would carry an unacceptable cost: merely making the problem invisible instead of solving it.
An impactful closing statement made by Motheo Mtimkulu, arguing for the motion:
‘Before 1964, it was not a legal requirement to wear seatbelts in cars. As a concept, imagine social media is a car and all our policy is a seatbelt. Is it going to solve all car crashes? Probably not. Is it the cause of all car crashes? Probably not. But does it assist in mitigating the negative consequences of car crashes? Hell yeah.’
[Photo: Scholars having animated discussions at the drinks reception after the debate]