Views and announcements

Disinformation kills. Now what are we going to do about it?

  • Trump’s use of social media has long been a cause for concern, and the final straw came when he used it to encourage protesters in Washington DC to march on Capitol Hill, sparking the storming of the building... But are these moves taken by the tech giants too little, too late? And what about the concerns over the disinformation, conspiracy theories and hate crimes so frequently spread via social media?

    In this opinion piece, Dr Alexi Drew, Postdoctoral Research Associate at the Policy Institute at King’s College London and an expert on the role of social media in conflict escalation, argues that now is the time for a grown-up conversation about how to curb the sometimes destructive power of the internet.

    The well is poisoned. The means by which a growing percentage of the world gathers information relevant to their daily lives is filled with ‘alternative facts’, misleading narratives, clickbait, and intentional efforts to undermine public trust in the institutions and agencies entrusted with their protection.

    I am not exaggerating when I say that I, and others who study or analyse disinformation the world over, are sitting at home with the loudest and most perverse ‘I told you so’ echoing around our minds. What happened in Washington DC did not come from nowhere; it was not an unpredictable outcome of unseen and unknowable forces. It is the result of an information ecosystem that has been poisoned by the well-meaning and the actively malicious alike. So, what do we do?

    Censorship - the dirty word

    Difficult questions have been put off for too long, and the buck has been passed too many times by platforms sidestepping responsibility for content moderation under the guise of free speech. Governments and institutions have held hearings, issued proclamations of good intent, and then passed the buck back to platforms, waiting with bated breath for a one-click solution.

    The miasmic bubble of disinformation that has burst forth on the grounds of the US Capitol is not the result of a simple tech problem that can be fixed with a few tweaks to an algorithm or some changes to the user interface. This is a problem facing the whole of society, and it requires a whole-of-society solution. However, one thing is clear. The status quo with regard to content moderation and the role of platforms in securing the health of our information ecosystem is not good enough. So, we need to talk about a dirty word - censorship.

    End of innocence

    I know some of you who read this will quickly say, think, or tweet that censorship and content moderation are not the same thing, or that these platforms are privately owned spaces whose operators can morally and legally police the content produced by their users however they wish.

    You would not be wrong. But we need to talk about censorship because the people we really need to convince of the need for, and rightness of, whatever we decide to do will call such arguments semantics. They will say, quite rightly, that social media platforms are de facto public spaces - a digital public square in which they should be free to communicate whatever they wish, to whomever they wish, in whatever format they wish.

    A great part of me longs for the more innocent days of my forays into the study of the internet when, inspired by the legendary hacker group Cult of the Dead Cow and by people like Aaron Swartz, a political activist dedicated to an open internet, I thought that the internet could provide a freely accessible and open space where restrictions of any kind would not only be unnecessary but would also be anathema to the very social dynamics that such digital spaces would foster.

    Changing times

    It is with a heavy heart that I have had to realise that events such as what happened on Capitol Hill put such a dream further and further out of reach, which means we need to deal with the counter-liberal elephant in the room.

    We need an accessible, representative, and undoubtedly difficult route towards effective content moderation, with language that makes it clear to the end users - the public - why this plan is being put into action, what it seeks to achieve, and what is being done to ensure that it isn’t misused.

    In effect, we have three questions that need answering:

    1. Whose job is content moderation, anyway?
    2. What should and should not be moderated?
    3. What approaches to content moderation are there and how well do they work?

    Simply put, the responsibility for content moderation - its goals and its methods - should be a joint one.

    Responsibilities as well as rights

    Social media platforms should not be the final arbiters of truth, nor the sole sources of information. The optimist in me would like to divide this responsibility into parts, with the state responsible for deciding what is and is not a risk to society, and platforms responsible for policing and moderating the content that falls into this category. The pessimist in me, who normally wins out, cannot help but note that trust in governments, political parties, international institutions, and indeed experts is at an all-time low. For this to work there must be trust; there must be legitimacy.

    Our best bet, I think, is joint responsibility shared by government, non-governmental organisations, activists, academics, and industry across three areas of effort that all contribute to the ‘job’ of content moderation. Deciding what should be moderated must be based upon the values, norms, and principles of those who are likely to be moderated, assessed against the risk that any content could pose to information health if allowed to run rampant on these platforms.

    This is not a call that any one person or entity can or should make. It is not a call that any single institution or individual has the knowledge to make. It requires the cooperation of representative groups from across society, coupled with government, industry, and expert knowledge, to decide not only what should be moderated, but how.

    Current state of play

    Content moderation already happens. It has become more notable and more apparent as platforms have not only stepped up their efforts in response to the recent US presidential election but have also made these efforts more public.

    Key point: what works for one platform does not necessarily work for another. Almost every social media platform has evolved its own means of content moderation. Some are more centralised, some are community driven, and some are supplemented by algorithms. Here are just a few examples:

    • Reddit - Upvoting and downvoting content.
    • WhatsApp - Limiting message forwarding to five chats.
    • Wikipedia - Recent Changes patrol, watchlisting, and counter-vandalism vigilance.
    • Twitter - Questioning whether you have read an article before retweeting it.
    • Facebook - Fact checking and content flagging.

    Some of these approaches have been applied to more than one platform, while others are unique to the platform upon which they evolved and, in some cases, have become a central dynamic of how that platform operates.

    The point to take away from this is that transplanting a method of content moderation that can be demonstrated to be effective (we’ll come back to this) from the particular information ecosystem for which it was designed onto another does not automatically mean that the same results will occur.

    Wikipedia’s system for patrolling recent edits to content or policing vandalism would be incompatible with Facebook or Twitter. Similarly, WhatsApp’s limits on message forwarding have no bearing on Wikipedia.

    Cooperative approach

    The operators and designers of platforms are best placed to understand how to moderate the platforms that they create and oversee. Their input into content moderation is crucial and should be encouraged. It should also be assessed. The cooperative element of content moderation is key. No one actor or type of actor should be given the keys to the chicken coop.

    The best solution I can propose to the problem of trust and responsibility is that everyone with skin in the game should be jointly responsible for the rules and how well they’re adhered to. An added benefit of the cooperative approach is a more complete understanding of the implications of moderation targets and efforts. The road to good content moderation is already paved with legitimate accounts and posts removed or flagged out of the best of intentions.

    The role of algorithms

    Some approaches to content moderation have themselves become a means by which the information ecosystem can be further poisoned. A tool created with the intention of limiting one type of harm has, in effect, created the means of causing a different kind. One example is the opaque way in which Twitter polices its platform: the suspension or removal of accounts through algorithmic automation, informed by user reporting. The capacity to report potentially harmful or illegal content is not bad in itself.

    What is bad is the way that this algorithmic system has had its operating parameters tested and then subverted through the mass reporting of innocent accounts, leading to their automated removal. This well-intentioned means of content moderation - the conjunction of community engagement and algorithmic policing - has allowed interest groups, activists, and extremists to turn the ploughshare back into a sword.
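    To make that weakness concrete, here is a minimal, hypothetical sketch of how a naive report-count threshold can be gamed by coordinated mass reporting. The threshold, the data format, and the account names are illustrative assumptions made for the sake of the example; they are not a description of Twitter’s actual, non-public system.

    ```python
    # Hypothetical sketch: a naive "auto-suspend after N reports" rule.
    # All numbers and names here are assumptions made for illustration only.
    from collections import Counter

    REPORT_THRESHOLD = 50  # assumed: suspend automatically once 50 reports arrive

    def apply_reports(reports):
        """Count reports per target account and auto-suspend any account at or over the threshold."""
        counts = Counter(report["target"] for report in reports)
        return {account for account, n in counts.items() if n >= REPORT_THRESHOLD}

    # A coordinated group of 60 accounts mass-reports one innocent journalist...
    brigade = [{"reporter": f"member_{i}", "target": "innocent_journalist"} for i in range(60)]
    # ...while a single genuine report is filed against an account actually spreading disinformation.
    genuine = [{"reporter": "concerned_user", "target": "disinfo_account"}]

    suspended = apply_reports(brigade + genuine)
    print(suspended)  # {'innocent_journalist'} - the rule removes the wrong account
    ```

    Any rule that simply counts reports, without weighing who is reporting or why, is open to exactly this kind of coordinated abuse - which is the dynamic at work in the case that follows.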

    When the journalist Salil Tripathi had his Twitter account suspended, it quickly became clear that he had become the target of a group of pro-BJP Twitter users who deployed mass reporting as a means of having his account, on which he was often critical of BJP policies, removed. This is just one incident of a type that can be found in almost all examples of inter-group conflict on Twitter. It is also only one example of a moderation method which has either been subverted for misuse or has had unintended consequences.

    This brings us to the final question, which, I’m afraid, I’m going to answer with a direction of effort rather than a definitive plan. We need transparent and effective evaluation of content moderation policies and their effects. We need to learn what works on which platform, and we need to be able to close loopholes that allow for abuse sooner rather than later. Flaws in content moderation should not be allowed to fester and become dynamics that contribute to the very problem they were designed to counteract.

    This constant evaluation should also not be limited to the means of content moderation. The targets of moderation must be continually examined and assessed. One of the greatest risks of censorship, of any kind, is that it is a final and unassailable act. Content moderation should seek to temper the risk posed by such extremes by being subject to constant introspection as to whether that which is being moderated should be subject to such restrictions. Our concept of what is right and wrong now, or what is helpful or harmful, is not immutable and we should not create policies or platforms that treat it as such.

    A final closing thought, to put all of this in perspective. As of January 2021, there has yet to be a single human death that can be directly attributed to a cyber-attack; the closest we have come is a death at a hospital that had been the victim of a ransomware attack. As of 7 January 2021, we have a far clearer causal chain between online conspiracy theories and disinformation and the deaths of five people due to the riots on Capitol Hill. There is a lesson here, and we need to learn it.

    Source: BCS
