To mitigate online falsehoods, tweak how information is distributed

Apr 11, 2018 at 12:06 pm

By Benjamin Goh and Natalie Pang

The last few weeks of hearings by the Select Committee on Deliberate Online Falsehoods have shown us the complexity of dealing with “fake news”.

Hearing the myriad viewpoints from traditional print media companies, social media platforms and social science researchers has shown us that the modern information ecosystem has transformed an otherwise unremarkable proposition – that sometimes, people lie – into a complex problem without a clear solution.

Misinformation is not new, but what is new is the channels through which it can easily reach people and, at times, shape public opinion.

At the same time, although these channels are prone to misuse, it is worth remembering that today’s diverse information ecosystem (from mainstream media outlets to online publications such as Mothership) also empowers robust debate, which fosters civic resilience.

By adjusting the ways in which published information gets distributed on the Internet, regulation can harness the existing benefits of having diverse news outlets while limiting the ability of nefarious actors to mislead.

The biggest difference between the news ecosystem of today and the mass media of the 20th century is the separation of roles between the publisher and the distributor. Newspapers used to be responsible for both publishing and distributing their content. Now, although newspapers still curate and publish content, news articles are often distributed on social media and digital platforms (such as Facebook, Twitter, or WhatsApp), a process that is subject to the algorithms and governance models of these companies.

Most media regulations, such as our Newspaper and Printing Presses Act of 1974, were drafted at a time when regulating the publisher would de facto regulate the distributor, and are therefore inadequate for dealing with online falsehoods today.

This means that further regulation of publishers would do little to address the problem of falsehoods. Instead, we need to tighten the mechanisms by which published information gets distributed to users to create a more robust news ecosystem.

First, social media companies, as mass distributors of information, can be compelled to implement automatic news verification.

One way to do this is to employ fact-checking algorithms, which can scale to the millions of articles that get published daily.

Online fact-checking software Factmata, for example, has built a tool to check simple statistical assertions such as “our aid work in Somalia is paying dividends — only 0.2 per cent of the population is severely malnourished”. As natural language processing technology progresses, automatic fact-checking will be able to play a bigger role in supporting a more robust ecosystem.
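To make the idea concrete, here is a minimal sketch of how a checker for simple statistical assertions could work. Everything here is illustrative, not Factmata’s actual method: the reference table, the claim pattern, and the tolerance are all assumptions made for the example.

```python
import re

# Hypothetical reference data: country -> severe malnutrition rate (per cent).
# A real system would draw on a curated statistical source.
REFERENCE_RATES = {"somalia": 0.2}

# Illustrative pattern for one narrow family of claims.
CLAIM_PATTERN = re.compile(
    r"(\d+(?:\.\d+)?) per cent of the population .* malnourished",
    re.IGNORECASE,
)

def check_statistical_claim(sentence: str, country: str,
                            tolerance: float = 0.05) -> str:
    """Compare a claimed percentage against the reference figure."""
    match = CLAIM_PATTERN.search(sentence)
    if not match:
        return "unverifiable"  # no simple numeric assertion found
    claimed = float(match.group(1))
    actual = REFERENCE_RATES.get(country.lower())
    if actual is None:
        return "unverifiable"  # no reference data for this country
    return "supported" if abs(claimed - actual) <= tolerance else "disputed"

claim = ("Our aid work in Somalia is paying dividends -- "
         "only 0.2 per cent of the population is severely malnourished")
print(check_statistical_claim(claim, "Somalia"))  # -> supported
```

The hard part in practice is not the comparison but the language understanding — recognising that a sentence makes a checkable claim at all — which is why progress in natural language processing matters.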

Another way social media and technology companies can carry out automatic news verification is by analysing metadata. Every online article leaves a metadata trail of retweets, shares, and user mentions.

In other words, content carries records and metadata that can be used more proactively. Researchers at MIT have shown that there are significant differences in metadata between fake news and regular news, possibly due to the intention, form, and distribution networks that propagate them.

Imposing standards that require distribution companies to leverage these metadata patterns to filter out falsehoods would limit their spread without taking sides in any debate.
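A rough sketch of what such metadata-based screening might look like, assuming the kinds of differences the MIT researchers observed (false stories tend to spread faster and in deeper cascades). The features, thresholds, and weights below are hypothetical, chosen only to illustrate the shape of the approach.

```python
from dataclasses import dataclass

@dataclass
class CascadeMetadata:
    """Metadata trail for one article's spread (all fields illustrative)."""
    retweets_first_hour: int      # early velocity of spread
    max_cascade_depth: int        # longest retweet chain observed
    unique_users: int             # breadth of spread
    verified_sharer_ratio: float  # fraction of sharers who are verified

def falsehood_risk_score(m: CascadeMetadata) -> float:
    """Heuristic risk score in [0, 1]; all thresholds are hypothetical."""
    score = 0.0
    if m.retweets_first_hour > 100:     # unusually fast early spread
        score += 0.5
    if m.max_cascade_depth > 10:        # unusually deep cascade
        score += 0.25
    if m.verified_sharer_ratio < 0.05:  # few verified accounts amplifying
        score += 0.25
    return score

suspect = CascadeMetadata(retweets_first_hour=250, max_cascade_depth=14,
                          unique_users=5000, verified_sharer_ratio=0.01)
print(falsehood_risk_score(suspect))  # -> 1.0
```

The point of scoring rather than outright blocking is that the platform never rules on the content’s truth — it only flags distribution patterns for further review.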

Second, digital platforms should also be made responsible for addressing the attention asymmetry between fake and real news.

Studies have shown that falsehoods attract more attention, especially when they play to local stereotypes; what is problematic, however, is that corrections, though necessary, do not receive as much attention, accentuating the impact of erroneous information.

During the Little India riot, for example, local media first tweeted at 1am on Dec 9, 2013 that two Bangladeshi workers had died. The tweet was retweeted 124 times within an hour. But the subsequent correction – that one Indian worker was killed in an accident with a bus – took more than eight hours to reach a similar scale of retweets.

As de facto “guardians” of content distribution, social media companies should be encouraged to enforce standards for delivering corrections, minimising the probability of an error persisting on social media even after conscientious publications have moved to correct reported inaccuracies.
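One simple form such a standard could take: when a publisher flags a story as corrected, the platform pushes the correction to every account that reshared the original, so the fix reaches the same audience as the error. The sketch below is purely hypothetical — the data, account names, and notification format are invented for illustration.

```python
# Illustrative share log: article_id -> accounts that reshared it.
shares = {
    "riot-report-1am": ["@userA", "@userB", "@userC"],
}

def propagate_correction(article_id: str, correction_url: str) -> list[str]:
    """Return the notifications a platform would deliver for a correction."""
    notices = []
    for account in shares.get(article_id, []):
        notices.append(
            f"{account}: a story you shared was corrected -> {correction_url}")
    return notices

for note in propagate_correction("riot-report-1am", "example.com/correction"):
    print(note)
```

This mirrors the Little India example above: the correction would have been delivered to the 124 accounts that retweeted the erroneous report, rather than waiting hours for organic retweets to catch up.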

This was the approach taken by the Network Enforcement Act (NEA) in Germany, which requires social media platforms like Facebook and Twitter to remove content found to be “clearly illegal” within 24 hours of it being reported.

But as critics of the NEA have pointed out, we must also be mindful that social media and technology companies do not, and should not, have sole responsibility. Information that is publicly distributed and consumed is a public good, and all hands should be on deck to ensure that it is accurate.

Prudent legislation would allow adequate opportunities for civil society, journalists, legal experts, researchers, the state and citizens to engage in productive, multi-stakeholder partnerships to tackle the destructive effects of falsehoods.

For example, legal experts and civil society actors can be invited as observers during meetings to create an open system for knowledge exchange and dialogue.

As the Select Committee’s hearings have shown, social media and technology firms can neither be the arbiters of truth nor guarantee that everything on the network is “true” (if they could, satire would not be allowed).

Similarly, it would be administratively tedious and socially untenable for the government to assume the responsibility of a fact-checker; doing so would move us closer to the dystopian Ministry of Truth depicted in George Orwell’s novel, 1984.

However, since we now live in a world where the publisher is distinct from the distributor, we can reduce the impact of nefarious actors on public opinion by tightening the mechanisms by which content is distributed online.

This solution does not require any party to be the arbiter of truth, and it retains the vibrancy of the online space, encouraging and facilitating the discourse that builds civic literacy and social resilience.

Modern news consumption almost always passes through digital and social media filters. It is worth reminding ourselves of the dynamic information space these filters open up for us, but also that it is time to implement systems that reduce the potential for abuse of this otherwise vibrant space that entertains and informs us 24/7.

 

Benjamin Goh is a Research Assistant at the Belfer Center for Science and International Affairs, Harvard University. Natalie Pang is a Senior Research Fellow at Social Lab at the Institute of Policy Studies.

This piece was first published in TODAY on 10 April 2018.

Top photo from iStock.
