Monday, April 27, 2020

Coronavirus and disinformation

These days disinformation is spreading as fast as the coronavirus itself. This is a reason for worry: the World Health Organization (WHO) is concerned about an “infodemic,” a flood of accurate and inaccurate information about COVID-19. However, it is not currently possible to trace every conspiracy narrative the pandemic has brought forth.
The COVID-19 disease has forced social media companies to take a more active stance against disinformation. An example of this stronger attitude towards false information came on March 31, 2020, when Facebook, Twitter and YouTube all removed videos in which Brazil’s President Jair Bolsonaro advised the public to treat the novel coronavirus with the antimalarial drug chloroquine. Rumors about chloroquine’s supposedly magical properties had been spurred on earlier when President Trump voiced support for the drug. The chloroquine controversy represents a simple type of disinformation: doctors do not advise people to take the drug to treat or prevent the novel coronavirus, so anyone saying otherwise is clearly spreading disinformation. Yet the most insidious information being spread about the coronavirus is not so easily stopped.
So far Facebook has had the most clear-cut policy on COVID-19 misinformation. It relies on third-party fact-checkers and health authorities to flag problematic content, and it blocks or restricts hashtags that spread misinformation on its sister platform, Instagram. Among social platforms, Twitter and YouTube have taken less decisive positions. Panic-producing tweets claimed prematurely that New York was under lockdown, and bots or fake accounts have slipped in rumors. The widely read @realDonaldTrump account has tweeted misinformation. Elon Musk, founder of Tesla and SpaceX, tweeted a false assertion about the coronavirus to 32 million followers, and Twitter has declined to remove his tweet. John McAfee, founder of the eponymous security company, also tweeted a false assertion about the coronavirus; that tweet was removed, but not before it had been widely shared. YouTube, for its part, removes videos that make false claims about preventing infection and pairs other misleading coronavirus content with a link to an authoritative source, such as the Centers for Disease Control and Prevention (CDC) or the World Health Organization (WHO). However, a video from a non-authoritative individual shown alongside the CDC or WHO logo could unintentionally give viewers the impression that those public health authorities have approved it.
All three companies have offered free ads to appropriate public health and nonprofit organizations. Facebook has offered unlimited ads to the WHO, Google has made a similar but less open-ended offer, and Twitter offers Ads for Good credits to fact-checking nonprofits and health information disseminators. The social media companies have attempted to adjust to the situation and respond quickly to the coronavirus threat, but they can do more. They could use the moment to rebuild trust with the public and with regulators. However, none of these companies has a transparent blocking policy founded on solid fact-checking. It is essential that they hold their users’ attention and encourage positive behavior in response to COVID-19.
To battle disinformation, Facebook, Instagram and WhatsApp are now constantly publishing top tips from the WHO, in many cases whether users want them or not. Until recently, few people would have contested the claim that the WHO is the ultimate global authority on such matters. The organization’s strong image rests on its famous name, but the tendency of its senior leadership to flatter China in its communications may quickly undermine trust in it.
Some governments have also taken seriously the task of stopping misinformation during the pandemic. The BBC reported that the UK government, for instance, is cracking down on false information in the form of “a rapid response unit within the Cabinet Office [that] is working with social media firms to remove fake news and harmful content.” There is no definition of ‘harmful content’, but the government seems worried that people could die as a result of being misinformed. Such general rules based on vague notions may prove risky for freedom of expression and information.
The EU has also announced that it is working in close cooperation with online platforms to encourage them “to promote authoritative sources, demote content that is fact-checked as false or misleading, and take down illegal content or content that could cause physical harm.” The European Commission advises people to follow the guidance of public health authorities and the websites of the relevant EU and international organisations, the ECDC and the WHO.
The EU is also debunking false assertions, stressing that “there is no evidence that 5G is harmful to people’s health.” It adds that while the EU actively promotes vaccines to protect public health, there are no plans rooted in the coronavirus pandemic to impose mass vaccinations. There are, however, plenty of people spreading unscientific anti-vaccine appeals. These calls prey on emotions and fear and may cause significant harm to public health and safety.
Compiled by Media 21 from:
