
Monday, April 27, 2020

Coronavirus and disinformation

These days disinformation is spreading as fast as the coronavirus itself. This is a reason for worry: the World Health Organization (WHO) is concerned about an “infodemic,” a flood of accurate and inaccurate information about COVID-19. However, it is not currently possible to trace all of the conspiracy narratives the pandemic has brought forth.
The COVID-19 disease has forced social media companies to take a more active stance against disinformation. An example of this firmer attitude towards false information came on March 31, 2020, when Facebook, Twitter and YouTube all banned videos in which Brazil’s President Jair Bolsonaro advised the public to treat the novel coronavirus with an antimalarial drug, chloroquine. Rumors about the magical qualities of chloroquine had been spurred on earlier by President Trump’s voicing support for the drug. The chloroquine controversy represents a simple type of disinformation: doctors do not advise people to take the drug to treat or prevent the novel coronavirus, so anyone saying otherwise is clearly spreading disinformation. Yet the most insidious information being spread about the coronavirus is not so easily stopped.
So far Facebook has had the most clear-cut policy on COVID-19 misinformation. It relies on third-party fact-checkers and health authorities to flag problematic content, and it blocks or restricts hashtags that spread misinformation on its sister platform, Instagram. Among social platforms, Twitter and YouTube have taken less decisive positions. Panic-inducing tweets claimed prematurely that New York was under lockdown, and bots or fake accounts have slipped in rumors. The widely read @realDonaldTrump account has tweeted misinformation. Elon Musk, head of Tesla and SpaceX, tweeted a false assertion about the coronavirus to 32 million followers, and Twitter has declined to remove his tweet. John McAfee, founder of the eponymous security company, also tweeted a false assertion about the coronavirus; that tweet was removed, but not before it had been widely shared. YouTube, for its part, removes videos that claim to prevent infections and pairs misleading coronavirus content with a link to an authoritative source, such as the Centers for Disease Control and Prevention (CDC) or the World Health Organization (WHO). However, a video from a non-authoritative individual with the CDC or WHO logo attached could unintentionally give viewers the impression that those public health authorities have approved it.
All three companies have offered free ads to appropriate public health and nonprofit organizations. Facebook has offered unlimited ads to the WHO, while Google has made a similar but less open-ended offer. Twitter offers Ads for Good credits to fact-checking nonprofits and disseminators of health information. The social media companies have attempted to adjust to the situation and respond quickly to the coronavirus threat, but they can do more. They could use the moment to rebuild trust with the public and with regulators. However, none of these companies has a transparent blocking policy founded on solid fact-checking. It is essential that they hold their users’ attention and influence positive behavior in response to COVID-19.
To battle disinformation, Facebook, Instagram and WhatsApp are now incessantly publishing top tips from the WHO, in many cases whether users like it or not. Until recently, few people would have contested the claim that the WHO is the ultimate global authority on such matters. Its famous name underpins the organization’s strong image, but the tendency of its senior leadership to flatter China in its communications may quickly undermine trust in it.
Some governments have also taken seriously the task of stopping misinformation in a pandemic. The BBC reported that the UK government, for instance, is cracking down on false information in the form of “a rapid response unit within the Cabinet Office [that] is working with social media firms to remove fake news and harmful content.” There is no definition of ‘harmful content’, but the government seems worried that people could die as a result of being misinformed. Such general rules based on vague notions may prove risky for freedom of expression and information.
The EU has also announced that it is working in close cooperation with online platforms to encourage them “to promote authoritative sources, demote content that is fact-checked as false or misleading, and take down illegal content or content that could cause physical harm.” The European Commission suggests that people follow the advice of public health authorities and the websites of the relevant EU and international organisations, the ECDC and the WHO.
The EU is also debunking false assertions, stressing that “there is no evidence that 5G is harmful to people’s health”. It adds that while the EU actively promotes vaccines to safeguard public health, there are no plans rooted in the coronavirus pandemic to impose mass vaccinations. On the other hand, there are plenty of people spreading unscientific anti-vaccine appeals. These calls prey on emotions and fear and may cause significant harm to public health and safety.
Compiled by Media 21 from:

The international community against disinformation during the COVID-19 pandemic

As the coronavirus crisis escalates, U.N. Secretary-General Antonio Guterres has warned that the world is not only immersed in a pandemic but is also facing “a dangerous epidemic of misinformation” about COVID-19. He announced a U.N. campaign based on scientific knowledge to counter what he called “a poison” that is putting lives at risk. The idea behind this campaign is to flood the Internet with facts and scholarly arguments and debunk a global “misinfo-demic” that is spreading harmful health advice, so-called “snake-oil solutions,” falsehoods, and conspiracy theories. In addition, Guterres is urging social media organizations to do more to counter misinformation and to “root out hate and harmful assertions about COVID-19.” The Secretary-General stressed that “mutual respect and upholding human rights must be our compass in navigating this crisis.” The role of journalists and fact-checkers in analysing and debunking heaps of misleading stories and social media posts is crucial in this respect.
Of key importance in all crisis situations is trust in science and in institutions “grounded in responsive, responsible, evidence-based governance and leadership.” Because of the scale of the problem of medical disinformation, the World Health Organization (WHO) has added a “mythbusters” section to its online coronavirus advice pages. The section refutes a staggering array of myths, including claims that drinking potent alcoholic drinks, exposure to high temperatures or, conversely, cold weather can kill the virus. In its statement of 9 March 2020 on misinformation about the coronavirus, UNICEF pledged to ensure that accurate information and advice are available, to inform the public when inaccurate information is published, and to “actively take steps to provide accurate information about the virus by working with the World Health Organization, government authorities and online partners such as Facebook, Instagram, LinkedIn and TikTok.”
Technology such as AI is already in use to counter coronavirus disinformation. Publications stress that data science and AI can be effectively used to confront the disease. The AI contribution can be manifold, spanning the search for a cure, knowledge sharing, tracking the spread of the virus, assisting healthcare personnel, and monitoring the population.
The EU has unconditionally declared the fight against disinformation a joint effort involving all European institutions. Alongside links to its chief bodies, the EC advises European citizens also to follow the EUvsDisinfo website. EUvsDisinfo is the flagship project of the European External Action Service’s East StratCom Task Force. The project was established in 2015 with the purpose of improving forecasting of, addressing, and responding to the Russian Federation’s ongoing disinformation campaigns affecting the European Union, its Member States, and countries in the shared neighbourhood. EUvsDisinfo’s weekly newsletter, the Disinformation Review, summarises the main pro-Kremlin disinformation trends observed across the disinformation cases collected each week, and includes the latest news and analysis. It is available in English, Russian and, since October 2019, German. Currently, the newsletter compares the cases on the coronavirus pandemic published in a given period and extracts and debunks the main disinformation ideas being distributed. Two of the most common narratives are that the US created the coronavirus and that the EU, together with the border-free Schengen area, is failing to cope with the crisis and disintegrating as a result. In particular, the narrative of failure and lack of EU solidarity has been trending since the delivery of Russian aid to Italy and can be encountered in 26 disinformation cases collected between January and March 2020. The narrative that the virus is being used as a weapon against China and its economy is emphasized in 24 cases. The rather creative notion that the whole coronavirus crisis is a secret plan of the global elite is present in 17 cases. The most malevolent message coming from all these cases is that authoritarian regimes are best at handling disasters.
However, authoritarian regimes, which tend to control and manipulate information and to limit the freedom of doctors and scientists to engage in international cooperation, are often an obstacle to the timely detection and containment of epidemic outbreaks. A clear example is China’s deliberate cover-up of the early days of the coronavirus outbreak in Wuhan. The real way out of the coronavirus pandemic (and the real answer to future epidemic outbreaks) is not to revert to “closed societies” but to develop a global response and rely on broad collaboration.
It is worth referring to UNESCO’s experience against the background of all international efforts to combat disinformation, including racist or xenophobic disinformation. The position of the organisation is that governments, in order to counter rumours and lies, should be more transparent and proactively disclose more data, in line with Right to Information laws and policies. Access to information from official sources is very important for credibility in crisis situations. In times of tension and difficulty, people should become more critical of what is presented to them online and elsewhere. UNESCO is using the hashtags #ThinkBeforeSharing, #ThinkBeforeClicking, and #ShareKnowledge, and promoting the view that the rights to freedom of expression and access to information are the best remedies to the dangers of disinformation. These rights enable governments and the public to make reasonable decisions and responses that are founded on both science and human rights values.
Compiled by Media 21 from:
https://abcnews.go.com/Health/wireStory/chief-world-faces-misinformation-epidemic-virus-70148613  
https://moderndiplomacy.eu/2020/04/15/during-this-coronavirus-pandemic-fake-news-is-putting-lives-at-risk-unesco/  
https://www.coe.int/en/web/artificial-intelligence/ai-and-control-of-covid-19-coronavirus

https://euvsdisinfo.eu/about/

Social media regulation - a proposal for a social media arbitration mechanism

Dr. Bissera Zankova, Media 21 Foundation
Dr. Valeriy Dimitrov, professor at the Legal Faculty of the University of National and World Economy, Sofia


1. Introduction
The issue of how to regulate social media platforms, including social networks, is gaining momentum among stakeholders. It is no exaggeration to say that the arguments in favour of regulation sometimes turn into a regulatory obsession, based on the claim that social platforms have a dramatic impact on our lives and the lives of future generations. In these efforts, some specialists discern attempts to impose “overregulation” on social media without solid guarantees for freedom of expression and freedom of enterprise.
No doubt the impact of social networks is paramount today, but much the same was said about the impact of broadcasting during the last century, provoking similar discussions. However, one cannot be sure how the media landscape will evolve in the coming years, or how, or indeed whether, social media giants will maintain their powerful positions. Our purpose here is not to review the opinions concerning Internet intermediaries’ regulation but to build on some of these ideas and suggest a practical model for good social media regulation that does not affect freedom of expression and freedom of private enterprise.


The OECD Observer emphasizes that “it is one thing to have regulation, it is quite another to have good regulation.” Smart regulation efforts in the EU, for instance, aim at reducing regulatory burdens in EU legislation. The objective is to make European business activities easier and to contribute to growth and strengthened competitiveness in the EU’s Single Market. In the same vein, the new OECD report “Better regulation practices across the EU” (https://www.oecd-ilibrary.org/sites/9789264311732-n/index.html?itemId=/content/publication/9789264311732-en) says that “regulatory policy is one of the main government policy levers for improving societal welfare. It must not only be responsive to a changing environment, but also proactively shape this environment. It is also important to engage citizens and all stakeholders in the development of laws.” The ten-point plan for EU smart regulation suggested by the UK back in 2012 and supported by twelve other member states drew attention specifically to alternatives to EU regulation (https://www.gov.uk/government/publications/10-point-plan-for-eu-smart-regulation).
By and large, good regulation means, in our view, a well-thought-out and effective model of regulation, non-intrusive and unbiased, which can reconcile different conditions and requirements. Ideally, better regulation practices enhance the lives of citizens and of businesses alike.


2. Social media regulation – a brief overview of the most recent sources
Recently, various ideas about regulating Internet intermediaries have been put forward in the public sphere, widening the debate between more liberal and more conservative minds.
Most experts claim that Internet intermediaries cannot self-regulate or properly regulate their platforms. Such are, for instance, the conclusions of the report on intermediary liability (“Intermediary liability 2.0. A shifting paradigm”, https://sflc.in/intermediary-liability-20-shifting-paradigm). The report discusses the complexity of contemporary online communication by analyzing a variety of legal and journalistic sources. Among its conclusions: “as these platforms grew, it became increasingly difficult for them to self-regulate the large volume of content flowing through their pipelines. The misuse of data available on platforms, coupled with the growing menace of disinformation and misinformation online, increases calls for imposition of greater liability on intermediaries for third party copyright infringement. Access assistance to law enforcement agencies and the rampant harassment and abuse of women and other vulnerable groups have highlighted the failures of these tech companies in regulating their channels.” The report deals in particular with intermediary liability practices in India, which are rooted in a law with a comprehensive and broad definition of intermediary, the intermediaries’ liability rules, and the abundant case law of the Supreme Court of India. In 2018, the Draft Information Technology [Intermediaries Guidelines (Amendment) Rules] (“Draft Rules”) were proposed by the government to fight ‘fake news’, terrorist content and obscene content, among others. These new rules placed more stringent obligations on intermediaries to proactively monitor content uploaded on their platforms and to enable traceability in order to determine the originator of information. This serves as an example of governments striving to implement regulations that can effectively combat the new challenges on platforms. However, these attempts raise hard questions, predominantly concerning the acceptable limits on freedom of speech on the Internet.
In 2017, in a ‘Joint declaration on freedom of expression and “fake news”, disinformation and propaganda’, the United Nations Special Rapporteur on Freedom of Opinion and Expression, David Kaye, stated that “general prohibitions on the dissemination of information based on vague and ambiguous ideas, including ‘false news’ or ‘non-objective information’, are incompatible with international standards for restrictions on freedom of expression, and should be abolished.” (https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=21287&LangID=E)


In its final report on disinformation and fake news, the UK House of Commons Digital, Culture, Media and Sport Committee recommended, alongside human rights protection, the expansion of digital literacy and greater transparency from social media companies (https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/179102.htm).
In a recently published book (The Social Media Upheaval, 2019, Kindle edition), G. H. Reynolds shares his concern that “to police content of social media speech beyond a very basic level of blocking viruses and the like is a bad idea,” the idea being that the more involved and granular the policing becomes, the more it will look like censorship, “which is what it will inevitably become”. Better, according to Reynolds, is to police collusion among platforms, i.e., to apply antitrust scrutiny. As the pressure for regulation will inevitably grow, it is better to regulate in a way that preserves free speech and does not additionally empower tech oligarchs.


Interesting proposals for concrete legal action are furnished by another acclaimed report, which tackles the implementation of national laws online and cross-border legal challenges (Internet and Jurisdiction. Global Status Report 2019. Key Findings, https://www.internetjurisdiction.net/publications/paper/internet-jurisdiction-global-status-report-key-findings). The authors conclude that “the regulatory environment online is characterized by potentially competing or conflicting policies and court decisions in the absence of clear-cut standards.” The resulting complexity may be detrimental on numerous levels and creates “high levels of legal uncertainty in cyberspace”.
Regulation on the net, and especially social media regulation, represents one of the many intertwined problems brought forth by digital reality. Clearly, efficient solutions to Internet governance issues and workable jurisdictional decisions can create the safe and free environment that will allow regulation to produce tangible results.


As the conceptual basis of our paper we shall use the libertarian theory of economic freedom because, in our understanding, it permits the creation of regulation that is future-oriented, just, human-rights-based and innovation-encouraging. That is why we turn to the publications of the renowned Cato Institute, which has published a series of articles discussing intermediaries’ liability from a libertarian perspective. What is important about such an approach is that it makes it possible for policymakers to elaborate frameworks that protect freedom of enterprise online without touching on freedom of expression. Further in our discussion, we shall outline some of the points in the article “Why the government should not regulate content moderation of social media” by John Samples (https://www.cato.org/publications/policy-analysis/why-government-should-not-regulate-content-moderation-social-media#full); though the article explicitly states that its focus is primarily on potential policies for the USA, some of the insights it discusses are of a more universal nature.


3. The libertarian approach to social media – what is the Cato Institute’s opinion of social media regulation?
Tom Standage, deputy editor of The Economist, thinks two features of social media stand out: the shared social environment established on social media, and the sense of membership in a distributed community, in contrast to publishing. In addition, the Cato article underlines the fact that social media represent an economic institution that has “to generate revenue beyond the costs of providing the service.” However, each of the groups of people involved (users, consumers, advertisers and managers) is related to speech, and their relationships create “the forum in which speech happens”; that is why concerns about speech on social media are central to any regulatory effort. Their similarity to publishers may prompt policymakers to hold social media companies liable for defamation, but that is not the case in the US due to Section 230 of the Communications Decency Act (CDA), which explicitly exempts social media platforms from liability by stating that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. The aim of Congress was to encourage unfettered expression online, to further economic interests on the Internet, and to promote the protection of minors by having interactive computer services and their users self-police the Internet for obscenity and other offensive materials.


It is worth clarifying the stand the US Supreme Court has taken towards private forums of speech over the years. In this regard, John Samples points out that “the history of public values and social media suggests a strong presumption against government regulation. The federal government must refrain from abridging the freedom of speech, a constraint that strongly protects a virtual space comprising speech.” The government has also generally refrained from forcing owners of private property to abide by the First Amendment. The conclusion is that “those who seek more public control over social media should offer strong arguments to overcome this presumption of private governance.”


Other arguments supporting the principle of free private initiative can also be found in the article. One of the more important questions is whether big tech companies enjoy a monopoly position. Although a few tech companies dominate some markets, that does not mean these firms are leaders for good and can never be displaced. Samples cites David S. Evans (Matchmakers: The New Economics of Multisided Platforms, Cambridge: Harvard Business Review Press, 2016, Kindle edition), who explains that, due to contemporary network effects, previously dominant firms are declining rather than continuing as monopolies:
 “Victory is likely to be more transient than economists and pundits once thought. We have reasons to doubt that these firms will continue to dominate their markets.”


In addition, it is not certain that governmental regulation will produce more competition in the online marketplace of ideas. It may simply protect both social media owners and government officials from competition. According to economist Thomas Hazlett, when the FCC carefully planned the structure and form of television service in the last century, it also severely limited the number of competing stations, which resulted in the soaring value of the licenses. Hazlett also quotes an expert who claims that “the effect of this policy has been to create a system of powerful vested interests, which continue to stand in the path of reform and changes.” In our opinion, nobody wishes to see this system perpetuated on social media today.
Terrorism, disinformation and hateful speech can be seen as strong grounds for governmental regulation of social media. However, John Samples stresses that American courts have consistently refused to hold social media platforms liable for terrorist acts. In Fields v. Twitter (Fields v. Twitter, Inc., 2018 WL 626800 (9th Cir. Jan. 31, 2018)) and similar cases, plaintiffs failed to demonstrate that ISIS’s use of Twitter played an instrumental role in the attacks against them. Since social media platforms cannot be seen as uniquely instrumental in the realization of terrorist plans, any standard of liability that might implicate Twitter in terrorist attacks could prove overbroad (and inconsistent with the First Amendment or with any legal standard of certainty) and could also encompass other services that are frequently used by terrorists. On the other hand, public social media provide opportunities for counterspeech and intelligence gathering. Samples recalls that state security services have sometimes asked social media platforms to refrain from removing terrorist accounts, as they provide valuable information concerning the aims, priorities, and sometimes the locations of terrorist actors.
Two other potentially compelling reasons for government action are preventing the harms caused by “fake news” and by “hate speech.” Both terms may prove vague, and their use may lead to legal confusion. The term “fake news” has come onto the public agenda relatively recently, and different definitions have been put forward, including the variations mis-, dis- and malinformation with their respective consequences. In United States v. Alvarez, 567 U.S. 709 (2012), the court refused to recognize a general exception to the First Amendment for false speech: “The Court has never endorsed the categorical rule the Government advances: that false statements receive no First Amendment protection.”
In conclusion, Samples considers social media moderation to be more effective than increases in government power in such cases. These companies, among the most successful in America, are technically equipped and far more capable of dealing with instances of dangerous speech. Samples’ suspicion is that “government officials may attempt directly or obliquely to compel tech companies to suppress disfavored speech,” which may result in “public-private censorship”.
In Europe, meanwhile, the scales tip towards more regulation and additional requirements for social media platforms, including the threat of huge fines for allowing illegal expression. The implementation of the agreed Code of Conduct against harmful content online has not fully produced the expected results. Concerning fake news, the Commission suggests a set of measures but still considers that self-regulation can contribute to policy responses, provided it is effectively implemented and monitored. Actions such as the censoring of critical, satirical, dissenting or shocking speech should strictly respect freedom of expression and include safeguards that prevent their misuse. They should also strictly respect the Commission’s commitment to an open, safe and reliable Internet. (https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52018DC0236&from=EN)
Regarding hate speech, there is no universally accepted definition, in Europe or the rest of the world, of what it constitutes. The European Commission has intensified its work on fighting hate speech. Following consultations with the leading social media companies (Facebook, Twitter, YouTube and Microsoft), the EC published a Code of Conduct containing an array of public commitments aimed at fighting online hate speech, which the tech giants voluntarily accepted. However, it is debatable whether the competent EU bodies and national authorities should impose censorship and public control, as long as “the EU's broad concept of ‘hate speech’ covers many forms of expression which are varied and complex: therefore, the approaches must also be appropriately differentiated.” (Pana, 2018, at http://www.mondaq.com/x/633648/Social+Media/EU+Steps+For+Fighting+Online+Hate+Speech+Possible+Censorship+Of+Social+Media)
In 2018, the EC proposed a new EU law requiring platforms to take down any terrorism-related content within an hour of a notice being issued. The law additionally forces platforms to use a filter to ensure such content is not re-uploaded. Should they fail in either of these duties, governments are allowed to fine companies up to 4% of their global annual revenue. For a company like Facebook, that could mean fines of as much as $680 million (around €600 million). This is widely proclaimed a necessary measure, though it is not without its opponents. Critics say that the instrument relies on an overly expansive definition of terrorist content, that an upload filter could be used by governments to censor their citizens, and that removing extremist content could prevent non-governmental organizations from documenting human rights crimes in zones of conflict and tension. (https://www.theverge.com/2019/3/21/18274201/european-terrorist-content-regulation-extremist-terreg-upload-filter-one-hour-takedown-eu)
In our view, such governmental initiatives and the elusiveness of the terms involved will always provoke protests from more libertarian-minded persons and groups.
4. Establishment of an arbitration mechanism at social media platforms
We now come to the crux of our work: the proposal of an internal body that can practically resolve disputes among participants and between participants and the social media platform. Such a body can also support the effective application of media codes of conduct without governmental involvement. Social media are organizations that provide a space for the creation and exchange of information among a huge number of users, performing as intermediaries or organizers of an information forum. They cannot be held responsible for the content of the information created and exchanged by third persons; however, since they facilitate debate, they should take steps to properly settle disputes related to that debate.
We have to distinguish the roles of the interested parties in this process. Within the sovereignty of states, the obligation to protect citizens, defend national security and counter terrorism lies with states. In such cases, governments can adopt special laws protecting high public interests based on internationally recognized principles. States can also adopt multilateral conventions supported by enforcement mechanisms (as in the case of legislation on money laundering, cybercrime, drug trafficking, trafficking in human beings, etc.). The elaboration of such legislation and conventions should be transparent, based on shared human rights principles, and should include the efforts of various stakeholders. Outside these legitimate interests, it is not justified for states to impose burdensome administrative requirements on platforms, to curb the freedom of private entities, or to meddle in business. Regulatory measures have to abide by the proportionality test, the first part of which is the principle of minimal impairment of the right or liberty. The attempts of a number of nation-states to assign controlling, even censoring, functions to social platforms generate problems related both to the right to freedom of expression and to the right to free initiative. On the one hand, government interference can suppress certain types of speech, have a chilling effect on expression in general, or affect the economic independence of companies. On the other hand, there are disputes between the participants in the information forum, as well as between the participants and the social media, concerning content, and accordingly claims for the removal of harmful and offensive content, in which the state should not step in.
A possible solution to these issues is the establishment of an arbitration mechanism (tribunal) for resolving disputes, institutionalized by the social media platforms themselves. Inspiration for this idea comes from the UNCITRAL Model Law on International Commercial Arbitration (1985), with amendments as adopted in 2006. The purpose of the Model Law is to entrench modern, fair, and harmonized rules on commercial transactions and to promote the best commercial practices worldwide. The law is designed to assist states in modernizing their laws on arbitral procedure, and it reflects universal consensus on key aspects of international arbitration practice, having been accepted by states of all regions and legal systems (https://uncitral.un.org/en/texts/arbitration/modellaw/commercial_arbitration). According to the eminent Prof. Roy Goode, "arbitration is a form of dispute resolution in which the parties agree to submit their differences to a third party or a tribunal for binding decisions." (Commercial Law, 3rd ed., 2004, LexisNexis UK and Penguin Books).
4.1 The stock exchange arbitration model
Arbitration tribunals, as institutionalized units of private, non-governmental adjudication, are inherent in self-governing and self-regulating business organizations such as regulated markets for securities and other financial instruments. The most typical representative of these markets is the stock exchange. A stock exchange is a club organization based on the membership of securities traders. It creates and enforces rules that regulate both membership and trade. Disputes are settled by special arbitrators organized in the stock exchange arbitration tribunal (court). Club membership is contractual, and every member must accept and abide by the so-called "arbitration clause". The clause requires any dispute regarding financial instruments trading or club membership to be decided by arbitrators chosen by the parties from a public list. The arbitrators included in that list are persons of high professional and moral standing. The stock exchange itself is not responsible for the arbitration decisions, since it is often involved in the disputes. The costs of the arbitration decisions (awards) are borne by the parties to the dispute. It is also a principle that the dispute settlement rules are created by the stock exchange itself.
4.2 Social media and the arbitration model
Social media is a business and club-like organization (see the opinion of Tom Standage on p. 3) whose rules are binding on the participants in the information forum. In this sense, it can be viewed as an institution similar to a stock exchange. This similarity allows the transposition of the arbitration model to social media and the setting up of such a unit within social media platforms. Exchange underpins the operation of both entities (in the one case an exchange of information and ideas, in the other an exchange of special goods such as securities and financial instruments), and the organization of both is rooted in the principle of membership (acceptance of terms and conditions). Given this similarity, the specific features of the stock market and of social media are no obstacle to the establishment of an arbitration tribunal at social media platforms. Arbitration originated as a mechanism for adjudicating commercial disputes, but stock exchange traders represent many non-commercial persons, and the users of social media services likewise comprise numerous non-commercial persons. In our view, there is no fundamental obstacle to non-traders using this method, provided there is a contractual agreement for its implementation. The platform's terms and conditions can bind users through the incorporation of an arbitration clause.


Through the arbitration procedure, disputes about the content of information on social platforms could be resolved in an impartial and professional manner by unbiased arbitrators selected by the participants themselves. These arbitrators should be recognized media lawyers and professionals of high personal integrity.
The arbitration process for resolving disputes is significantly faster and cheaper than litigation. We quote Prof. Goode again, who stresses that due to its "consensual nature the arbitration mechanism avoids unnecessary delay or expense" (Commercial Law, 3rd ed., pp. 1174–1175).
Arbitration proceedings are in principle single-instance, and only in rare and exceptional cases can a court challenge an arbitration award.
The renowned Professors Loss and Seligman draw attention to the fact that under US securities legislation "courts have limited power to review arbitration awards (at the stock exchanges – B.Z., V.D.) on such grounds as an award being made in 'manifest disregard of the law', or its being 'completely irrational', or 'arbitrary and capricious'. A court can also void an arbitration agreement if it finds that there was fraud in the inducement of the arbitration clause itself." (Loss, L. & Seligman, J., Fundamentals of Securities Regulation, 3rd ed., 1995, Little, Brown and Company, Boston, New York, Toronto, London, p. 1139). The court is therefore not completely excluded from the process: it can intervene to protect the parties' interests in exceptional cases where arbitration threatens the stability of the legal order.
The arbitration settlement of disputes can consolidate the mediating function of social media and liberate platforms from the roles of censor and content controller imposed on them by legislation in some countries.
The adoption of an arbitration clause may restore public trust in social media and their capability to self-regulate. 
The recognition of this method by the nation-states on whose territory social media platforms operate may be accomplished either through the adoption of appropriate legislation or through the conclusion of multilateral international treaties.
The logic of creating and implementing such a model requires, as a first step, that an arbitration unit be established in each nation-state where social media operate. This institutionalization depends on the creation of a representative office in the territory of each such state, within which arbitration units can be set up.


5. Conclusion
The proposed arbitration model for settling disputes at social media platforms offers an approach that assures wide space for the self-regulation of social media. It can better safeguard both freedom of expression and free business initiative. At the same time, the model is also a form of protection for the media against unjustified and arbitrary state regulatory interventionism, which can easily jeopardize freedom of expression and economic freedom.
It is commendable for social media to organize and try out the form of dispute settlement offered here, and to establish and follow good practices in this regard. One should recall that the UN Guiding Principles on Business and Human Rights (2011) require that "business enterprises should establish or participate in effective operational-level grievance mechanisms for individuals and communities who may be adversely impacted." (https://www.ohchr.org/documents/publications/GuidingprinciplesBusinesshr_eN.pdf)
These mechanisms should be people-centred, easy to implement, and capable of generating mutual trust. It is worth remembering the observation of the ECtHR that "the Internet is an information and communication tool particularly distinct from the printed media, especially as regards the capacity to store and transmit information. The electronic network, serving billions of users worldwide, is not and potentially will never be subject to the same regulations and control." (Węgrzynowski and Smolczewski v. Poland (2013) and Editorial Board of Pravoye Delo and Shtekel v. Ukraine (2011)). Therefore stakeholders have to discuss various options.