
Monday, April 6, 2020

Cyber-criminals


Every year, consumers greet the forecasts of Internet and cybercrime experts with skepticism: a digital apocalypse seems distant, while the chances of staying safe online seem real enough. Yet almost all of these forecasts eventually come true. In the near future the main problem for humanity will be deepfake technology. On January 7, 2020, CNN reported that Facebook had taken steps to curb deepfakes. The social network declared it would remove videos that have been manipulated using artificial intelligence or machine learning to make it appear that the subject said words they did not. The measure fell far short of a comprehensive ban on deepfakes, carving out exemptions for parody and satire and for videos that have been "edited solely to omit or change the order of words." Specialists generally welcomed Facebook's attempt to create a coherent policy on misinformation, but some criticized the new policy as "too narrowly construed." It remained unclear why the company had chosen to focus only on deepfakes and not on the broader issue of intentionally misleading videos.
This news revived the broader discussion about deepfake technology and its deleterious effects on society. According to experts, the algorithms previously used for fake pornographic videos can not only ruin politicians' reputations but even lead to a world war. Several years ago, enthusiasts tried to create a video of Donald Trump and Vladimir Putin presenting plans for armed attacks. Today the problem is compounded by the fact that almost anyone can prepare such a video: ready-made online services create fakes on demand, and their mechanisms are constantly being refined to produce realistic video and audio sequences.
The technology can be used for pseudo-documentary videos that attribute non-existent speeches or even actions to politicians. It was first widely discussed in April 2018, when American comedian Jordan Peele created a video entitled "You Won't Believe What Obama Says in This Video!" In it the actor, who is also a screenwriter, "forces" the former US president to insult Donald Trump.
"Dangerous times have come. In the future, we need to be more vigilant about what we can trust on the web. This is a time when we have to rely on reputable news sources we are confident in" Peele concludes speaking through Obama's image. Deepfake algorithms are likely to be able to replicate the fate of fakenews which is highly popular across social networks.
Western experts fear that fake videos may play an important role in the upcoming US election.
"Deepfake can be of interest to anyone who has access to a computer and a network and who is interested in influencing elections" John Villasenor, professor at the University of California Los Angeles, argues. According to him, deep fake technology will become a powerful tool in the hands of those who want to misinform the masses and directly influence the political process. To do this, you don't even need to hack "smart" voting machines (though recent studies show that they are quite vulnerable). Experts believe that such content has two possible conversions. Firstly, a falsely crafted fake can convince a huge mass of online users (including voters) that a fictitious event is real. A similar incident happened in May 2019 when a video changing the speech of House Speaker Nancy Pelosi in a manner that made her look confused and inappropriate for the post appeared on Facebook. Even Trump's personal lawyer, Rudolf Giuliani, did not notice any falsification in this video published on the Web.
The second, even more dangerous consequence of the spread of counterfeits is audience fatigue. If a large number of videos aimed at undermining the reputation of particular politicians or public figures appear on the web, most users will find it exhausting to separate truth from lies: they will grow more cynical and apathetic, and as a result they may stop voting or become permanently disillusioned with politics.
The same thing can happen to any phenomenon once it is buried in forgeries. According to a study by the renowned Pew Research Center, 68% of Americans believe that false content undermines the confidence of fellow citizens, and 54% believe that fakes completely destroy people's trust. Half of the respondents consider fake news a major problem of the modern world, ranking it ahead of racism, illegal immigration, terrorism and sexism.
Some states have already passed laws prohibiting the use of deepfake technology to interfere with elections. In June 2019, a bill was introduced in the US Congress to combat the spread of misinformation by restricting deepfake technology. The dangers posed by deepfakes go beyond any country's domestic politics: they can also surface in international relations, accusing foreign leaders of things they never did or depicting military operations that never happened.
A recent report by the Center for Business and Human Rights at New York University's Stern School of Business warns that such attacks can be expected in 2020 from China, Iran and Russia. According to the analysts, instead of hacking networks, cyberwarfare increasingly aims at conquering people's minds on social media, using the authority of likes to push powerful lies to the masses.
Such videos can now be created freely and with impunity, while their legitimacy is questioned only in policy discussions. Moreover, it is almost impossible to ban this method of content creation outright: it is actively used in entertainment and has great potential in the film industry, for instance. Analysts suggest that in the future deepfake videos will simply be labeled in a special way.
Deepfake technology has another dangerous feature: it can help attackers deceive face recognition systems, and no one yet knows how to deal with this problem. Facebook engineers are working on the reverse task: they have trained artificial intelligence to fool such systems and plan to use this method to detect counterfeits.
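To make the detection side of this concrete, here is a minimal sketch of how a real-versus-fake face classifier might be trained. It is only an illustration, not Facebook's actual method, and it assumes a hypothetical folder of labelled face crops (data/real and data/fake).

# Minimal sketch of a real-vs-fake face classifier (illustrative only).
# Assumes a hypothetical folder layout: data/real and data/fake,
# each containing labelled face crops.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder treats each subdirectory ("fake", "real") as one class label.
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# A standard convolutional backbone with a two-way output: real or fake.
model = models.resnet18(num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a handful of epochs, purely for demonstration
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")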
Another negative factor is the changing market for professional hackers. Back in 2018, McAfee officials suggested that cybercriminals would increasingly cooperate with one another to make attacks more powerful and dangerous. Their prediction has been borne out: criminals have recently launched successful targeted attacks using already compromised computers. First, one group infects a large number of machines with malware (or configures remote access protocols on them); then another group uses that foothold to spread ransomware around the world.
Some time ago Europol published a report on organized cybercrime stating that ransomware would become a major threat in 2019 for both ordinary users and entire countries. This trend will most likely continue in 2020, and the criminal market will keep growing. Among hackers, demand for corporate networks already compromised by "colleagues" has risen dramatically: one group penetrates a system in a single move and then sells access to it. Often a hacker group receives an "order" for a particular network. This scheme is called access-as-a-service: hackers either sell or lease access to infected infrastructure.
Analysts expect that as attacks grow more elaborate, they will turn into multistage extortion campaigns. First, victims will be hit by ransomware and the fraudsters will demand a ransom for the captured data. Paying, however, will not end the ordeal: the criminals will then blackmail their victims, demanding further payment for not disclosing the confidential information obtained during the attack.
Another fruitful area for hackers is industrial espionage. The attacks of 2019 and the unprecedented, rapid adoption of complex networks have shown that most public and private companies are likely to remain compromised for a long time. In the past, hackers lurked and waited for the right moment to strike in order to gain access to large networks and systems. In recent years the situation has become more complicated for both sides as corporate infrastructure moves to the cloud. This shift blurs the boundaries: it is now quite difficult for cybercriminals to carry out a targeted attack, and at the same time it is very hard for companies, if an attack does occur, to recognize it early and investigate the incident.
Such attacks began to cost criminals far more, so they eventually turned to social engineering. This includes phishing attacks against company employees and recruiting insiders within the organization. Insiders can be attracted in several ways: posting an ad on an underground forum promising a reward for confidential information, blackmailing a current employee into providing data or services, or simply deceiving them. For example, fraudsters may offer a corporate employee simple side work that involves using or passing on sensitive information, whether financial or personal. In such cases the criminals will likely assure their new "agent" that no confidentiality requirements are being violated.
Phishing will remain one of the most effective tools for hackers in the years to come: studies indicate that at least one in every 99 email messages is malicious. The victims are not only employees of large companies targeted by online criminals but also ordinary network users. Experts urge consumers to be vigilant and not to trust unfamiliar senders.
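As a toy illustration of the kind of vigilance experts recommend, the sketch below flags sender addresses whose domains closely resemble, but do not exactly match, a short list of trusted domains. The trusted list and the similarity threshold are hypothetical choices for demonstration, not a real mail filter.

# Toy heuristic: flag sender domains that look like, but are not, trusted ones.
# The trusted list and the 0.8 similarity threshold are illustrative choices.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = ["paypal.com", "google.com", "mycorp.com"]  # hypothetical list

def looks_like_phishing(sender: str, threshold: float = 0.8) -> bool:
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match with a trusted domain
    # A near-miss such as "paypa1.com" is suspicious: similar but not identical.
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(looks_like_phishing("billing@paypa1.com"))   # True: lookalike domain
print(looks_like_phishing("support@paypal.com"))   # False: exact trusted match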
Another consumer vulnerability that criminals can exploit is mobile applications: from malicious banking apps that steal a victim's funds to fake miners and clickers that secretly consume the resources of a hijacked device. All of these will remain active in 2020. Data leaks are also becoming more frequent: not only freshly stolen databases but also compilations of previously obtained data are sold on the black market. Users' habit of reusing passwords contributes heavily to this: such accounts are far more vulnerable to hacking. Cybercriminals use lists of compromised usernames, email addresses and passwords from earlier breaches to gain access to other sites, so password reuse automatically puts the user at risk.
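One concrete way to see the danger of password reuse is to check whether a password already appears in known breach compilations. The sketch below queries the public Pwned Passwords range API using its k-anonymity scheme, so only the first five characters of the password's SHA-1 hash ever leave the machine; it illustrates the idea rather than offering a complete defense against credential stuffing.

# Sketch: check whether a password appears in known breach dumps via the
# k-anonymity "range" endpoint of the Pwned Passwords service.
import hashlib
import requests  # third-party: pip install requests

def breach_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the 5-character hash prefix is sent; matching is done locally.
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Each response line has the form "HASH_SUFFIX:COUNT".
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    n = breach_count("password123")
    print(f"seen in breaches {n} times" if n else "not found in known breaches")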
Artificial intelligence (AI) is penetrating a wide range of technological fields and greatly concerns cybersecurity researchers, who believe that attacks using AI are only a matter of time. "If we don't see this before New Year's Day, then beyond any doubt the first cyber-attack using artificial intelligence will happen in 2020," predicts Marcus Fowler, director of strategic threats at Darktrace. According to him, adaptable, constantly learning software designed to remain unnoticed for as long as possible will soon demonstrate its potential and set off a race for cyber weapons. The rollout of 5G will only fuel this growth: the number of automated attacks will increase and the speed of their deployment will soar. With wireless speeds up to 100 times faster than 4G, malware can download and spread across a victim's networks so quickly that no one realizes anything has gone wrong. In addition, 5G networks themselves are designed in ways that can enable new types and methods of attack.
Experts are confident that the risks will continue to grow exponentially: the more there are, the faster and stronger the growth. Security teams will need to update security standards and devise new ways to block threats; if this does not happen, the vast majority of companies and consumers will soon find themselves at the mercy of cybercriminals.
Compiled by Media 21 Foundation (2020) from  
Media 21 Foundation expresses its gratitude to the experts from MTITC of Bulgaria for the preparation of the first version of this material.
