
The Deep Dark Web: Creating a Safer Digital World in the UK

By: Sarah McVeigh


There is a growing global urgency to address the difficult issues connected with internet use: who determines what is and is not acceptable; how improving technologies can prevent cyber-crime; who should be held responsible for removing harmful online content; and how these matters can be fairly enforced. The regulation of internet use is a complex issue that has attracted much debate in the legal literature on technology law. This broad area of law directly affects the digital economy while intersecting with core freedoms.


Recent legal developments


In April 2019, the UK government published the Online Harms White Paper (the White Paper). Its stated aim was to make the UK the safest place in the world to be online and the best place to start and grow a digital business. With the rise in illegal and harmful content published online, the White Paper addresses the need for a new regulatory framework to improve citizens' online safety. The hope is that this will rebuild public confidence and set clear expectations of companies, allowing citizens to enjoy more safely what online services have to offer. The government's response to the consultation was published in February 2020.


On a wider scale, the EU has a similar focus. The new President of the European Commission has committed to upgrading the EU's liability and safety rules for digital platforms, services and products through a new Digital Services Act (the Act). In April 2020, the Committee on the Internal Market and Consumer Protection (IMCO) published a draft report with detailed recommendations on what the Act should contain. The draft report focuses on fundamental rights, consumer protection and artificial intelligence.


The battle for a safer digital world is not being fought by governments alone; technology platforms are also engaged. Facebook published its own white paper on online content regulation in February 2020. In it, Facebook similarly addressed the need for a new regulatory framework to ensure that companies make decisions about online content in a way that minimises harm while respecting the fundamental right to freedom of expression.

The problem


Defining harmful online content succinctly is difficult because the area is so vast. At one end of the spectrum is criminal material: for example, the live streaming of terrorist atrocities, the prolific volume of child sexual abuse content, or malware coded to be destructive by crashing systems. However, the concept of online harms goes far beyond this. The dangers of disinformation (so-called fake news) apply at both an individual and a national level, with the potential to undermine national values and principles. Fake news is a good example of the difficulties that harmful, as opposed to illegal, content poses. Legal remedies are available for defamatory content, hate speech and incitement to violence, but fake news goes wider than that, covering other deliberate distortions of fact. These are difficult assessments to make, and the different approaches that platforms have taken to fake news illustrate how private companies may not be best placed to decide what is or is not acceptable for society. Detection tools can only go so far. Even natural language processing tools powered by artificial intelligence are not sophisticated enough to determine context and to differentiate between fake facts that are misleading and those that are not, or that may even promote plurality, such as satire. Where detection tools make these assessments and the result is an automated decision, further ethical questions arise.


At present, some platforms engage human fact-checkers to carry out the context assessment. Content whose accuracy is disputed is labelled as such for users, which reduces the propagation of that material. However, this is a subjective assessment, and the sheer volume of traffic through the big social networks means that not all content can be checked. Policies and detection are not perfect, but there are clear benefits. Facebook's white paper reports that, from July to September 2019, the vast majority of harmful content it removed for violating its policies was detected by its technology before anyone reported it. However, because of the lack of regulation and standardisation, approaches differ across platforms. The assessment of harm, and the consequential actions that effectively amount to censorship, are left in the hands of private companies with their own commercial interests.


This has recently been exemplified by COVID-19. During the current pandemic, people have been continually searching for information about the coronavirus. In many cases, they have unfortunately found themselves overwhelmed by fake reports and misinformation, which, for those without the right skills, can be difficult to navigate.

Striking the balance


Conversely, the internet can be a force for good if used well. It is a digital space that facilitates communication and debate, provides entertainment and online education and, as seen throughout this past year, allows individuals to work from home. The current lockdown due to COVID-19 has underlined the importance of online connectivity for business and, indeed, for private life. Essentially, the internet enables freedom of speech, including a free press, and pluralism. The government, with the help of society, needs to tackle the issue of online harms while maintaining the benefits of online platforms.


The current remedy


The law already exists, and solutions are already deployed in some areas. As noted above, the sharing or viewing of images of child sexual abuse is a criminal offence. Neither a gap in the law nor rights to freedom of speech make tackling this type of objectively harmful content difficult. Today, with the help of technology, companies can deploy detection tools to identify such content so that it is taken down and reported to the authorities. The main issue here is funding: the volume of content, and the fact that the authorities are under-resourced to deal with what is reported, make this a complex area to work in. It can be argued that technology companies could do more to prevent such content being made public and available to users. The matter is more complex, however, when human rights are involved. The platforms point out that imposing regulation could drive perpetrators to less robust platforms, and that resorting to pre-screening content engages fundamental rights such as freedom of speech.


Plans for the future


There is obvious urgency to the case that the internet needs more rules and regulation. The White Paper and the Act are steps towards that goal; however, much more needs to be done. Certain features of the new landscape are emerging: regulation and standards; transparency and reporting; and detection and take-downs.


Overall, regulating the use of the internet will involve balancing these features against core rights and freedoms. The big technology companies already have some of these features in place, and there is already law that makes specific harms illegal and sets out liability regimes for content hosts. What will be new is more active engagement and oversight by policy makers and regulators, and the development of standards. It remains to be seen how actively they will assist the platforms in framing the problematic question of what constitutes unacceptable harmful content.
