Online safety in the metaverse: what will platforms need to think about?
The metaverse is an evolving concept with slightly varying definitions. However it is ultimately defined, it is clear that user-to-user interactions will form a key part of whatever develops.
Unfortunately, any platform relying so heavily on user interaction leaves the door open for bad actors to abuse the freedom they have been given to communicate with people they may otherwise not have been able to reach. Although these bad actors may represent only a tiny proportion of overall users, there is still considerable scope for a wide range of harmful behaviours and agendas to be progressed, including (but certainly not limited to) harassment of others, exploitation of the vulnerable and the proliferation of dangerous content. This is not a new issue: today's social platforms operating outside the metaverse are already having to contend with how to keep users safe. But the nature of the metaverse means that the challenge is only likely to grow.
In light of the issues faced by existing online platforms, there is a growing desire among regulators, particularly in Europe, to impose a more formalised framework of responsibility on platform providers to require them to prevent access to harmful user-generated material. Some countries already have relatively robust laws in place to try to do this. For instance, Germany's Network Enforcement Act, or NetzDG, and youth protection laws have led the way in shifting the focus from the original content creator to the online platforms through which the content is shared. These laws oblige platforms within scope to build youth protection mechanisms, such as moderation systems, a means to flag non-compliant content and age gating – please see our separate article on Minors and the Metaverse for further details.
There are, however, proposals for similar regulation across the EU, which are likely to shake things up even more, and which stakeholders in the metaverse will need to monitor carefully.
The EU Digital Services Act
Following various consultations, the European Commission published its proposal for the Digital Services Act (DSA) in December 2020: an EU regulation aiming to harmonise the Union's position on the obligations that should be placed on online intermediary services. The categories of service caught by the DSA are wide, capturing all types of online intermediary, from internet access providers to cloud and web-hosting services to online platforms such as social media services (although the obligations under the DSA vary depending on how a service is categorised).
A major focus of the DSA is illegal content, with online intermediary services being obliged to work with national and administrative authorities to act against this type of content. Certain types of online platforms will need to implement moderation procedures, put in place adequate notice, take-down and redress mechanisms, provide clear transparency information and even proactively report to relevant law enforcement authorities any suspected criminal offences involving a threat to life or the safety of others. Additionally, very large online platforms – those with more than 45 million average monthly active users in the EU – must comply with some particularly onerous risk assessment and auditing duties.
The UK's Online Safety Bill
While the UK will not be subject to the DSA following Brexit, the topic of online safety is still a top priority for the government. Following a white paper, consultation and several government responses, the first draft of the Online Safety Bill was published in May 2021. While, much like the DSA, it aims to impose duties of care relating to illegal content, it extends further than the DSA by looking to impose on platforms responsibility for content that may not itself be illegal, but is of such a nature that there is a material risk that it could be harmful to children (if the service can be accessed by children) or, for the largest "Category 1" providers, harmful to adults.
Among other obligations, in-scope services will have to carry out risk assessments, put in place systems to identify and remove harmful content, set out in their terms of service how users are to be protected from harm and implement effective user reporting and redress systems. The duties of care set out in the Bill will apply to online user-to-user services which allow content to be shared, as well as to most search services, each with some limited exceptions and to the extent that the service in question is UK-facing.
The proposals and the metaverse
Both the DSA and the UK Online Safety Bill are likely to apply to numerous developers and businesses operating in a metaverse where user interaction and collaboration are essential. Importantly, those enabling users to share content with each other should not assume that, simply because the overall metaverse runs on an underlying operating system, the provider of that operating system will be primarily responsible for identifying, dealing with and reporting on illegal or harmful content.
Notwithstanding the complex set-up of the metaverse, it seems very likely that the providers of platforms that allow for user interaction within a part of the metaverse, and which act as intermediaries for the spread of user-generated content, will be within scope. The issue of how to keep end users safe is one that will affect numerous metaverse stakeholders. Given the potential for huge fines – the DSA proposes fines up to 6% of annual income/turnover while the UK's Bill proposes 10% of qualifying worldwide revenue – it is worth these stakeholders taking note.
Although it may be several years before the final provisions of each proposal actually take effect, the complexities of implementing systems that meet the relevant requirements are such that platforms contributing to the development of metaverses – and which allow user-to-user interactions – should start thinking about this now. Scoping exercises should be undertaken to determine which proposed laws may apply (which will depend on the location and demographic of the target audience). It may well be that there are certain “quick win” mechanisms that can be put in place in order to point towards compliance; in particular, ensuring that effective notice and takedown procedures have been implemented will be a sensible starting point and will help to empower users to play their own part in keeping the wider metaverse community safe.
However, these mechanisms would likely need to sit alongside the development of a broader strategy to implement appropriate and proportionate internal structures for how to identify and promptly remove harmful material beyond basic notice and takedown.
"… moderation on a global scale in an open metaverse with boundless UGC presents a wealth of challenges, even more hazardous than those that already exist on an unmoderated World Wide Web. It may not be enough to simply have community moderators; individuals will need to take a more active role than they do now in keeping spaces safe."
Devil in the detail
The devil will be in the detail, insofar as these structures and processes will need to comply with a long list of requirements that may seem relatively subjective, but which are likely to be elaborated on over time by regulator guidance and codes of conduct. But, equally, online platforms operating internationally will want to find a template for dealing with harmful content that ticks the boxes across numerous territories rather than having to implement numerous country-specific processes.
Striking the right balance will be tricky. The earlier that platforms can start treating user safety as a core component when developing metaverse technologies, baking in appropriate safeguards from the outset, the better placed they will be to flex to the specific requirements of these new laws as they reach their final form.
Source: Newzoo Trend Report 2021 - Intro to the Metaverse report.