Data privacy, handling data and artificial intelligence
Artificial intelligence (AI), machine learning and algorithms will be central to the metaverse, organising and manipulating the unimaginably vast quantities of data which will underpin it. We see this already in some of the existing technologies that will be incorporated into the metaverse, such as virtual reality, internet search engines, biometric identification systems, and natural language processing.
Data is inextricably bound up with AI, as the raw material on which AI systems are trained and the fodder that feeds their day-to-day operations. Without data, there is no AI. Access to individual users' personal data will be crucial for the personalisation of content, a fundamentally important aspect of an immersive metaverse. Ownership of data or, more accurately, the rights to use data and to prevent its use are, therefore, hugely important.
Crucial areas of law that already apply are intellectual property (IP), data protection, and contract law.
Protection for AI
IP protection is available for aspects of AI such as models and algorithms, the software in which they are embodied, and model training and optimisation strategies. Copyright is a key IP right, covering the particular form of expression of these items and their associated documentation. It has the advantage of arising automatically (without requiring an application to the authorities) but only gives protection where the particular work in question has been accessed and copied.
Confidential information and trade secrets law also provides important protection that arises without the need to register – though the protection is, of course, limited by the extent to which information can be kept confidential in practice. Patents are of growing importance for AI, and there have already been many thousands of AI-related patent applications. Patents have the advantage of protecting the inventive concept of these developments (so are enforceable regardless of whether the third party has had access to or is even aware of the inventive development), but entail a formal application and examination by the authorities, so they can be difficult and costly to obtain, maintain and effectively enforce.
The legal status of data itself can be confusing, and it is often said that pure information cannot be "owned". In fact, although coverage is patchy, IP law can help to secure the value of data. Collections of data might well have database right protection and will often be protected under trade secret or confidential information laws.
Plan and prepare
With any project involving the creation of data or systems, the key is to set it up so as to optimise the position from the outset: keep a record of the data and systems acquired or created, showing when each was created or acquired and by whom, and document and enforce access restrictions and any licences granted. Planning and preparation reduce the risk of being unable to use and exploit valuable IP assets fully.
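The kind of record-keeping described above can be as simple as a dated register of assets. The sketch below is purely illustrative: the field names and structure are assumptions, not a legal or industry standard for IP record-keeping.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative only: field names and structure are assumptions,
# not a prescribed format for documenting IP assets.
@dataclass
class AssetRecord:
    name: str                 # dataset or system identifier
    origin: str               # "created" or "acquired"
    created_by: str           # person or entity responsible
    recorded_at: str          # ISO timestamp of creation/acquisition
    licences: list = field(default_factory=list)   # licences granted
    access: list = field(default_factory=list)     # who may access it

def record_asset(name, origin, created_by, licences=None, access=None):
    """Create a dated record for a newly created or acquired asset."""
    return AssetRecord(
        name=name,
        origin=origin,
        created_by=created_by,
        recorded_at=datetime.now(timezone.utc).isoformat(),
        licences=licences or [],
        access=access or [],
    )

rec = record_asset("user-interaction-logs", "created", "Data Team",
                   licences=["internal research only"],
                   access=["data-science group"])
```

Even a lightweight register like this, kept consistently from the start of a project, provides the evidence of provenance and permitted use that the paragraph above recommends.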
Any data that concerns an identifiable individual must be used in accordance with data protection laws, such as the UK Data Protection Act and the General Data Protection Regulation, both UK and EU versions. Individuals must be given specified information about this processing, in a user-friendly form. Some especially sensitive categories of personal data (such as biometrics and health data) may, in practice, only be usable in the metaverse context with express consent. On the other hand, some types of data can be collected or processed in a way that avoids it being classed as personal, placing it outside the reach of the legislation (for example, anonymising data by aggregating it so that it ceases to be traceable to an individual).
Some especially thorny legal issues will arise where "black box" AI systems automatically make decisions based on their analysis of huge databases of individual users' interactions with the virtual environment. For example, an AI may act on types of personal data (for example, about a user's mental or physical characteristics) which it has not been fed, but has itself inferred. Will an AI system's operators be able to explain its data-processing activities to the degree required for data-protection compliance, when they might not fully understand how or why it is taking certain courses of action?
The role of contracts
Contractual arrangements are fundamental to maximising the value and utility of AI systems and data, especially given the uncertainties in the scope of IP protection and the ways in which it may develop in future. The right contractual frameworks can ensure that, to the extent that there are IP rights in data or AI systems, ownership is as far as possible established and will not be contested, and that the owner can control who else can use them. Or, if the IP rights are to be owned by someone else, the agreement can be designed to ensure that the necessary use rights are granted, including the ability to pass data, or rights to use systems, on to others if required. As well as protecting the IP position, contracts are also important for complying with data protection legislation.
Those who do not consider these issues at an early stage of AI projects will risk being saddled with contractual relationships that predate any detailed consideration of AI and the ways in which it might be used. For example, in relation to data, legacy contractual frameworks may be silent, inappropriately restrictive or deal only with personal data compliance.
More specific AI and data regulation is being considered in both the EU (the currently proposed AI Act and Data Governance Act) and the UK (the National Data Strategy and National AI Strategy), although the UK government's favoured approach appears to be to subject AI to sector-specific regulation rather than introducing a cross-cutting general AI statute.
The proposed EU AI Act (in whatever form it ends up) is likely to have a significant impact on the metaverse. It proposes different levels of regulation depending on the perceived risks posed by types of AI. Interestingly, some uses of AI relevant to a metaverse may well fall into the unacceptable or high-risk/high-regulation bracket if they amount to subliminal, manipulative or exploitative techniques that cause harm, or involve automated face recognition or other biometrics. Harm is, of course, envisaged as being "real world" harm, but that might include, for example, addictive and compulsive behaviour and other mental health issues already of concern in the gaming world.
Despite Brexit, EU AI and data law will have implications for metaverse developers, operators and users in the UK for several reasons. First, UK-based operators may be directly subject to EU regulation via provisions with extraterritorial reach where EU citizens are affected. Second, the existence of EU AI and data regulation may well influence the development of UK law in this area. And, of course, it is often simply more practical to ensure that products comply with both EU and UK regulations, avoiding the need for parallel versions of systems.