In the sixth post in our blog series showcasing the DARE UK Sprint Exemplar Projects, Smarti Reel, Christian Cole and Emily Jefferson discuss recommendations for the safe use of artificial intelligence models in trusted research environments.

Artificial intelligence (AI) has the potential to transform processes, making them safer and more efficient for the benefit of society. However, the training and deployment of AI models that require access to personal data have been limited and remain in their relative infancy. For the best accuracy, AI models need access to the most relevant data; however, trained AI models carry a higher risk of encoding personal data, and of enabling its re-identification, than classical statistical analysis. Data controllers managing access to personal data need to be able to control these AI risks.

The GRAIMATTER project has developed a set of recommendations which will support trusted research environments (TREs) to develop the capability to securely support AI projects, paving the way for AI models to be trained at scale on relevant data.

Trusted research environments

Trusted research environments (TREs) provide approved researchers with secure access to linked, de-identified data for public-benefit research projects. The environments are deemed trusted because they comply with the Five Safes, ensuring that no files or data can be exported without first being checked by TRE staff, and that no personal or potentially identifiable data can be removed or copied by researchers. This process is called disclosure control.

Into the world of machine learning

Most projects supported by TREs involve the statistical analysis of personal data across a range of sectors (for example, health, policing, tax and education). TREs have therefore focused their disclosure control on aggregate, summary-level outputs, such as a trend or a graph, as required for most research publications.

However, the emerging field of AI, with applications including spotting human errors, streamlining processes, automating tasks and supporting decisions, has led to an increased desire from academia and industry to train AI models in TREs. These complex models require access to data to learn the most relevant patterns, which increases the possibility that personal data can be disclosed from the trained models. Releasing trained AI models from TREs therefore introduces an additional risk of disclosing personal data, including special category data under data protection laws, such as racial or ethnic origin, genetic, biometric, or health data. To meet legal requirements for data protection and ethics standards on fairness, accountability and transparency, particular care is needed for the safe release of such models.

Risks of training AI models

The size and complexity of AI models present three significant challenges for the TRE’s traditional disclosure-checking process:

  • First, even models that are ‘simple’ in AI terms are usually too big for a person to view easily.
  • Second, our research shows that a person can’t say whether a model is disclosive simply by eyeballing it.
  • Third, and most significantly, AI models may be susceptible to external attacks using methods that can ‘trick’ the model into revealing information about the data used for training; a minimal illustration follows this list. These attacks can have greater potential to re-identify personal data than attacks on conventional statistical outputs. This means that AI models trained on TRE data may themselves be considered personal data and therefore fall under data protection laws.
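
To make the third risk concrete, the sketch below shows a simple, confidence-based membership inference test of the kind such attacks build on. It uses synthetic data and an ordinary scikit-learn model purely for illustration; it is not the GRAIMATTER tooling, and the data, the model choice and the threshold logic are all assumptions.

```python
# Minimal sketch of a confidence-based membership inference test.
# Synthetic data and an off-the-shelf model are used for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for sensitive, record-level data held inside a TRE.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# An over-fitted model memorises its training records more strongly.
model = RandomForestClassifier(n_estimators=100, min_samples_leaf=1, random_state=0)
model.fit(X_train, y_train)

# Attacker's signal: the confidence the model assigns to each record's true label.
conf_members = model.predict_proba(X_train)[np.arange(len(y_train)), y_train]
conf_nonmembers = model.predict_proba(X_out)[np.arange(len(y_out)), y_out]

print(f"mean confidence on training records: {conf_members.mean():.3f}")
print(f"mean confidence on unseen records:   {conf_nonmembers.mean():.3f}")
# A large gap means a simple threshold could guess, better than chance,
# whether a given record was in the training set.
```

If the model assigns noticeably higher confidence to records it was trained on than to unseen records, an attacker with query access could guess membership better than chance, which is exactly the kind of leakage a TRE’s disclosure checks need to catch.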

TREs do not yet have mature processes and controls to manage these risks. This is a complex topic, and it is unreasonable to expect every TRE to be aware of all the risks, or to expect that researchers have received AI-specific training that addresses them. The combination of, on the one hand, growing demand and potential benefits and, on the other, the significant challenges presented by using AI, creates a need to develop disclosure-checking solutions specifically targeted at AI models.

The GRAIMATTER Project

GRAIMATTER stands for Guidelines and Resources for AI Model Access from TrusTEd Research environments. The GRAIMATTER team has developed a set of usable recommendations for TREs to guard against the additional risks when disclosing trained AI models from TREs. Our interdisciplinary team involved experts from technical, legal and ethical domains and those with experience in running health data TREs.

The project evaluated a range of tools and methods to support TREs in assessing the outputs of AI methods for the presence of personal data. We also investigated legal and ethical implications and controls. The views of the public were sought through a series of five workshops and through input from two lay co-leads within the core team.

Challenges

GRAIMATTER was a challenging project to deliver within eight months. It was highly interdisciplinary, and developing a shared understanding across the team took time. Developing real-world examples for non-technical team members and public representatives helped to address this challenge.

Although there has been much research on the general risks of personal data being encoded within trained AI models, very little research had previously been carried out in the context of TREs. We therefore needed to run many different experiments to be confident in our recommendations.

Recommendations

At a high level, our extensive green paper recommends that:

  • new information is included in data governance and ethics applications so that the risks and controls associated with training AI models can be adequately assessed;
  • staff within TREs run attack simulations against the trained AI models to be released from the TRE, mimicking the steps of an external hacker;
  • researchers use restricted versions of AI training software that limit the settings they can employ, reducing the chance of accidentally encoding record-level personal data (a minimal sketch of this idea follows this list);
  • some models might not be safe to release and other controls should be considered for those models; and
  • new training courses are developed and attended by researchers, TRE staff and data governance/ethics boards.
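
To illustrate the third recommendation, the sketch below shows one way a TRE-supplied wrapper could constrain risky training settings before a model is fitted. It is a hypothetical example, not the GRAIMATTER tooling published on GitHub: the class name, the choice of hyperparameter and the policy floor of five records per leaf are all illustrative assumptions.

```python
# Hypothetical sketch of a "restricted" training class a TRE might supply.
from sklearn.tree import DecisionTreeClassifier

class SafeDecisionTreeClassifier(DecisionTreeClassifier):
    """Decision tree that refuses settings likely to memorise individual records."""

    MIN_SAMPLES_LEAF = 5  # illustrative floor; a real TRE would set its own policy

    def fit(self, X, y, **kwargs):
        # Leaves holding a single record can effectively encode that record,
        # so block configurations below the agreed floor before training starts.
        if self.min_samples_leaf < self.MIN_SAMPLES_LEAF:
            raise ValueError(
                f"min_samples_leaf={self.min_samples_leaf} is below the "
                f"TRE policy floor of {self.MIN_SAMPLES_LEAF}"
            )
        return super().fit(X, y, **kwargs)
```

A wrapper like this does not, by itself, make a model safe to release, but it reduces the chance of a researcher accidentally training a model that memorises individual records, and it gives TRE staff a clearer baseline for the attack simulations recommended above.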

We developed a range of software tools to support researchers and TREs, which can be found on GitHub.

Find out more about the GRAIMATTER project and access the final report and recommendations, including a public summary.