Sprint Exemplar Projects

GRAIMATTER: Guidelines and Resources for Artificial Intelligence Model Access from Trusted Research Environments

Trusted research environments (TREs) provide a secure location for researchers to analyse data for projects in the public interest – for example, to provide information to the Scientific Advisory Group for Emergencies (SAGE) to fight the COVID-19 pandemic. TRE staff check outputs to prevent disclosure of individuals’ confidential data.

TREs have historically supported only traditional statistical data analysis, and there is an increasing need to also facilitate the training of artificial intelligence (AI) models. AI models have many valuable applications, such as spotting human errors, streamlining processes, helping with repetitive tasks and supporting clinical decision making. The trained models then need to be exported from TREs for use.

The size and complexity of AI models present significant challenges for the output-checking process. Models may also be susceptible to external attacks, in which sophisticated methods reverse engineer the learning process to reveal information about the data used for training, with greater potential for re-identification than conventional statistical methods.
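
To illustrate the kind of leakage at stake, the sketch below shows a simple membership inference test: an attacker who can query an exported model compares its confidence on records it was trained on against records it has never seen, and guesses membership by thresholding that confidence. This is an illustrative example only, not a method or tool produced by GRAIMATTER; the synthetic dataset, model, and threshold are assumptions chosen to make the effect visible.

```python
# Minimal membership inference sketch (illustrative, not GRAIMATTER's method).
# An overfitted model is far more confident on records it memorised during
# training, and an attacker can exploit that gap to infer membership.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for sensitive data held inside a TRE.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Deep, unpruned trees deliberately overfit, so training records leak.
model = RandomForestClassifier(
    n_estimators=100, min_samples_leaf=1, random_state=0
).fit(X_train, y_train)

def true_label_confidence(model, X, y):
    """Probability the model assigns to each record's true label."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

conf_members = true_label_confidence(model, X_train, y_train)
conf_nonmembers = true_label_confidence(model, X_test, y_test)

# Attacker's rule: guess "member of the training set" above a threshold.
threshold = 0.9
tpr = (conf_members > threshold).mean()     # members correctly flagged
fpr = (conf_nonmembers > threshold).mean()  # non-members wrongly flagged
print(f"mean confidence on members:     {conf_members.mean():.2f}")
print(f"mean confidence on non-members: {conf_nonmembers.mean():.2f}")
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f} (a large gap signals leakage)")
```

A gap of this kind is invisible when a TRE output checker inspects the model file itself, which is why the project focused on tools and guidance for assessing trained models before release.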

With input from public representatives, GRAIMATTER evaluated a range of tools and methods to help TREs screen outputs from AI methods for potentially identifiable information, investigated the associated legal and ethical implications and controls, and produced a set of guidelines and recommendations to support all TREs in applying export controls to AI algorithms.

Principal investigator: Professor Emily Jefferson, University of Dundee

Funded amount: £315,488
