News
August 2, 2022

The GRAIMatter Sprint Exemplar Project team want to hear your views

The team has developed a draft set of usable recommendations for trusted research environments (TREs) to guard against the additional risks that arise when trained artificial intelligence (AI) models are disclosed from TREs.

Please note this call for views has now closed.

The GRAIMatter Sprint Exemplar Project team are seeking feedback from the community on their ‘Recommendations for disclosure control of trained Machine Learning (ML) models from Trusted Research Environments (TREs)’ as part of an open consultation period ending on Monday 15 August 2022. They would like to hear from those who run TREs, researchers, data governance teams, ethics and legal teams, artificial intelligence/machine learning experts and data privacy experts. The document can be accessed via Zenodo.

Any feedback can be provided to Smarti Reel (sreel@dundee.ac.uk) and Emily Jefferson (erjefferson@dundee.ac.uk) via email by Monday 15 August 2022.

Background

Trusted research environments (TREs) are widely and increasingly used to support statistical analysis of sensitive data across a range of sectors including health, policing, tax and education. They enable secure and transparent research whilst protecting data confidentiality.

There is an increasing desire from academia and industry to train artificial intelligence (AI) models in TREs. The field of AI is developing quickly, with applications including spotting human errors, streamlining processes, automating tasks and supporting decisions. Compared with traditional statistical outputs, trained AI models require more information to describe and reproduce, which increases the possibility that personal data could be inferred from them. TREs do not currently have mature processes and controls to guard against these risks. This is a complex topic, and it is unreasonable to expect all TREs to be aware of every risk, or to expect that TRE researchers have received AI-specific training covering them.

Access the draft document