In the third instalment of our blog series showcasing the DARE UK Sprint Exemplar Projects, Professor Michael Boniface and Dr Laura Carmichael from the University of Southampton discuss lessons learnt from the Privacy Risk Assessment Methodology (PRiAM) project.
Motivation
“The failure to adequately address privacy risks may damage trust and limit the realisation of the benefits that can be delivered by data-enabled technologies” — The Royal Society
Organisations making decisions about sharing or providing access to sensitive data must weigh up the benefits of proposed research against the possible risks involved. There’s much to consider, such as evaluating public benefits and ensuring that appropriate protections are in place so that individuals, groups and communities of people, and wider society are not put at risk of undue harm. It’s clear that privacy risk assessment has an essential function in such decision-making processes. Yet, in practice, current approaches for assessing privacy risk are often ad hoc, manual, opaque, and inconsistent across different organisations — or even between different individuals within the same organisation.
Researchers are increasingly using advanced analysis methods (artificial intelligence and machine learning) to discover value in big datasets drawn from diverse sources. There is therefore a pressing need to explore ways to standardise privacy risk assessment for research involving cross-domain access and linkage of sensitive data across multiple organisations.
Our aim
Against this backdrop, the DARE UK Privacy Risk Assessment Methodology (PRiAM) project team set out to explore how privacy risk assessment methods and tools can be used to support organisations in making consistent and transparent data access, sharing, and linkage decisions.
What we did
(1) We spoke with domain experts
A central focus for Phase 1 of the DARE UK programme is “exploring stakeholder experiences and challenges from existing infrastructure”. We wanted to gain a deeper understanding of the types of risk factors, controls, and decisions that are fundamental to privacy risk assessment in practice. To do this, we set up an Advisory Board of 21 domain experts, including information governance practitioners, practitioners running or developing secure research facilities, legal professionals, oversight bodies and academic experts, and carried out semi-structured interviews with them (see our D2 Deliverable Report for more information).
(2) We engaged with the public
To help us consider how privacy risk assessment should be meaningfully communicated to the general public, we sought to find out more about people’s privacy concerns, as well as their beliefs about their own ability to manage their privacy. To achieve this, we established a Privacy Risk Assessment Forum, involving four virtual workshops with a group of 10 individuals with a particular interest in privacy risk assessment and the use of health data for research. Through these workshops, participants helped us to develop a privacy attitudes questionnaire, which was distributed to the general public and received 500 responses (see our D4 Deliverable Report for more information).
(3) We developed a prototype privacy risk assessment framework
To help data custodians improve the transparency and consistency of their data sharing decisions, we developed a prototype privacy risk assessment framework built on the widely used Five Safes approach. The main idea is to enable data custodians to explicitly list the criteria they consider when assessing privacy risk, thereby enhancing transparency. These criteria are then used to categorise different data sharing scenarios into discrete tiers of risk, which in turn can be tied to decisions around data sharing, so providing consistency in decision-making (see our D2 Deliverable Report for more information).
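The framework itself is specified in our D2 report; purely as a rough illustration of the tiering idea, the short Python sketch below maps hypothetical Five Safes-style criteria scores for a data sharing scenario onto a discrete risk tier. The criteria names, the 0–3 scales, the tier labels, and the “weakest dimension wins” rule are all assumptions made for this example, not the framework’s actual definitions.

```python
from dataclasses import dataclass

# Hypothetical criteria inspired by the Five Safes dimensions; the real
# framework defines its own criteria and scales (see the D2 report).
@dataclass
class Scenario:
    safe_people: int    # 0 (unknown requester) .. 3 (accredited researcher)
    safe_projects: int  # 0 (no approval) .. 3 (approved public-benefit project)
    safe_settings: int  # 0 (open download) .. 3 (accredited TRE)
    safe_data: int      # 0 (identifiable) .. 3 (anonymised)
    safe_outputs: int   # 0 (unchecked) .. 3 (independent disclosure check)

def risk_tier(s: Scenario) -> str:
    """Map explicit criteria scores onto a discrete risk tier.

    Taking the weakest dimension keeps the assessment conservative:
    one poorly controlled dimension determines the overall tier.
    """
    weakest = min(s.safe_people, s.safe_projects, s.safe_settings,
                  s.safe_data, s.safe_outputs)
    return {0: "Tier 4: do not share",
            1: "Tier 3: share only with additional controls",
            2: "Tier 2: share under standard agreement",
            3: "Tier 1: share"}[weakest]

# Example: a strong scenario overall, but outputs are weakly checked.
print(risk_tier(Scenario(3, 3, 3, 3, 1)))
# -> Tier 3: share only with additional controls
```

Because every criterion and its score are written down explicitly, two custodians assessing the same scenario can see exactly where their judgements differ, which is the transparency and consistency the framework aims for.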
(4) We devised a proof of principle for an automated risk modelling approach
To support data governance practitioners in exploring threats, risks, and consequences in a transparent, repeatable, and efficient way, we devised a proof of principle for an automated privacy risk assessment approach using a specific tool: a System Security Modelling (SSM) platform that follows the ISO 27005 risk assessment process. To do this, we augmented a pre-existing SSM cybersecurity knowledgebase with privacy risk factors. Put simply, a data governance practitioner could use an SSM platform to build a system model for a particular research scenario, find threats and their consequences, calculate risks, select controls, and output the results (see our D3 Deliverable Report for more information).
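To make that sequence concrete, here is a minimal sketch of the same ISO 27005-style workflow: model the system, match assets against a threat knowledgebase, calculate risks, and pick out the threats that need controls. The class and method names here are hypothetical and do not reflect the SSM platform’s actual API; risk is computed as likelihood × impact, one common ISO 27005 presentation.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def risk(self) -> int:
        # A simple likelihood x impact risk score.
        return self.likelihood * self.impact

@dataclass
class SystemModel:
    assets: list[str]  # e.g. datasets, workspaces, data flows
    threats: list[Threat] = field(default_factory=list)

    def find_threats(self, knowledgebase: dict[str, Threat]) -> None:
        # Match modelled assets against a knowledgebase of threat
        # patterns (here, a simple lookup keyed by asset name).
        self.threats = [knowledgebase[a] for a in self.assets if a in knowledgebase]

    def select_controls(self, tolerance: int) -> list[str]:
        # Flag threats whose risk exceeds the custodian's tolerance,
        # i.e. those that need controls (such as PETs) attached.
        return [t.name for t in self.threats if t.risk > tolerance]

# Example: a data linkage scenario with two modelled assets.
kb = {
    "linked dataset": Threat("re-identification via linkage", likelihood=3, impact=4),
    "analyst workspace": Threat("unauthorised data export", likelihood=2, impact=5),
}
model = SystemModel(assets=["linked dataset", "analyst workspace"])
model.find_threats(kb)
for t in model.threats:
    print(f"{t.name}: risk = {t.risk}")
print("Threats needing controls:", model.select_controls(tolerance=9))
```

The value of automating this loop is repeatability: rerunning the same system model against an updated knowledgebase immediately surfaces newly recognised privacy threats, rather than relying on each practitioner to spot them manually.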
(5) We outlined real-world examples
One of the aims of the DARE UK Sprint programme has been to “uncover and test early thinking around use cases” in delivering a coordinated national data research infrastructure. Our research efforts were therefore driven by three use cases related to public health and integrated care: two research projects, in the areas of complex hospital discharge and multi-morbidity prevention, and a sub-national federated trusted research environment (TRE) ecosystem pilot (see our D1 Deliverable Report for more information).
Three lessons we learnt from the project
(1) Continue to build community expertise in analysis of risk factors and curation of privacy knowledge in both human and machine-readable formats
New and innovative infrastructure for cross-domain access and re-use of sensitive data must embrace privacy-by-design and privacy-by-default principles. Community expertise and shared knowledge are vital to support data governance practitioners in addressing a wide range of existing and emerging privacy risk factors, including through the implementation of privacy controls such as privacy-enhancing technologies (PETs). Curating that knowledge in both human- and machine-readable formats also enables the development of transparent, repeatable and automated privacy risk assessment processes.
(2) Standardised privacy risk assessment approaches should support organisations to understand and resolve their differences, and promote collaboration
Standardised privacy risk assessment cannot by itself address all the challenges facing cross-domain access and linkage of sensitive data, such as interoperability concerns arising from differences in infrastructure maturity and risk tolerance. What it can do is support conversations between stakeholders, helping them to understand and resolve those differences and to build safe collaborations for linking and sharing data.
(3) Meaningful public engagement with privacy risk assessment remains essential, and careful consideration must be given to how such engagement can be best encouraged and supported
For instance, to motivate such engagement, organisations need to ensure that participants do not feel overwhelmed by the effort required, and instead feel a sense of belonging, alongside other users and the service provider, in the ongoing management of privacy.
About the project team
Professor Michael Boniface, University of Southampton, led the research project, which involved three partner organisations: the University of Southampton, the University of Warwick and Privitar Ltd.
Find out more about PRiAM and access the final project reports.