
Developed in partnership with the AAIP, the Alan Turing Institute, and the Responsible Technology Adoption Unit in the Department for Science, Innovation and Technology, the Trustworthy and Ethical Assurance of Digital Health and Healthcare (TEA) platform is an open-source tool that supports the process of developing and communicating structured assurance arguments, showing how machine learning or AI systems meet ethical principles and best practices.
The TEA platform enables teams to develop assurance cases, including those used to justify claims about the fairness of digital healthcare systems. This, in turn, can help foster community engagement and sustainable practices within teams, build trust and transparency among stakeholders and users, and support ongoing regulation and policy work in the healthcare domain.
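To make the idea of a structured assurance case concrete, the following is a minimal, illustrative sketch rather than the TEA platform's actual data model or API: it assumes a simple goal–claim–evidence structure, with hypothetical class and field names, to show how a top-level fairness goal for a digital healthcare system might be broken into supporting claims backed by evidence.

```python
# Illustrative sketch only: a hypothetical goal-claim-evidence structure,
# not the TEA platform's actual data model or API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Evidence:
    """An artefact (e.g. an audit report or test result) that supports a claim."""
    description: str


@dataclass
class Claim:
    """A property claim that helps establish the top-level goal."""
    statement: str
    evidence: List[Evidence] = field(default_factory=list)


@dataclass
class AssuranceCase:
    """A top-level goal claim together with the claims that justify it."""
    goal: str
    claims: List[Claim] = field(default_factory=list)


# Example: justifying a fairness claim about a digital healthcare system.
case = AssuranceCase(
    goal="The triage model treats patient groups equitably.",
    claims=[
        Claim(
            statement="Model performance is comparable across demographic groups.",
            evidence=[Evidence("Subgroup performance evaluation report.")],
        ),
        Claim(
            statement="Training data was reviewed for representativeness.",
            evidence=[Evidence("Dataset bias audit.")],
        ),
    ],
)
```

In practice, the claims and evidence are developed collaboratively and presented as a structured argument that stakeholders and regulators can scrutinise; the sketch above only illustrates the general shape of such an argument.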
The diverse team, with backgrounds in philosophy, computer science, safety engineering, and policy, ran workshops with regulators and policymakers over the course of several months. Representatives from the Care Quality Commission, the Equality and Human Rights Commission, the Information Commissioner’s Office, NHS England, the Department for Science, Innovation and Technology, and the Law Society contributed to the development of the platform and the subsequent report for policymakers.
This collaborative and multidisciplinary approach is particularly important for AI-based and autonomous systems because they are deployed within complex and open environments such as healthcare. The whole-systems approach, which considers both the technology and the wider socio-technical system within which it is deployed, underpins the school of thought at the AAIP and has been at the heart of its work on safe autonomy for the last six years.
The TEA platform has been used as a case study by the UK Government, and it is hoped that developers will adopt the tool to support the process of building an assurance case that substantiates claims about a given system.
The next phase of this project, the Trustworthy and Ethical Assurance of Digital Twins (TEA-DT), is funded by an award from UKRI’s Arts and Humanities Research Council to the AAIP (now the Centre for Assuring Autonomy) and the Alan Turing Institute, as part of the BRAID programme. This will allow research and development of the platform to continue, with a focus on the specific area of digital twins.
Useful links:
Download the report ‘Trustworthy and Ethical Assurance of Digital Health and Healthcare’ (pdf)
View the TEA case study in DSIT’s portfolio of AI assurance techniques
Access the documentation and open-source tool on the GitHub repository