SAINTS research focuses on the lifelong safety assurance of increasingly autonomous AI systems in dynamic and uncertain contexts, building on methodologies and concepts in disciplines spanning computer science, philosophy, law, sociology and health sciences. Our research addresses the following two overarching research themes:

  • Lifelong safety of AI systems: Weaving in safety throughout the entire lifespan of AI systems, from early design decisions to post-deployment.
    • Topics of interest: Risk-driven AI training, metrics and benchmarks; testing and simulation for uncertain operating environments; safe retraining and continual learning; proactive monitoring and dynamic safety cases; moral responsibility; legal liability; investigating AI-related incidents. 
  • Safety of increasingly autonomous AI systems: Understanding and addressing the implications of transferring decisions from humans to AI-enabled systems. 
    • Topics of interest: Understanding human-AI interaction; designing safe joint cognitive systems; assurance of safe transition between human and AI control; achieving effective human oversight and human-centred explainability; preserving human autonomy and accountability; reducing the risk that human-AI teaming dilutes human value. 

Our research considers both purpose-specific AI, e.g. convolutional neural networks for detecting pedestrians in autonomous driving, and general-purpose foundational AI, e.g. large language models for triaging patients in healthcare. SAINTS research focuses on risk reduction in sectors where safety is paramount, such as healthcare, national security and transport, whilst taking a broader view of safety in its technical, legal, ethical and societal context. The broader perspective provided by the centre’s multidisciplinary structure is vital to the trustworthy and responsible development and adoption of AI: it balances technical safety with respect for other values, such as equity and explainability, and aligns research with legal requirements and societal needs and expectations.

1. Safety of AI-enabled robotics

This grand challenge relates to physically embodied autonomous AI agents, such as self-driving cars, surveillance drones and deep-sea exploration robots. For such systems, AI typically has two roles. First, building a model of the operational context: identifying and assessing objects, e.g. distinguishing static elements such as the seabed from dynamic ones such as fish, and predicting the movement of the dynamic elements. Second, planning or decision-making, e.g. deciding on a route that meets the system’s goals whilst ensuring safety, including avoiding collisions with other objects. Possible research areas include: ensuring robots or drones can navigate different environments and make real-time decisions; demonstrating the safety of AI-based functionality; the legal liability of drivers versus vehicle manufacturers; and risks around the normalisation of mass surveillance.
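The two roles described above can be caricatured as a perceive-then-plan pipeline. The following is a minimal illustrative Python sketch, not part of any SAINTS system: all names, numbers and the one-dimensional world model are hypothetical, chosen only to show how a context model feeds a safety-constrained planning decision.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    name: str
    position: float   # distance ahead along the robot's path, in metres
    velocity: float   # metres/second along the path; 0.0 for static elements

def perceive(raw_detections):
    """Role 1: build a model of the operational context from raw detections."""
    return [TrackedObject(n, p, v) for (n, p, v) in raw_detections]

def predict(obj, horizon_s):
    """Predict where an element will be after `horizon_s` seconds."""
    return obj.position + obj.velocity * horizon_s

def plan_speed(objects, horizon_s, safety_margin):
    """Role 2: choose a speed that cannot reach any predicted obstacle position."""
    closest = min(predict(o, horizon_s) for o in objects)
    # Travel no further over the horizon than the closest predicted
    # obstacle minus the safety margin; never plan a negative speed.
    return max(0.0, (closest - safety_margin) / horizon_s)

# Hypothetical detections: a static seabed 50 m ahead, a fish 10 m ahead
# swimming towards the robot at 1 m/s.
detections = [("seabed", 50.0, 0.0), ("fish", 10.0, -1.0)]
speed = plan_speed(perceive(detections), horizon_s=2.0, safety_margin=3.0)
print(f"planned speed: {speed} m/s")
```

The sketch deliberately separates the context model (perception and prediction) from the decision (planning), which mirrors where safety assurance effort tends to concentrate: demonstrating that the model is adequate, and that the planner's constraints actually bound risk.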

2. Safe Human-AI collaboration 

This grand challenge will explore how humans and AI-based systems work together towards overall goals in domains such as air traffic services, healthcare, criminal justice, law enforcement and manufacturing. Humans may collaborate with AI-enabled embodied systems, such as robots or vehicles, and/or with stand-alone AI-based decision-support systems, for example ones that recommend drug dosages. Possible areas of research include: supporting effective human-AI teaming; the use of explanations as part of assuring safety; responsibility in the case of incidents or accidents; and understanding the effects of changes in the nature of jobs on safety and operator mental health.
