Two stage application process

Once you have checked that you meet the minimum requirements for our PhD in Safe AI programme, the SAINTS application process comprises two stages:

  • Stage One: the initial application form, which will outline how you meet the academic requirements, and an Expression of Interest (see below)
  • Stage Two: a group selection event at the Institute for Safe Autonomy (or online for international applicants)

Applicants who are successful at Stage One will be invited to Stage Two. Applicants who are not successful at Stage One will be notified by email in the week beginning 2 February 2026.

Stage Two – group selection event

Candidates progressing to Stage Two will be invited to attend a group selection event, which will take place either in person on Tuesday 11 February 2026 or online (for international applicants) on Wednesday 12 February 2026. The group selection day will include a panel interview. Full details will be provided nearer the time to candidates who progress.

Research statement [1000 words MAX] 

During the first year of your PhD, you will be supported in developing a research proposal. Therefore, at this stage, we are not asking for a full research proposal. Instead, we would like you to outline your initial research interests and ideas, and how these relate to AI safety.

In the research statement you should:

  • Make it clear which of the SAINTS themes (see ‘Research Themes’ below) your research interests align with (you do not need to address both)
  • Justify why this research will make a significant and original contribution to multidisciplinary AI safety research. (Where appropriate, references can be included to motivate and contextualise your research ideas).

We believe that AI safety will require multidisciplinary solutions. Your statement should therefore also:

  • Explain why the research you are interested in requires a multidisciplinary perspective, and how your research may benefit from engagement with industry.
  • Explain how your disciplinary/professional expertise will contribute to the SAINTS multidisciplinary research environment.

Once you have been accepted onto the SAINTS programme, we will identify and assign two or more supervisors aligned with your research interests. You therefore do not need to identify a supervisor as part of your application.

Assessment criteria 

Your Expression of Interest (EOI) will be assessed against the following criteria. Make sure you address these in your statement(s).

Research:

  • We will assess research by looking at the following areas:
    • Relevance: does the research outlined align with SAINTS and its research themes? 
    • Novelty: does the research outlined consider and demonstrate the originality of the work? 
    • Significance: does the research outlined address an area of significance and does it describe and demonstrate its potential for impact?
    • Feasibility: could the research outlined be crafted into a feasible and ambitious PhD project?

Collaboration:

  • We will assess collaboration by looking at the following areas: 
    • Understanding: does the candidate understand and demonstrate the need for collaborative research through links to their experience and interests? 
    • Contribution: does the candidate describe and demonstrate how their skills and experience will benefit the SAINTS community?

Skills:

  • We will assess skills by looking at the following areas:
    • Are links made between the skills and experience of the candidate, the research outline and the aims of the SAINTS programme? Are there examples of the application of these skills provided?

Use Cases

SAINTS research is grounded in real-world systems. The research themes outlined above could be explored in relation to a range of different use cases. Some examples are given below:

Self-Driving Vehicles: Ultimately, the aim is to produce self-driving vehicles that can go anywhere with no human intervention. In the interim, vehicles will have a level of autonomy but there will be a combination of human and AI-based control. Research in this context will include: demonstrating safety of AI-based functionality, in dynamic and uncertain contexts; safety of transitions of control between humans and AI, especially in emergency situations; the legal liability of drivers vs vehicle manufacturers; the ethics of placing responsibility on drivers to monitor systems (something that humans are poor at); ensuring safety and ethical acceptability as the contexts (including regulations and human perceptions of risk) change. 

Autonomous Ships and Underwater Robots: These include AI-enabled navigation systems for commercial ships and robots that explore ocean depths where humans can’t easily reach, for example to monitor infrastructure such as pipelines and offshore power generators such as wind turbines. Research in this area can explore how these robots can autonomously navigate, monitor and manage their own “health” (they may be on missions for months), and collect data or samples without harming marine life or causing environmental damage. Other issues include operating in multiple legal jurisdictions, cultures and societies; the ability to manage and recover the robots following a failure (noting the rules of salvage); and the ability to remotely monitor and control the robots.

Surveillance Drones: Drones equipped with AI are used for surveillance in relatively open areas such as borders, powerlines and forests, as well as in urban environments, which are much more densely populated. Both urban and rural environments are dynamic and have uncertainties, e.g., movement of trees and pylons in high winds, the erection of cranes and scaffolding, etc. Safety research focuses on ensuring these drones can navigate autonomously without colliding with obstacles, collect data in privacy-respecting ways, and make real-time decisions in response to changing environmental conditions. Other issues include weighing the benefits, e.g., identifying powerline problems to enable rapid repair, against the risks, e.g., collisions with objects; emergency management, including recovery of a drone and the data it may have captured; and risks of normalisation of mass surveillance or social rejection of operations, especially in urban environments.

Defence and National Security: AI is increasingly integrated into military operations for decision support and surveillance. Research is crucial to ensure that AI tools enhance the safety and effectiveness of personnel without leading to unintended consequences. Issues include disaster relief; emergency planning; accessible rescue services; designing to ensure compliance with legal and ethical rules; achieving effective and responsible operation; achieving good situational awareness for operators; understanding liability in the event of accidents; and understanding the ability of operators to monitor more than one system at once.

Manufacturing and Industrial Automation: In factories, AI systems and robots work alongside human workers. Safety research is crucial to prevent accidents and to ensure efficient and effective collaboration. Issues include establishing confidence in cobot motion planning; balancing the benefits of increased efficiency with the impact on jobs (loss thereof, or reduction in quality/value); understanding the impact of changes in the nature of jobs on safety and operator mental health; and liability in the event of accidents.

Healthcare and Medical Diagnosis: AI systems assist in disease diagnosis, analysing medical images and recommending treatment plans. Research in this area includes designing so as to maximise the effectiveness of the human-AI team, considering it as a joint cognitive system. Issues include establishing trust in interactions and rebuilding trust following adverse events; understanding and managing moral and legal responsibility, especially following an incident; understanding how to use explanations (of AI-based decisions or recommendations) to assure safety and preserve meaningful human control.
