The second episode of The Silver LAIning: A SAINTS Podcast is now live!

From AI Literacy to Advocacy: Margaret Colling – The Silver lAIning: A SAINTS Podcast | Podcast on Spotify

We were pleased to have Margaret Colling, a former librarian and one of the 11 members of the UK public chosen to form the “People’s Panel on AI” in 2023. Through a facilitated deliberation process, the panel put forward recommendations for the government, industry, academia, and civil society.

Since her participation, Margaret has actively advocated for the public’s role in addressing the potential ethical and societal risks of AI. Listen in to learn more about her new-found purpose: advancing the conversation on AI safety.


Launch of a brand new SAINTS Podcast

Last week, SAINTS and Prenika Anand, a second-year PhD researcher, launched The Silver LAIning: A SAINTS Podcast.

Prenika is a SAINTS PhD Researcher exploring Psychological Safety of AI for Older Adults and is the creator and host behind the podcast.

On her motivation behind the series, Prenika says:

“I feel that at most academic and media forums, the intersection of ageing, AI and safety is under-discussed. One of my endeavours is to help bring this conversation to the forefront, even as I conduct my PhD research.”

As AI is integrated into health and social care, the podcast dives into conversations with stakeholders to analyse the benefits and risks AI presents, particularly for older adults.

Prenika and Prof Willcocks sit in front of books, deep in conversation

In the inaugural episode, Prenika sat down with Professor Dianne Willcocks, a social gerontologist who has dedicated more than four decades to advocating for older people’s wellbeing. They discuss how older people are adapting to digital technologies and current perspectives on the topic.

Supported by the UKRI AI Centre for Doctoral Training in Safe Artificial Intelligence Systems (SAINTS), this multidisciplinary podcast series is intended to create accessible briefings for anyone interested in the intersections of ageing, AI and safety.

Listen now

SAINTS at TAROS 2025

Between 20 and 22 August 2025, some of our SAINTS CDT postgraduate researchers attended the TAROS (Towards Autonomous Robotic Systems) conference, hosted at the University of York.

The UK-hosted international conference on Robotics and Autonomous Systems (RAS) aims to present and encourage discussion of the latest results and methods in autonomous robotics research and applications.

Prenika Anand stands next to her poster at TAROS Conference.

SAINTS’ Prenika Anand had the opportunity to present work from the first year of her PhD at TAROS 2025. In a special session on Safety of Autonomous Systems, chaired by Philippa Ryan, Senior Research Fellow in Safety of Autonomy and AI, she gave a talk and presented an academic poster on Psychological Safety of AI in Assisted Living. Prenika also won the D-RisQ Award for best poster on Safety for Autonomous Systems and Robotics.

Prenika said: “To talk as a health professional to an auditorium full of roboticists was exciting! I equally enjoyed listening to talks by my PhD colleagues Shaun F., also a SAINTS PGR, Nawshin Mannan Proma (Doctoral Researcher & Graduate Teaching Assistant, Institute for Safe Autonomy) and Rabia Karakaya (PhD researcher at the University of York, specialising in human-robot interaction for autonomous mobile robots in public spaces).

The questions and supportive feedback from the audience, and winning the prize for Best Poster in this category, were just the motivation needed after months of synthesising evidence from the literature.”

SAINTS host first Hackathon

The SAINTS postgraduate researchers of Cohort 1, along with the SAINTS team (although, sadly, not everyone!), held their first Hackathon in Darlington over three days at the beginning of July. The Hackathon involved interdisciplinary teams of computer science, health science, law and philosophy PhD students, who were challenged to innovate contact tracing for a pandemic scenario while keeping in mind safety, feasibility, ethics, society, and technical aspects.

At the end of the three days, the final “Show and Tell” featured diverse approaches, many of them off the beaten track. Friendly intellectual competition bred originality, with unique propositions including NHS Bluetooth headphones, an AI policy advisor, RFID-AI chips, smart AI air purifiers, and contact tracing applied to managing prompt infection in multi-agent systems. All projects sparked in-depth questions and debates, paving the way for their refinement and for our collective advancement.

SAINTS PGRs and team at Raby Castle

The SAINTS CDT came together for three great and intensive days of collaborative work, but also of fun and team-bonding. Dr Richard Hawkins proved to be an invigorating quiz master, testing us on general knowledge, history, geography, pop culture and music; the music questions put all the teams facing Dr Colin Paterson at a disadvantage! We also had the pleasure of a guided tour of the medieval Raby Castle. Built in the 14th century, it houses an impressive range of art, textiles and furniture from across the globe, dating from the 17th to the 21st century.

The whole experience of this first Hackathon was an encouraging success for the eclectic enterprise at the heart of SAINTS, and it showed promise for future events and for welcoming Cohort 2 in September.

SAINTS Quarterly Workshop 3

SAINTS were excited to hold our 3rd quarterly workshop, which took place on 24 June and gave us the opportunity to get an update from our postgraduate students on their research. 

We also welcomed partners from Jaguar Land Rover (JLR), British Telecommunications (BT) and the Medicines and Healthcare products Regulatory Agency (MHRA), who gave insightful feedback to our PGRs and asked questions which helped them to think more deeply about their research and how it may impact practice.

Our PGRs highlighted the increasing complexity of the systems into which AI is being deployed in the real world, and the associated safety concerns. Across the range of talks it became clear that, while many of our PGRs are considering specific contexts, the concerns raised were common across domain boundaries. These included discussion of the nature of different types of harm, the role of explainability and the tools required to support it, as well as the challenges of maintaining safety post-deployment.

Suemaiya (SAINTS PGR) giving a presentation about her research

It was great to see our PGRs delve deeper into the technical and societal issues which surround the use of AI, how we monitor its impact, and the steps we might take to make a difference in the deployment of AI for a safer world. 

What is Artificial Intelligence?

What is AI? – Dr Colin Paterson (SAINTS Training Co-Lead) explains…

For years, we have known the value of creating and using models. We create models for all kinds of things: predicting the weather, modelling airflows, and capturing patterns of movement in crowds and in traffic.

In all cases, we think about how the real world works and write mathematical equations and rules which allow us to map a set of inputs to a set of outputs. For example, we might measure temperature and atmospheric pressure over a few days and use this to predict what the weather might be like tomorrow.
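A hand-written rule of this kind might look like the following minimal sketch. The inputs and the threshold are purely illustrative, not a real forecasting method: the point is that the domain knowledge is written down explicitly by a person.

```python
# A hand-crafted model: domain knowledge encoded as an explicit rule.
# The -4.0 hPa threshold is illustrative only, not a real forecasting rule.

def rain_likely(pressure_today_hpa: float, pressure_yesterday_hpa: float) -> bool:
    """Falling atmospheric pressure is a classic sign of approaching rain."""
    return pressure_today_hpa - pressure_yesterday_hpa < -4.0

print(rain_likely(1002.0, 1010.0))  # True: pressure fell sharply
print(rain_likely(1018.0, 1016.0))  # False: pressure is rising
```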

Dr Colin Paterson

This is fine when the relationships are well understood and the mathematics or rules we use in our models can be well defined. But it usually takes a great deal of expertise or domain knowledge to create models which are useful. Unfortunately, for some problems, the real world is too complex to understand at the level of detail needed to create a model which is useful. Indeed, even when creating a model is possible, the cost of creating it may be prohibitive.

So, wouldn’t it be great if we could just get the computer to work out what the model is for us?

Well, to some extent, this is what artificial intelligence is doing for us.

Rather than working out the mathematics and rules of a model, we can just provide data in the form of examples to show what the inputs might look like, and what the output should be in response. The computer then slowly changes the parameters of a mathematical model until the outputs look like what we would expect for the data we provided.
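As a toy illustration of that idea (a deliberately simple sketch, not how any particular AI system is built), the following program is given only example input/output pairs and repeatedly nudges the parameters of a simple model y = w·x + b until its outputs match the examples:

```python
def fit_line(examples, lr=0.01, steps=5000):
    """Fit y = w*x + b to (x, y) example pairs by gradient descent."""
    w, b = 0.0, 0.0
    n = len(examples)
    for _ in range(steps):
        # Gradients of the mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in examples) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in examples) / n
        # Nudge each parameter a little in the direction that reduces error.
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Examples generated by the hidden rule y = 3x + 1; the program never sees
# the rule, only the data, yet recovers parameters close to it.
data = [(x, 3 * x + 1) for x in range(-5, 6)]
w, b = fit_line(data)
print(round(w, 2), round(b, 2))  # close to 3.0 and 1.0
```

Real AI models do the same thing at vastly larger scale: millions or billions of parameters, adjusted to fit millions of examples.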

And this approach has been tremendously successful. We can predict house prices, the occurrence of cancer, or what the next word in a sentence might be. All we need is enough data and a model flexible enough to represent the problem we are interested in.

Appropriately deployed, AI can solve problems which are more complex than traditional methods can handle, and do so in a fraction of the time a human would take. In a world where resources are limited and time is critical, AI might well allow for solutions which would otherwise be impossible.

So what’s the problem?

Well, our old approach of model construction required us to engage with the problem, to deeply understand the nature of the models as well as the limitations and assumptions which underpinned the results produced by the model. By short-circuiting this process, we lose this deep understanding.

And when the models and the problems they are looking to solve are complex, it’s hard to know if the solution presented is right or just plausible. Indeed, if I show you 20 correct solutions in a row, you are going to start to believe that the model is always correct, but maybe all those 20 problems were easy and just like the data on which the model was trained. Problem 21 might be unusual and poorly represented by the training data.
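A toy illustration of that risk (hypothetical data, not from any deployed system): the true relationship below is quadratic, but the training data only covers a narrow, "easy" region where a straight line fits almost perfectly. The learned line looks reliable on familiar inputs, yet is confidently wrong on an unusual one.

```python
def true_rule(x):
    """The real-world relationship, unknown to the model."""
    return x * x

# Training data from a narrow, "easy" region: x between 10.0 and 11.0.
train = [(x / 10, true_rule(x / 10)) for x in range(100, 111)]

# Ordinary least-squares straight line through the training data.
n = len(train)
mean_x = sum(x for x, _ in train) / n
mean_y = sum(y for _, y in train) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in train) / sum(
    (x - mean_x) ** 2 for x, _ in train
)
b = mean_y - w * mean_x

def model(x):
    return w * x + b

# On inputs like the training data, the model looks convincingly accurate...
print(abs(model(10.5) - true_rule(10.5)))    # error well under 1
# ...but on an input unlike the training data, it is wildly wrong.
print(abs(model(100.0) - true_rule(100.0)))  # error in the thousands
```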

Maybe this is OK when we are asking a model for fashion advice, but less so when the output of the model is part of a larger safety-critical system.

To make matters worse, the world is dynamic and changes constantly unlike our training data, which captured a historic state of the world. Without a concerted and considered approach to mitigating the effects of such change, can we be sure that the systems we build continue to be safe after deployment?

The cost of AI may, therefore, not be in pounds sterling, but in lost knowledge and increased risk, leading to a loss of safety guarantees. Is that a cost we are willing to pay?

SAINTS Quarterly workshop 2

In our second workshop, SAINTS doctoral researchers presented their research progress, discussed the motivation behind their work and shared initial plans for carrying out their PhD projects. 

Colleagues from partner organisations (Jaguar Land Rover, HORIBA MIRA, NHS, NATS and DSTL) joined the workshop and shared valuable insights from their career journeys and current industrial and policy challenges.

Presenting to an audience of peers, supervisors and invited partners gave the PhD students a great opportunity to receive feedback from the wider SAINTS community, and to gain experience in preparing and delivering academic presentations to a diverse audience. It was helpful for the SAINTS team to get an overview of all the exciting projects being developed, while partners very much enjoyed the opportunity to engage with the students about their research.

The workshop focused on two overarching topics: the safety of AI-enabled robotics and the safety of human-AI teaming. Exciting and emerging use cases and ideas ranged from AI-enabled clinical diagnosis and resilient communication between drones to questions of legal liability and moral responsibility for various types of AI systems, including foundation models.

The workshop also served as the Thesis Advisory Panel meeting, which is a necessary review of progress for a University of York research degree.

SAINTS CDT at the Safety Critical Systems Symposium SSS’25

This week, 4 to 6 February 2025, postgraduate researchers and some of the academic team from SAINTS attended the Safety Critical Systems Symposium SSS’25, which this year was held in York. The symposium featured submitted papers, keynote presentations, and exhibition and poster sessions exploring the latest developments in applied system safety.

SAINTS PGRs at SCSC

As part of the Symposium’s ‘5 minute pitch’, some of our SAINTS postgraduate researchers presented their work. Shaun Feakins spoke about ‘Safety Critical Training Data? Searching for Legal Obligations’; Suemaiya Zaman’s presentation was titled ‘Balancing Safety and Innovation: Deployment Challenges for Drone Base Stations in Beyond 5G Communication’; and Prenika Anand won the top prize for her talk on ‘AI-led Triage Models for Skin Cancer’. Talking about her contribution, she said:

“I spoke about AI-led Triage Models for Skin Cancer: System Safety Considerations for Diagnostic Reliability. By coincidence, the conference fell on World Cancer Day (Feb 4). Melanoma is the 5th most common cancer in the UK and AI holds great promise in supporting decision-making tools for early diagnosis.”

SAINTS Director, Professor Ibrahim Habli gave an overview of our CDT and its focus on lifelong safety assurance of increasingly autonomous AI systems in dynamic and uncertain contexts.

Professor John McDermid (SAINTS CDT Partnerships co-lead) delivered the keynote, speaking on AI safety and security.

Find out more about the Safety Critical Systems Symposium SSS’25.

The official launch of the SAINTS CDT

On Tuesday 7 January 2025, the University of York officially launched the UKRI AI Centre for Doctoral Training in Safe Artificial Intelligence Systems (SAINTS). We were delighted to welcome guests from across the SAINTS network, all stakeholders committed to the safety of AI systems. This exclusive event gave attendees the opportunity to meet key colleagues from industry, academia, and the regulatory community, alongside our first cohort of 11 postgraduate researchers, and to learn more about the future research challenges of safe AI.

Prof Ibrahim Habli - talking at the SAINTS CDT launch.

The event was structured around AI skills for the future, focusing on addressing the critical research and training challenges to both engineer the safety of AI, and deploy these systems responsibly. Prof Ibrahim Habli, Director of SAINTS, introduced the CDT, situating it within York’s impactful safety research and well-established training programmes sustained over four decades. 

Prof Chris Johnson, Chief Scientific Advisor for the UK Government’s Department for Science, Innovation & Technology, delivered the keynote speech, giving personal reflections on the value of the SAINTS multidisciplinary approach to realising the technology’s benefits and growth potential for society and the economy.

He reflected on his time at the University of York, both as a student and academic, and highlighted how York’s collaborative ethos shaped his inspirational career in both academia and government. 

Prof Sarah Thompson (Associate Pro-Vice-Chancellor for Research) and Prof Paul Wakeling (Dean of the York Graduate Research School) spoke warmly about the contribution that the SAINTS CDT is making to the University’s agenda for public good, and the best practices that the team has put in place to recruit diverse and multidisciplinary cohorts.

Two first-year SAINTS PhD researchers, Nina Seron and Ellaie McClean, introduced the two SAINTS research teams, each focusing on a linked team challenge: Safe Human-AI Collaboration and Safety of AI-Enabled Robotics.

Watch the talks

Two PGRs chatting to a demonstrator at SAINTS CDT launch.

The launch also gave attendees the chance to see the home of SAINTS, the Institute for Safe Autonomy (ISA), and discover demonstrations from various researchers in the world of AI. These included:

Reflecting on the launch, Prof Ibrahim Habli (Director of the SAINTS CDT) said: “SAINTS, with its focus on AI safety, reflects our deep commitment to the public good. The launch celebrated York’s outstanding multidisciplinary research and doctoral training with students and partners.”

SAINTS Director comments on UK Government’s AI Opportunities Action Plan

This week, the Prime Minister announced the UK Government’s plan to roll out artificial intelligence across the UK ‘to deliver a decade of national renewal’ (gov.uk, 2025).

This announcement was welcomed by many in the AI space, who believe it will enable the UK to retain its place as a leader in AI and benefit everyday people.

Professor Ibrahim Habli, SAINTS CDT Director and Professor in the Department of Computer Science, University of York, said:

“The shift from an unhealthy fixation on existential risk to prioritising the safety of real AI products and services is a welcome move in the UK Government’s AI Opportunities Action Plan.”

“The UKRI AI Centre for Doctoral Training in Safe Artificial Intelligence Systems (SAINTS) is committed to training future AI leaders with the cross-disciplinary expertise and practical skills needed to ensure AI benefits are realised without causing harm to people and the environment. This is not just a theoretical exercise but a collaborative journey with our industrial partners like Jaguar Land Rover and NATS, public sector bodies like the NHS, and with fantastic support from SMEs such as Ufonia and charities like the Lloyd’s Register Foundation.”

“Our next stage at SAINTS is to make our exhaustive training programme in AI engineering, safety, and its legal, ethical, and social dimensions more widely available, and to serve our institutional mission as a university for public good.”

SAINTS CDT Director defines AI safety on The Turing Podcast

Listen to our CDT Director, Professor Ibrahim Habli, define and contextualise AI safety as he chats to Ed and David, hosts of The Turing Podcast, a podcast from The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence.

The conversation covers defining and contextualising AI safety and risk, given the existence of established safety practices in other industries. Ibrahim has collaborated with The Alan Turing Institute on the “Trustworthy and Ethical Assurance platform”, or “TEA” for short, an open-source tool for developing and communicating structured assurance arguments to show how data science and AI technology adheres to ethical principles.

Listen now
