Safety of AI
- Habli, I., 2025, April. On the Meaning of AI Safety. In 2025 20th European Dependable Computing Conference Companion Proceedings (EDCC-C) (pp. 185-188). IEEE.
- Habli, I., Hawkins, R., Paterson, C., Ryan, P., Jia, Y., Sujan, M. and McDermid, J., 2025. The BIG argument for AI safety cases. arXiv preprint arXiv:2503.11705.
- Burton, S., Habli, I., Lawton, T., McDermid, J., Morgan, P. and Porter, Z., 2020. Mind the gaps: Assuring the safety of autonomous systems from an engineering, ethical, and legal perspective. Artificial Intelligence, 279, 103201.
- Bengio, Y., Mindermann, S., Privitera, D., Besiroglu, T., Bommasani, R., Casper, S., Choi, Y., Fox, P., Garfinkel, B., Goldfarb, D. and Heidari, H., 2025. International AI safety report. arXiv preprint arXiv:2501.17805.
- Porter, Z., Calinescu, R., Lim, E., Hodge, V., Ryan, P., Burton, S., Habli, I., Lawton, T., McDermid, J., Molloy, J., Monkhouse, H. et al., 2025. INSYTE: a classification framework for traditional to agentic AI systems. ACM Transactions on Autonomous and Adaptive Systems, 20(3), pp.1-39.