27.02.2026
Reflections on the AI Impact Summit
Prenika Anand, SAINTS second year researcher, reflects on her experience at the AI Impact Summit in Delhi, India in February 2026.

My PhD journey began with writing my proposal in the wake of the Bletchley Declaration in the UK, in many ways the beginning of key policy discussions on AI safety and caution. Three years later, I was privileged to be in India for the AI Impact Summit in February 2026 as a UKRI SAINTS CDT researcher. A full account of walking through miles of exhibitions and attending 50+ pre-summit and summit sessions across different venues would take far longer, so here I attempt to document my take on the key engagements I had the opportunity to attend. As one may expect, I point out some wins and some opportunities.
The summit delivered on what it was already expected to: scale. The visible grandeur of the event, in terms of the size of the venue, the number of summit and fringe panels each day (200+ over a week), the number of pavilions (13 country pavilions and 300+ tech players overall, by my count) and the sheer range of academic, industry and policy sessions, was unprecedented compared with the few AI summits I had previously attended in the UK and in Geneva. It was certainly not without long queues for security checks and road traffic congestion through the week! Alongside the scale, it was great to see the launch of the Global AI Impact Commons, a collaborative platform designed to help discover, replicate and scale high-impact AI solutions across countries and sectors, and to attend panel discussions on open-source AI.
Whilst I commend the breadth of discussions on the scale and adoption of AI benefits, they significantly outnumbered the discussions and panels on AI harms, safety and socio-technical risks. We have yet to witness a declaration that advocates for safety as a non-negotiable (binding, not merely voluntary) principle, to ensure that the "Impact" we recognise is accompanied by a proportionate consideration of systemic harms. I do recommend listening online to a great socio-technical panel I attended, "From Technical Safety to Societal Impact: Rethinking AI Governance", with speakers including Professor Dame Wendy Hall, as well as the relevant recent publications cited or launched during the summit, including the OECD.AI Index; the AI Incident Reporting Framework for India; and the RAND Global Risk Index for AI-enabled Biological Tools.
The Expo (exhibition) at the summit was organised into 10 thematic pavilions featuring over 300 exhibitors from 30+ countries, the largest hosted by big tech companies. Most of the exhibition was structured around the key areas of AI transformation, viz. Social Good, Human Capital, Inclusion, Safe & Trusted AI, Science, Resilience/Innovation, and Democratizing Resources. It was exciting to visit the UK Pavilion, which also celebrated UKRI initiatives in AI, including the CDTs. From the India AI stack it was exciting to learn about our sovereign models, including Sarvam and Vachana, a multilingual speech-to-text model for 12+ Indian languages. Similarly, there were expert demonstrations describing developments in the indigenous compute layer and the data stack (AIKosh). I also found the social impact initiatives by Wadhwani AI and the kiosks on AI-mediated assisted housing in urban India relevant to my research.
On February 18, I had the privilege of presenting my research at the Participatory AI Research & Practice Symposium (PAIRS). The symposium provided a dedicated space for deeper dialogue and networking among researchers committed to community involvement in AI. My research, which focuses on AI safety, psychological harms, and the ageing demographic, benefited from the feedback on my academic poster and from cross-disciplinary discussions.
It was an invaluable forum for discovering synergies with international researchers. I am particularly excited about this new research community I have joined, which will be a great feedback loop for my work. I was also honoured to finally meet Dr. Susan Oman, whose work with the People's Panel on AI, alongside that of Margaret Colling, has been a significant contribution to The Silver LAIning, a SAINTS podcast.
Lastly, I had the privilege of attending the UK AI Research Showcase and Reception hosted by the British High Commission in India at the Residence of the High Commissioner, Ms. Lindy Cameron CB OBE. It was an invaluable opportunity to hear firsthand from a high-profile delegation, including Deputy Prime Minister David Lammy, Minister for AI and Online Safety Kanishka Narayan and former PM Rishi Sunak. The discussions offered insights into the UK's strategy on AI, the UK's leadership in AI safety initiatives, and the growing landscape of joint investments and strategic collaborations between India and the UK. The event also featured announcements on the UKRI AI Research and Innovation Strategic Framework.
I also had the opportunity to attend the keynote by Amanda Brock, CEO of OpenUK, and to speak with her personally about my research; her feedback was incredibly kind and encouraging. Beyond the panels, I thoroughly enjoyed the UK AI Talks, where I was pleased to reconnect with my former tutor at Oxford, Andrew Soltan.
To conclude, the summit was an opportunity to proudly represent both my national and professional affiliations. Beyond the reassurance of being at home, the summit gave me an intense week of informing my views, meeting a community of researchers, understanding industry and policy narratives and reflecting on how these will shape safety research. An international and proportionate focus on safety at forthcoming summits would expand the space and investment for global research in safety methodologies, a legitimate ask. This sentiment, shared by many of the researchers I met, concerns the pro-adoption, pro-scale, yet not-so-pro-safety tone of the final declaration and of the AI investment the summit achieved.