Bletchley Declaration puts focus on AI safety

Twenty-eight countries have reached an agreement on the safe and beneficial use of artificial intelligence at a UK-hosted AI Safety Summit, though questions remain over its enforcement and reach.

The AI Safety Summit was held at Bletchley Park - Adobe Stock

The Bletchley Declaration saw many of the world’s major powers, including the US, China and the EU, agree in principle that the safe use of AI is of fundamental concern. Other countries endorsing the Declaration include Brazil, France, India, Ireland, Japan, Kenya, Saudi Arabia, Nigeria and the United Arab Emirates.

“This is a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI – helping ensure the long-term future of our children and grandchildren,” said Prime Minister Rishi Sunak.

“Under the UK’s leadership, more than 25 countries at the AI Safety Summit have stated a shared responsibility to address AI risks and take forward vital international collaboration on frontier AI safety and research.”

The UK summit, hosted at Bletchley Park, was originally convened to focus on ‘frontier AI’: highly capable foundation models that could possess dangerous capabilities and present an existential threat to humanity. However, recent advances in AI – along with its already widespread use across many aspects of society – are seen by many as a more pressing threat. There have been calls for stronger regulatory direction from governments, particularly for the tech companies steering AI’s development.

Despite this, the Declaration dealt primarily with risks associated with frontier AI, with particular concern around cybersecurity, biotechnology and disinformation. According to the agreement, there is “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.” 

The more mundane but already widespread risks of AI do feature in the Declaration, however, with countries noting bias and privacy as two key issues. While the Declaration has been largely welcomed by the UK’s academic and scientific community, some have criticised its reluctance to tackle the harms already known to be caused by the use of AI today.

“International collaboration is needed on these grand challenges, so the fact that the declaration has a global reach, including China…is encouraging,” said Elena Simperl, Professor of Computer Science at King’s College London.

“More worrying is the continued emphasis on frontier models rather than the whole range of very powerful AI systems that have been put to use in the last 10 years, which are already doing real harms, documented in the media or in AI incidents databases. This is not just about a small set of technologies released very recently, it is about any form of data-driven technology that is not used responsibly.”

Doubts have also been raised over the scope and reach of the Declaration, and it remains to be seen how any regulation would be enforced.

“The creation of an AI safety institute could be a very positive step as it could play a major role in monitoring international developments and in helping to foster a broad discussion about the societal impacts of AI,” said Dr Marc de Kamps, Associate Professor at the University of Leeds’ School of Computing.

“However, a moratorium on risky AI will be impossible to enforce. No international consensus will emerge about how this should work. From a technological perspective it seems impossible to draw a boundary between ‘useful’ and ‘risky’ technology and experts will disagree on how to balance these. The government, rightly, has chosen to engage with the risks of AI, without being prescriptive about whether certain research is off limits.

“However, the communique is unspecific about the ways in which its goals will be achieved and is not explicit enough about the need for engagement with the public.”