AI could support humanitarian organisations dealing with armed conflict or crisis

AI could provide humanitarian organisations with crucial insights to better monitor and anticipate the risk of conflict or crisis, but users must understand the potential risks, a new study warns.


Humanitarian organisations have been increasingly using digital technologies, with the Covid-19 pandemic accelerating this trend.

AI-supported disaster mapping has been used in Mozambique to speed up emergency response, and AI systems were rolled out by the World Bank across twenty-one countries to predict food crises.

The new study argues that AI technologies could further expand the toolkit of humanitarian missions in their preparedness, response, and recovery, but it warns that some uses of AI may expose people to additional harms and pose significant risks to the protection of their rights.

The study, published in the Research Handbook on Warfare and Artificial Intelligence, was conducted by Professor Ana Beduschi of the University of Exeter Law School.

“Safeguards must be put in place to ensure that AI systems used to support the work of humanitarians are not transformed into tools of exclusion of populations in need of assistance. Safeguards concerning the respect and protection of data privacy should also be put in place,” Professor Beduschi said in a statement.
