AI and the Changing Face of War: Implications for Civilians, Communicators, and Humanitarian Actors


Today's wars, such as the ongoing war in the Middle East, are increasingly accompanied by artificial intelligence (AI) and shaped by algorithms. From the fighting between Hamas and the Israel Defense Forces to the broader regional tensions, AI has quietly become a central part of the battlefield.
Modern militaries use AI to process vast amounts of surveillance data (satellite imagery, communications, drone footage) at speeds humans cannot match. Algorithms help identify potential targets, detect patterns, and even predict movements, theoretically increasing precision and reducing civilian harm. In densely populated places, however, errors can have devastating consequences. When decisions are accelerated by machines, the window for human judgment narrows. At the start of the war, about 160 Iranian schoolchildren were reportedly killed in an explosion.
AI is also transforming the information space. False, algorithm-amplified narratives travel faster than verified facts. Deepfakes, AI-generated images, and synthetic audio clips spread quickly on social media, fueling propaganda and misinformation. A single manipulated video, image, or audio clip can heighten tensions across borders within minutes.
For civilians, AI-enabled warfare can mean less certainty about truth and accountability when harm occurs. It also means living in an environment of fear, saturated with uncertainty about what is real, fake or manipulated.
Humanitarian workers must operate in environments where AI-assisted strategies compress response times and where misinformation directly affects negotiations, rapid assessments, staff safety, and community trust.
In the age of AI-enabled strategies in conflict, credibility moves at machine speed. Strategic communicators and institutions must stay ahead, responding quickly with evidence, explaining technically complex terms and tools, and protecting institutional credibility. Silence creates room for speculation and erodes trust.
The ongoing war is a test of how international humanitarian law, which was not written with machine learning models in mind, adapts to algorithmic warfare. Some guidance exists (for example, the International Committee of the Red Cross (ICRC) emphasizes that AI must assist, not replace, human judgment to ensure accountability for violations), but the speed and scale introduced by AI could strain oversight mechanisms. Talks on regulating autonomous weapons are ongoing at the United Nations, yet no consensus has been reached.