From its beginnings in the 1950s, Artificial Intelligence has been heavily supported by “defence agencies” in order to make “better warfare”.
But if an AI researcher assumes that her/his discipline really can deliver results—otherwise s/he would be a dishonest researcher—then why not try to use it to help decision-makers in government, or concerned groups outside government, who want to prevent the outbreak of war or want to end it?
Therefore, already in the eighties, the Austrian Research Institute for Artificial Intelligence (OFAI), often in cooperation with the then Department of Medical Cybernetics and Artificial Intelligence of the University of Vienna, began applying AI methods to this problem, at first on a more conceptual basis, but then increasingly by drawing on databases of conflicts, crises and conflict management. The aim was either to retrieve similar past cases by case-based reasoning, in order to see which conflict management methods had been successful, or to compute decision trees with machine learning methods so as to find the conflict management strategy with the greatest chance of success in a new crisis situation (for more information, please see Chapter 11 and its references).
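The case-based reasoning step mentioned above can be sketched, very roughly, as nearest-neighbour retrieval over coded crisis features. The following minimal Python illustration uses entirely hypothetical features, cases and a naive similarity measure; the actual OFAI work relied on curated conflict databases and far richer representations (see Chapter 11):

```python
# Illustrative sketch only: the feature names, cases and similarity
# weighting below are invented, not taken from any real conflict database.

def similarity(a, b):
    """Fraction of shared features on which two coded cases agree."""
    shared = [f for f in a if f in b]
    if not shared:
        return 0.0
    return sum(a[f] == b[f] for f in shared) / len(shared)

def most_similar_case(new_case, case_base):
    """Retrieve the past case most similar to the new crisis situation."""
    return max(case_base, key=lambda c: similarity(new_case, c["features"]))

# Hypothetical case base: each entry codes crisis features together with
# the conflict management method that succeeded or failed historically.
case_base = [
    {"features": {"border_dispute": True, "mediator": True, "arms_embargo": True},
     "outcome": "mediation succeeded"},
    {"features": {"border_dispute": True, "mediator": False, "arms_embargo": True},
     "outcome": "embargo failed"},
    {"features": {"border_dispute": False, "mediator": True, "arms_embargo": True},
     "outcome": "negotiated settlement"},
]

new_crisis = {"border_dispute": True, "mediator": True, "arms_embargo": True}
best = most_similar_case(new_crisis, case_base)
print(best["outcome"])  # the precedent suggesting a management strategy
```

The retrieved precedent then suggests which management strategy to examine first; the decision-tree approach mentioned above would instead induce an explicit classifier over the same coded features.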
“Even by the standards of war, some of the atrocities in eastern Congo are shocking. Zainabo Alfani, for example, was stopped by men in uniform on a road in Ituri last year. She and 13 other women were ordered to strip, to see if they had long vaginal lips, which the gunmen believed would have magical properties. The 13 others did not, and were killed on the spot. Zainabo did. The gunmen cut them off and then gangraped her. Then they cooked and ate her two daughters in front of her. They also ate chunks of Zainabo’s flesh. She escaped, but had contracted HIV. She told her story to the UN in February, and died in March.”
After reading this passage from a recent issue of The Economist, can one go back to “normal”? Not easily. And even if one thinks that research on computer-aided methods for conflict resolution and prevention can contribute only a tiny bit to preventing such horrible events, one has to work on it. The more so as there are already programs available that calculate the risk of losses for a potential aggressor, e.g. the Tactical, Numerical, Deterministic Model (TNDM), developed by the Dupuy Institute (http://www.dupuyinstitute.org/tndm.htm, last checked 23 Sept 2005); even though programs of this kind may sometimes encourage intervention in an unjust war.
But “Programming for Peace” should not mean “peace at any price”; the title invites that misinterpretation. It could even mean “war” in order to establish “long-term peace”. To take a historic example, (nearly) all Europeans wholeheartedly welcomed the decision of the USA to enter the war against Hitler’s Germany and its allies.