suggested by F. Lenci, whom I thank for the notice.
Diego
PS: I’ve browsed the Executive Summary & Recommendations of the report; I haven’t read the full report yet, but from the Ex. Sum. & Recom. it seems to me that, once again, the issue of AI (and ICT) dependability, a general but fundamental feature of the specific technology at hand, is not addressed, while the "advances in machine learning
and Artificial Intelligence (AI) [seem to be taken for granted as representing a] turning point in the
use of automation in warfare”. In addition, as far as I could see, no reference is made to the ethical dimension of introducing
(lethal) autonomous weapons and, more generally, AI technology on the battlefield.
Shouldn’t computer scientists, and in particular those expert in computer ethics, dependability, trustworthiness,
correctness, etc., be more effective and active in this discussion?
Dott. Diego Latella - Senior Researcher CNR-ISTI, Via Moruzzi 1, 56124 Pisa, Italy (http://www.isti.cnr.it)
===================
The quest for a war-free world has a basic purpose: survival. But if in the process we learn how to achieve it by love rather than by fear, by kindness rather than compulsion; if in the process we learn how to combine the essential with the enjoyable, the expedient with the benevolent, the practical with the beautiful, this will be an extra incentive to embark on this great task.
Above all, remember your humanity.
-- Sir Joseph Rotblat