Autonomous weapons: My Droneski Just Ate Your Ethics
The race toward autonomous weapons strikes me as driven by the same rhetoric that fueled the race toward nuclear weapons. <http://warontherocks.com/2016/08/my-droneski-just-ate-your-ethics/>

[...] The question of autonomous, lethal robots presents us with a sort of prisoner's dilemma, in which uncertainty regarding other players' decisions drives us to pursue a sub-optimal strategy in order to avoid the most damaging outcome. Consider a game with two combatants who have two possible strategies for using lethal robots in a contest: full autonomy and human control. (We categorize any human intervention in the robot's decision-making as human control, even if the human's role is limited to approving the release of a weapon.) Neither side seeks to create humanity's future robot overlords, so full autonomy is an unappealing strategy given the risk (however small it may seem) that humans could lose control over fully autonomous weapons. However, maintaining human control exposes players to other disadvantages that may be decisive should the other player opt for full autonomy.

First, fully autonomous systems could operate in a much faster decision cycle than human-in-the-loop systems. An inferior platform can defeat a superior one if the inferior platform can, to borrow a phrase from John Boyd, get inside its enemy's decision loop. This places a manned, semi-autonomous future force at significant risk when encountering a fully autonomous first echelon or defensive screens of a less scrupulous enemy.

Second, fully autonomous systems can operate in a compromised network environment. A lethal, autonomous drone would not require a reliable link to the robot. In contrast, relying on human-controlled systems requires total confidence in the integrity of our communications during a conflict.
These two assumptions — the superior decision speed of fully autonomous systems and the vulnerability of networks in future wars — are likely to drive players in the game to embrace full autonomy. They may prefer to risk creating Skynet rather than take the chance that they will be decisively defeated by a human adversary. This is the dilemma of fully autonomous, lethal robots. [...] Autocracies may therefore view automation as a way to extend state control and reduce the potential for internal opposition to regime policy. If the Russians and Chinese are already moving rapidly toward autonomous lethal systems, how will other nation-states and non-state actors approach the development of fully autonomous systems? Ultimately, this quest for greater autonomy is resulting in a "drone race" in which we must maintain the technological and operational lead. We should temper our concerns about "killer robots" with the knowledge that our adversaries care about the morals and ethics of lethal, autonomous systems only insofar as those concerns give them a competitive advantage. If full autonomy gives them supremacy on the battlefield, they will care little about what human rights lawyers think, even if an international agreement to prohibit autonomous weapon systems emerges. [...]
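The dilemma described above can be made concrete as a 2x2 game. The payoff values below are purely illustrative assumptions (the article gives no numbers), chosen only to mirror its argument: mutual human control is best for both, a player who keeps human control against a fully autonomous opponent risks decisive defeat, and mutual full autonomy carries the shared risk of losing control. A minimal sketch:

```python
# Illustrative payoff matrix for the two-player "autonomy dilemma".
# All payoff values are hypothetical, chosen only to reflect the
# article's qualitative claims, not any empirical data.

STRATEGIES = ("human_control", "full_autonomy")

# PAYOFF[(row, col)] = (row player's payoff, column player's payoff)
PAYOFF = {
    ("human_control", "human_control"): (3, 3),  # stable, human-controlled
    ("human_control", "full_autonomy"): (0, 4),  # decisively defeated
    ("full_autonomy", "human_control"): (4, 0),  # decisive advantage
    ("full_autonomy", "full_autonomy"): (1, 1),  # shared loss-of-control risk
}

def best_response(opponent_strategy):
    """Strategy maximizing a player's payoff given the opponent's choice."""
    return max(STRATEGIES, key=lambda s: PAYOFF[(s, opponent_strategy)][0])

def nash_equilibria():
    """Strategy pairs where each choice is a best response to the other."""
    return [
        (a, b)
        for a in STRATEGIES
        for b in STRATEGIES
        if a == best_response(b) and b == best_response(a)
    ]

print(nash_equilibria())  # → [('full_autonomy', 'full_autonomy')]
```

With these payoffs, full autonomy strictly dominates human control for both players, so the only Nash equilibrium is mutual full autonomy even though both players would prefer mutual human control — exactly the prisoner's-dilemma structure the article invokes.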
participants (1)
-
Alberto Cammozzo