Friday, October 10, 2008

From a Semi to a Fully Automatic Border

[Image: Security, surveillance and 'Super Sangars', MOD Defence News, 2008.]

So, what do you get when the British Army messes around with a few ISO shipping containers? Well, stack three, outfit the top one with bulletproof windows, rig a couple of daylight and thermal-imaging cameras along with a remotely controlled weapon station (RWS) that lets some heavy-duty machine guns be fired by joystick from the inside, fence it off, and you've got yourself one of these – the 'Super Sangar', a deadly new tower currently being used to protect forward operating bases in Iraq and Afghanistan.
Not exactly the kinds of “robotic snipers” we’ve spotted earlier creeping up in the NK DMZ, or along the IDF outposts in the West Bank, but perhaps part of the interim as the military superpower moves more and more toward dependency on robotics and full-scale automation. While the Super Sangar isn’t unmanned, it turns the turret into a kind of armored arcade made for war, at which point, perhaps (as we’ve dared speculate before), anyone with some badass joystick skills might be in there controlling the thing. I certainly don’t want to undermine the Army and the intense training that more than likely goes into manning such a tower, but it’s that bizarre architectural merger of gaming, robots, and warfare that continues to catch our attention, and the looming image of a war one day fought by a superpower from inside similar types of mobile military gaming cubicles.

[Image: Security, surveillance and 'Super Sangars', MOD Defence News, 2008.]

It's a crazy thought, to be sure, that the Army might one day turn into a full-fledged gaming industry, shifting its recruiting schema toward professional gamers to man their evolutionary Sangar playstations instead of soldiers. Then again, the Army is already turning over control of some of its less sophisticated drones to teenaged recruits, given the minimal degree of aviation training that's apparently required to navigate them. Every kid at some point flew a remote-controlled model airplane, and by now a large majority of today's youth have certainly logged at least some time with a joystick. But we’ve covered this silly scenario here before, so I won’t waste your time any further with that.

[Image: Security, surveillance and 'Super Sangars', MOD Defence News, 2008.]

However, over at CTLab, Charli Carpenter offered a brief examination of roboticization in the context of asymmetric warfare and how it has in some ways been the superpower’s response “to the types of humanitarian law violations commonly employed by a weaker enemy” – as if to say, well, if the enemy isn’t going to fight fair and according to the rules of engagement, then perhaps the military can develop a means of warfare that operates a bit outside those same rules as well. The ethical pursuit of “intelligent” robots in place of human controllers on the battlefield is of course highly contentious, as it should be. Can technology really be relied upon to judge what is and is not a fair target, to start, and then to actually pull the trigger? Then, to quote Carpenter: how will “responsibility for mistakes” be “allocated and punished,” and is “the ability to wage war without risking soldiers’ lives” a disincentive to resolve conflict peacefully?
Further, I would add, where does the military get off weighing the value of any one human life as being worth more than another? In other words, is saving the lives of soldiers worth the potential cost of one innocent life lost at the hands of a robot? The psychological superiority complex of a military willing to place the value of its own lives over others seems frighteningly absent from reflection here. Further, should the “enemy’s” less lawful warplay be allowed to justify bending the rules of engagement? Do two wrongs make a right? I certainly don’t think so.
In light of how critics have responded to roboticization with these fears, Carpenter raised another point: the collateral nightmare of nuclear weapons has in effect served as a deterrent to nuclear war, and she challenges us to consider whether nuclear weapons aren’t on some level justifiable for having produced – to some extent – a neutralization of nuclear war. Could this same condition apply to battlefield robots, she asks. Could the threat of their imperfection and mistaken fatality convince the enemy not to engage in battle, giving the battle bots some sort of positive role as a deterrent device? In her own words, “assuming valid ethical concerns over whether a particular category of weapons meets legal standards of discrimination and proportionality, to what extent should concerns over the likely political outcome of not developing them drive ethical discussions over whether they should be developed?”
I personally feel the deterrence value of nuclear weapons is not a useful comparison for determining the value of warbots. Has the nuclear weapon truly paid off, brought peace, etc.? So far, it’s produced a volatile and precarious relationship between India and Pakistan, and ratcheted up the race for every other country to have one as a chief source of security – not to mention the paranoia of them falling into the wrong hands. It leaves a background taste of imminent threat in the mouth of military conflict. We may have skirted nuclear disaster for now, but will we be able to do so forever just by lording these weapons over each other’s heads, especially if and when other countries get them? And what about the prospect of the “enemy” one day obtaining the same or similar battlefield robots of their own – where will that put future conflict, its “cleanliness”, or its chaotic potential?
I’m not at all suggesting that robots don’t have a useful role in a military capacity – clearly they do. But giving them the power to pull the trigger on their own is venturing into dangerous and irresponsible territory, if you ask me, one that may open the door to a whole new world of mechanized logic and justifications for killing and for enacting targeted assassinations – a “thanatotactics” (‘death tactics’), as Eyal Weizman has termed this logic as practiced by the IDF in Palestine.
Of course there is the sloppy collateral nature of battlebots to contend with, but as big a fear, I think, is the other side of the coin: the extreme precision and capability they may bring to carrying out executions from places and situations where soldiers cannot go themselves. When that sort of ability to attack and kill becomes more limber and precise, you have to be afraid of the serial logic and ease for justifying targeted assassinations that could also emerge. The move to explore such literal killing machines places the trigger much too far out of the touch of the military’s humanity, if I can call it that. When people don’t have to do the killing themselves, it becomes a lot easier to instruct killing and to execute it – and, even worse, to find cause and justification for it. It becomes an act of cold institutional administration rather than a moment of human judgment, and I don't think the state's apparatus for carrying out the type of premeditated murder that machines may one day allow needs to be expanded any further. Perfecting these types of weapons seems like a very slippery slope, and one I think the military would do better to avoid.

Also, for more reading:
'Military Omniscience': America's robot army, by Stephen Graham.


Anonymous said...

Bryan, you might want to check out Kenneth Anderson's response to Charli at Opinio Juris. It's a much more legal critique of her write-up, but you might find it useful. Matt Armstrong has also written quite a lot about the strategic communications of unmanned warfare at his blog, MountainRunner.

3:22 AM  
