Wars of Ideas and killer robots

The book Wired for War, by P. W. Singer, is a fairly broad-ranging account of how technology has changed war throughout history, right up to the current rise of robotics. A number of “revolutions in military affairs” have occurred over the course of history, and world powers have a very poor record of making the transition before being supplanted by early adopters. The use of robots and drones in combat is the source of an ongoing debate. Still, war has existed for a long time, and as we approach a time when people may no longer be directly involved in combat, the focus should probably be turned back to the reasons we fight.

I’m no historian, but I have some passing knowledge of major conflicts in recorded history, and I have the internet at my fingertips, retinas and cochleas. Nation states and empires began to appear in ancient times, and the Greeks and the Romans were happy to conquer as much as they could. In the Middle Ages the Islamic empires expanded, and the European Christians fought them for lands both considered holy. The Cold War between the Soviet Union and the United States of America was fought largely on the grounds of different societal and economic ideals: capitalism versus communism.

Though probably an oversimplification, I’m going to fit a trend to these events. The underlying basis for these wars could be described as strong convictions: “we are people of a different ethnicity or nation–we want resources, power and glory”; “we have religious beliefs–we want to spread our religion and we want what is holy to us”; and “we have different beliefs regarding how our society and economy should be structured–we want to subjugate the intellectually and morally inferior”.

Even more generally, these thoughts boil down to ideas of national identity, religion and ethnicity, and ethics and morality. These beliefs often combine dangerously with ideas of infallibility, self-importance and superiority. If people didn’t take these ideas so seriously, many wars might not have occurred. There has recently been a reduction in wars among developed nations, most likely a result of the spread of democracy and moderate views. Nevertheless, the ideas of nationalism, ethnicity and religion are still deeply ingrained in many places and are significant factors in current tensions and wars all over the world. If there were strong enough economic incentives, developed nations would likely still enter into conflicts.

Recent conflicts have been complicated by the lack of clear boundaries between sides. With the ideas underlying conflicts often coming from ethnicity and religion, the boundaries become blurry and the groups of people diffuse. Non-state actors emerge across larger areas and populations. As military technology gets more powerful and accessible, people holding fringe ideas can exert more threat, force and damage than they ever could before. Explosives are a glaring example of this.

Robots are nevertheless the source of the current debate, even though access to advanced robots is still mostly limited to advanced militaries and corporations. The main concerns surrounding their use are that wars will likely be easier to start and more common when countries don’t risk their own casualties, and that autonomous robots might be worse than humans at discriminating civilians from combatants.

Robots will almost certainly make wars less unattractive, but whether reduced reluctance to take part in wars is actually a bad thing depends somewhat on the wars and conflicts that are entered into. Peacekeeping would be a great use of robots, though perhaps not robots of the “killer” variety. Horrific conflicts are happening right now, and developed countries intervene minimally or not at all because of issues such as low economic incentives, UN vetoes, and the certain loss of life they would sustain.

No doubt robots would also make it easier to start wars, probably a less noble practice than intervening in civil wars and genocides. However, initiating wars is no longer an easy thing to do secretively. The proliferation of digital recording devices and the internet makes it much harder for wars to escape international attention. But perhaps more important is that most developed countries that possess robots are liberal democracies, where there is more to the opposition to war than just the loss of soldiers’ lives. This opposition to war is a large source of the negative sentiment people have for killer robots in the first place.

Even though the “more wars” issue is far from resolved, let’s turn our attention to the use of killer robots in the conflict itself.

First, from a technical perspective, robots will one day almost certainly be more capable and more objective than human soldiers in determining the combatant/non-combatant status of people. Also, because robots aren’t at risk of dying in the same way as a person, the need to rush decisions and retaliate with lethal force is reduced. But let’s return to the idea-centric view of conflict, and consider the use of robots in conflicts such as the “War on Terror”.

The drones being used in Pakistan and Afghanistan are being used against people who believe in the oppression of women and death sentences for blasphemers–people who oppose many things considered universal rights by the West. It seems that to many it’s a foregone conclusion that the “bad guys” need to be killed, and that the main issue with using robots and drones is civilian casualties. However, a real problem is that many “civilians” share the beliefs underlying the conflict, and at any moment the only difference between a civilian and a combatant might be whether they are firing a weapon or carrying a bomb.

Robotic war technology may get to the point of perfect accuracy and discrimination, but the fact will remain that the “combatants” are regular people fighting for their beliefs. If “perfect” robotic weapons were created that were capable of immediately killing any person who plants a bomb or shoots a rifle, this would be an incredible tool for war, or rather, oppression. I think that kind of oppression would deserve a lot of concern.

Even in the face of something as oppressive as a ubiquitous army of perfect killer robots, people in possession of the right (or wrong) mixture of ideas, and with strong enough conviction, are unlikely to give up. Suicide bombers don’t let death dissuade them. Are oppression and violence even the best response to profoundly incompatible beliefs and ideas? Even ideas that, themselves, advocate oppression and violence?

Counter-insurgencies are not conventional wars. Beliefs and ideas are central to the insurgents’ cause–the combatants aren’t going to give in because their leader is killed or their land taken. The conflict is unlikely to end if the fighting only targets people; it needs to target their beliefs and ideas. Hence the conceived strategy of winning the “hearts and minds” of the people. Ideas are not “defeated” until there aren’t any people who still dogmatically follow them.

While robotics looks to be the next revolution in military affairs, for conflicts between nation states and counter-insurgencies alike, improvements in the technology and techniques for influencing the beliefs that cause wars might be a better revolution. To that end, rather than having robots that kill, a productive use of robots could be to safely educate, debate with, and persuade violent opponents to change their beliefs and come to a peaceful resolution. Making robots capable of functioning as diplomats might be a bigger technical challenge than making robots that can distinguish civilians from combatants. But let’s be fanciful.

It continues to be a great tragedy that the ideas that give rise to conflict are themselves rarely put to the test. It’s unfortunate, but I think it’s no coincidence. Many of the most persistent ideas–the ideas people fight to defend–are put on pedestals: challenging the idea is treason, blasphemy or, even worse, politically incorrect. 😐


4 responses to “Wars of Ideas and killer robots”

  1. This is a great post! Excellently written. You seem to have covered all of the bases except for one that I would like your views on.

    The one issue I haven’t really seen discussed in the use of robots in combat is how doing so changes the existential or moral character of war, and in ways we should be troubled by.

    I tried to do this in my own review of Wired for War a while back:

    http://utopiaordystopia.com/2012/02/02/psychobot/

    though I wish I hadn’t relied so much on the work of a neuroscience writer, Jonah Lehrer, who ended up being a serial plagiarist and fabricator. The brutal fact of the matter is that in wars people kill other people, sometimes, too often, innocent people. This willful killing results in real moral trauma to the most humane of us. The military would have us think otherwise, but the countless soldiers who suffer PTSD after combat aren’t “sick”, which might be the reason they do not respond to anxiety and depression medication; rather, they are perfectly normal. They killed someone, perhaps a child, and they are tortured by it.

    To the military this is a “problem” they have to deal with. Robots do not have this “problem”.
    They are no more “troubled” if they kill an actual terrorist, a child, or an entire family. There is no difference between shooting candy at a crowd and shooting bullets; everything is just ones and zeroes.

    The poor and troubled soldier in this audio clip:

    http://www.npr.org/2012/11/21/165663154/moral-injury-the-psychological-wounds-of-war

    knows what war actually is, which our drones and robots, unless we program them against our own military effectiveness, never will.

    • Hi Rick,

      Thanks for the comment! There is a lot more to be said on this and related topics. I hope I intelligibly conveyed the concept of wars being fought over ideas, and some of the implications of using robots.

      I read your “Psychobot” post; I don’t think I had heard the term “moral injury” before, but it is certainly another important consideration in the application of robots in battle. Oversimplifying once again, I think it largely comes down to moral values and regret. Robots could be programmed to act as morally as the perfect person (though for many, “perfect” is open to interpretation). People feel guilt and regret when they have committed, or feel as though they’ve allowed to occur, something that is against their moral values. Robots could potentially also be programmed to feel guilt and regret, but it might be worth examining why we feel guilt and regret.

      People acting against their morals, or allowing immoral acts, is a particular problem in war, where factors outside of their control might be compelling them. As you are no doubt aware, many people have, historically, been conscripted to fight wars. They often had no strong attachment to the nations, empires or ideals that were the reason for the killing. When these people got to the battlefield they would have the choice to fight or be court-martialled. They would often have had no conviction that the killing they were doing was good; rather, on the contrary, they would have thought the fighting was bad. Hence the trauma, guilt and regret–the moral injury. All the same, we could question what value moral injuries actually have for individuals and for society.

      I haven’t listened to the entire podcast, but I’ll give you my preliminary thoughts. From one point of view, most people who suffer from a moral injury are healthy. They have legitimate feelings of guilt and regret for something that went against their moral beliefs. Guilt and regret can also be strong feelings that compel us to change our moral values, and if these changes are for the better, that is good. But when moral injuries linger after the fact, they seem more like a punishment for being a moral human being than a helpful state of feeling, and, just as with PTSD, many people would gladly be rid of them. In my view, becoming too emotional upon remembering an event can hinder our ability to learn from it. Just because something is a part of “being human” doesn’t mean that it is a “good” thing. Consider also that some people might feel regret for not doing things widely considered immoral when they had the opportunity to do so.

      There is a real issue though, in that robots don’t come with the same predisposition to abhor war and violence as most people. In terms of calculating relative values, it might one day be possible for robots to be programmed to be as ethical and moral as the perfect person. The problem is that the people willing to start wars probably already have compromised values: they already see their cause as a justification for killing. If they get their way, the robots could be programmed to act in ways that would be widely considered immoral. However, considering regret and guilt as an impetus to changing future behaviour, regret could be greatly useful to robots and artificial intelligence. Even given good sets of principles for calculating the value of actions, robots would probably still benefit from being capable of learning from mistakes and correcting future actions. You might be interested to know that “regret” is actually used in machine learning terminology as a measure of the additional cost sustained compared (in hindsight) to the optimal actions.
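      To make that concrete, here is a minimal sketch of how cumulative (external) regret is often computed in an online learning setting; the function name, the toy loss table and the chosen actions are purely illustrative, not taken from any particular library:

      ```python
      # Illustrative sketch: cumulative (external) regret in online learning.
      # The loss values and chosen actions below are made up for the example.
      import numpy as np

      def external_regret(losses, chosen):
          """Loss actually incurred minus the loss of the best single fixed action,
          judged in hindsight over the whole sequence of rounds.

          losses: array of shape (T, K), loss of each of K actions in each of T rounds
          chosen: length-T sequence of the action index picked in each round
          """
          incurred = losses[np.arange(len(chosen)), chosen].sum()  # what the learner actually paid
          best_fixed = losses.sum(axis=0).min()                    # best single action in hindsight
          return incurred - best_fixed

      # Toy example: 3 rounds, 2 possible actions.
      losses = np.array([[0.2, 0.9],
                         [0.8, 0.1],
                         [0.3, 0.4]])
      chosen = [1, 0, 0]                      # actions a hypothetical learner picked
      print(external_regret(losses, chosen))  # 2.0 - 1.3 = 0.7
      ```

      An algorithm whose average regret shrinks towards zero as the rounds go on is, in effect, learning from its mistakes, which is the sense in which regret could be useful to a robot correcting its future actions.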

      Toby

  2. All great points, Toby. Your thoughts regarding ideas and violence in your initial post echo in some ways those of Steven Pinker in his huge and relatively recent book The Better Angels of Our Nature. I tried to take a stab at reviewing it here:

    http://utopiaordystopia.com/2012/11/01/a-utopian-reading-of-pinkers-better-angels-of-our-nature/

    In regards to your response to my comments, I wonder how much research, if any, is being put into building robots with EVEN STRICTER rules of engagement than current US soldiers have now? Rules for robots such as NEVER kill civilians if the lives of living soldiers on your own side are not directly at risk. This would use the precision capabilities of robots and the idea that they are, for now, considered morally expendable for the protection of innocent human beings, something that sadly is precisely the opposite of the way the current “drone wars” appear to be waged.

    After thinking it over some more, I think this desire to hide ourselves from the existential characteristics of war is even broader than our current novel use of “killer robots”.
    It can be seen in Pentagon research into altering human memory, profiled here in Wired Magazine:

    http://www.wired.com/dangerroom/2011/12/fear-erasing-drugs/

    I think this research stems largely from humanitarian impulses, but nevertheless has real ethical dangers.

  3. Your review seems like a good critique of Steven Pinker’s book. I haven’t read much of Pinker’s work, and it sounds like ‘Better Angels of Our Nature’ is a bit of a marathon effort to read! If I remember rightly, you mentioned that Pinker thought poorly of utopian ideologies, blaming genocides, wars and the like on them. It seems to me that any ideology that can justify mass murder probably shouldn’t qualify as utopian. Problems certainly arise when people hold their beliefs with great conviction, and when these beliefs discriminate against groups and sanction violent methods of persuasion. I think Pinker is rather sceptical of human-like artificial intelligence, but for most, the positive ideas of the Technological Singularity are an inclusive utopia.

    I think there is starting to be more attention paid by philosophers and ethicists to the use of robots in conflict. It brings to light some of the views of military officials when they want to give drones the capacity to fight back if they are under fire. In essence they see the dollar value of the drone as greater than the value of people’s lives. I agree that the rules of engagement should be much stricter for robots. It makes sense to me that weapons in general (including robots and drones) should be used only for self-defence. I don’t think it’s true to call much of what is currently being conducted “self-defence”.

    In general though, the technical challenges of making robots autonomous enough to operate in battle-zones without supervision, let alone reliably distinguishing between civilians and combatants, have yet to be overcome. We don’t have the technology to give machines the perceptual capabilities to make informed decisions yet, so it would still be unethical to use autonomous robots in my opinion. This is the issue many people are concerned about at the moment: the robots aren’t reliable enough to make decisions autonomously, not yet anyway.

    Remote-controlled robots and drones are still an issue, but partly for the same reason: the people controlling them make decisions based on incomplete or inconclusive information. P. W. Singer describes at least one instance of this, when a drone was used to kill a man in Afghanistan largely because he seemed to fit the height profile of Osama bin Laden. This also highlights the values under which the military operates, whether fighting in person or from afar: civilian casualties are acceptable. If civilian casualties weren’t acceptable, they wouldn’t launch missiles unless they were absolutely certain no civilians were in danger.

    I agree that anything that seems to alter memories should be treated with caution, though the PTSD treatment trial described in the Wired link does, as you suggest, seem to be for humanitarian reasons. I believe I’ve heard of this treatment before; it sounded like a good way to relieve the needless suffering and debilitation of PTSD. In addition, from what I remember hearing, the memories themselves weren’t erased, just the connection of the memories to the stress/fear response.

    There are certainly ethical issues surrounding any treatment or technology that makes war a more attractive proposition. I’ll need to think some more about the view that we are trying to hide from the characteristics and consequences of war. There are a lot of factors and feelings that lead to conflict, and unfortunately, even when they are identified, that doesn’t usually provide an immediate pragmatic solution.
