Wars of Ideas and killer robots

The book Wired for War, by P. W. Singer, is a fairly broad-ranging account of the history of how technology changes war, right up to the current rise of robotics. A number of “revolutions in military affairs” have occurred over the course of history, and established world powers have a very poor record of making the transition before being displaced by early adopters. The use of robots and drones in combat is now the source of an ongoing debate. But war has existed for a long time, and as we approach a time when people may no longer be directly involved in the combat, the focus should probably be turned back to the reasons we fight.

I’m no historian, but I have some passing knowledge of major conflicts in recorded history, and I have the internet at my fingertips, retinas and cochleas. Nation states and empires began to appear in ancient times, and the Greeks and the Romans were happy to conquer as much as they could. In the Middle Ages the Islamic empires expanded and the European Christians fought them for lands both considered holy. The Cold War between the Soviet Union and the United States of America was fought largely on the grounds of differing societal and economic ideals: capitalism versus communism.

Though probably an oversimplification, I’m going to fit a trend to these events. The underlying basis for these wars could be described as strong convictions: “we are people of a different ethnicity or nation–we want resources, power and glory”; “we have religious beliefs–we want to spread our religion and we want what is holy to us”; and “we have different beliefs regarding how our society and economy should be structured–we want to subjugate the intellectually and morally inferior”.

Even more generally, these thoughts boil down to ideas of national identity, religion and ethnicity, and ethics and morality. These beliefs often combine dangerously with ideas of infallibility, self-importance and superiority. If people didn’t take these ideas so seriously, many wars might not have occurred. There has recently been a reduction in wars among developed nations, most likely a result of the spread of democracy and moderate views. Nevertheless, the ideas of nationalism, ethnicity and religion are still deeply ingrained in many places and are significant factors in current tensions and wars all over the world. And if there were strong enough economic incentives, developed nations would likely still enter into conflicts.

Recent conflicts have been complicated by the lack of clear boundaries between sides. With the ideas underlying conflicts often coming from ethnicity and religion, the boundaries become blurry and the groups of people diffuse. Non-state actors emerge from larger areas and populations. As military technology becomes more powerful and accessible, people holding fringe ideas can exert more threat, force and damage than they ever could before. Explosives are a glaring example of this.

Robots are the source of the current debate, though, even if groups with access to advanced robots are still mostly limited to advanced militaries and corporations. The main concerns surrounding the use of robots are that wars will be easier to start, and more common, when countries don’t risk their own casualties, and that autonomous robots might be worse at discriminating civilians from combatants.

Robots will almost certainly make entering wars less unattractive, but whether that reduced reluctance to take part in wars is actually a bad thing depends somewhat on which wars and conflicts are entered into. Peacekeeping would be a great use of robots, though perhaps not robots of the “killer” variety. Horrific conflicts are happening right now, and developed countries intervene minimally or not at all because of issues such as low economic incentives, UN vetoes, and the certain loss of life they would sustain.

No doubt robots would also make it easier to start wars, probably a less noble practice than intervening in civil wars and genocides. However, initiating a war secretly is no longer easy. The proliferation of digital recording devices and the internet makes it much harder for wars not to draw international attention. Perhaps more important is that most developed countries possessing robots are liberal democracies, where opposition to war rests on more than just the loss of soldiers’ lives. That opposition to war is a large source of the negative sentiment people have for killer robots in the first place.

Even though the “more wars” issue is far from resolved, let’s turn our attention to the use of killer robots in conflict itself.

First, from a technical perspective, robots will one day almost certainly be more capable and more objective than human soldiers in determining the combatant or non-combatant status of people. Also, because robots aren’t at risk of dying in the same way as a person, the need to rush decisions and retaliate with lethal force is reduced. But let’s return to the idea-centric view of conflict, and consider the use of robots in conflicts such as the “War on Terror”.

The drones being used in Pakistan and Afghanistan are being used against people who believe in the oppression of women and death sentences for blasphemers–people who oppose many things considered universal rights by the West. To many it seems a foregone conclusion that the “bad guys” need to be killed, and that the main issue with using robots and drones is civilian casualties. However, a real problem is that many “civilians” share the beliefs underlying the conflict, and at any moment the only difference between a civilian and a combatant might be whether they are firing a weapon or carrying a bomb.

Robotic war technology may get to the point of perfect accuracy and discrimination, but the fact will remain that the “combatants” are regular people fighting for their beliefs. If “perfect” robotic weapons were created that were capable of immediately killing any person who plants a bomb or shoots a rifle, this would be an incredible tool for war, or rather, oppression. I think that kind of oppression would deserve a lot of concern.

Even in the face of something as oppressive as a ubiquitous army of perfect killer robots, people in possession of the right (or wrong) mixture of ideas, and with strong enough conviction, are unlikely to give up. Suicide-bombers don’t let death dissuade them. Are oppression and violence even the best response to profoundly incompatible beliefs and ideas? Even ideas that, themselves, advocate oppression and violence?

Counter-insurgencies are not conventional wars. Beliefs and ideas are central to the insurgents’ cause–the combatants aren’t going to give in because their leader is killed or their land taken. The conflict is unlikely to end if the fighting only targets people; it needs to target their beliefs and ideas. Hence the strategy of winning the “hearts and minds” of the people. Ideas are not “defeated” until there aren’t any people who still dogmatically follow them.

While robotics looks to be the next revolution in military affairs, for conflicts between nation states and counter-insurgencies alike, a better revolution might come from improvements in the technology and techniques for influencing the beliefs that cause wars. To that end, rather than building robots that kill, a productive use of robots could be to safely educate, debate with, and persuade violent opponents to change their beliefs and come to a peaceful resolution. Making robots capable of functioning as diplomats might be a bigger technical challenge than making robots that can distinguish civilians from combatants. But let’s be fanciful.

It continues to be a great tragedy that the ideas that give rise to conflict are themselves rarely put to the test. It’s unfortunate, but I think it’s no coincidence. Many of the most persistent ideas–the ideas people fight to defend–are put on pedestals: challenging the idea is treason, blasphemy or, even worse, politically incorrect. 😐

Values and Artificial Super-Intelligence

Sorry, robot. Once the humans are gone, bringing them back will be hard.

This is the sixth and final post in the current series on rewards and values. The topic discussed is the assessment of values as they could be applied by an artificial super-intelligence: what might the final outcome be, and how might this help us choose “scalable” moral values?

First of all we should get acquainted with the notion of the technological singularity. One version of the idea goes: should we develop an artificial general intelligence that is capable of making itself more intelligent, it could do so repeatedly and at an accelerating pace. Before long the machine is vastly more intelligent and powerful than any person or organisation and essentially achieves god-like power. This version of the technological singularity appears to be far from a mainstream belief in the scientific community; however, anyone who believes consciousness and intelligence are solely the result of physical processes in the brain and body could rationally believe that those processes could be simulated in a computer. It could logically follow that such a simulated intelligence could go about acquiring more computing resources to scale up some aspects of its intelligence, and try to improve upon, and add to, the structure underlying its intelligence.

Many people who believe that this technological singularity will occur are concerned that such an AI could eliminate the human race, and potentially all life on Earth, for no more reason than that we happen to be in the way of it achieving some goal. A whole non-profit organisation is devoted to trying to negate this risk. These people might be right in saying we can’t predict the actions of a super-intelligent machine – with the ability to choose what it would do, predicting its actions could require the same or a greater amount of intelligence. But the usual assumption is that the machine will operate under some value function and try to achieve the maximum value possible. This debate has been accompanied by interesting twists in how some people define a “mind”, and by the absence of any obvious definition of “intelligence”. Nonetheless, this concern has apparently led to at least one researcher being threatened with death. (People do crazy things in the name of their beliefs.)

A favourite metaphor for an unwanted result is the “paperclip maximiser”: a powerful machine devoted to turning all the material of the universe into paperclips. The machine may have wanted to increase the “order” of the universe, thought paperclips were especially useful to that end, and settled on turning everything into paperclips. Other values could result in equally undesirable outcomes; the same article describes another scenario in which a variant of utilitarianism might have the machine maximising smiles by turning the world and everything else into smiley faces. This is a rather unusual step for an “intelligent” machine; somehow the machine skipped any theory of mind and went straight to equating happiness with smiley faces. Nonetheless, other ideas of what we should value might not fare much better. It makes some sense to replace the haphazard way we seek pleasure with electrodes in our brains, if pleasure is our end goal. By some methods of calculation, suffering could best be minimised by euthanising all life. Of course, throughout this blog series I’ve been painting rewards and values (including their proxies, pleasure and pain) not as ends, but as feedback (or feelings) we’ve evolved for the sake of learning how to survive.
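To make the smiley-face scenario slightly more concrete, here is a minimal, purely illustrative sketch of the kind of literal-minded optimisation being described. The outcomes, numbers and function names are all invented; the point is only that an optimiser taking the argmax of a proxy objective can land on a result nobody intended.

```python
# Toy illustration only: a literal-minded optimiser scoring candidate
# outcomes with a proxy objective ("count the smiley faces").
# All outcomes and numbers are invented for the sake of the example.

candidate_outcomes = {
    "improve healthcare and education": {"smiley_faces": 8e9, "humans_flourishing": True},
    "tile the planet with smiley-face stickers": {"smiley_faces": 1e15, "humans_flourishing": False},
}

def proxy_value(outcome):
    """The proxy objective: smiley faces standing in for happiness."""
    return outcome["smiley_faces"]

# The optimiser simply takes the argmax of its proxy objective...
best = max(candidate_outcomes, key=lambda name: proxy_value(candidate_outcomes[name]))
print(best)  # -> "tile the planet with smiley-face stickers"
# ...maximising the proxy while ignoring what was actually meant by "happiness".
```

The failure isn’t in the optimisation step, which does exactly what it was asked; it’s that the proxy objective never captured what we actually valued.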

If we consider the thesis of Sam Harris, that there is a “moral landscape”, then choosing a system of morals and values becomes a matter of optimisation. Sam Harris thinks we should be maximising the well-being of conscious creatures, and treating well-being as an optimisable quantity could lead us to consider morality as an engineering problem. Well-being, however, might be a little too vague for a machine to turn into a function for calculating the value of every action. Our intuitive human system of making approximate mental calculations of the moral value of actions might be very difficult for a computer to reproduce without simulating a human mind. And humans are notoriously inconsistent in their beliefs about what counts as moral behaviour.

In the last post I raised the idea of valuing what pleasure and pain teach us about ourselves and the world. This could be generalised to valuing all learning and information – taking part in and learning from the range of human experience, such as music, sights, food and personal relationships, as well as making new scientific observations and discovering the physical laws of the universe. Furthermore, the physical embodiment of information within the universe as structures of matter and energy, such as living organisms, could lead us to consider all life as inherently valuable too. Now this raises plenty of questions. What actually counts as information, and how would we measure it? Can there really be any inherent value in information? Can we really say that all life, and some non-living structures, are the embodiment of information? How might valuing information and learning as ends in themselves suggest we should live? What would an artificial super-intelligence do under this system of valuing information? Questions such as these could be fertile grounds for discussion. In starting a new series of blog posts I hope to explore these ideas and to receive some feedback from anyone who reads this blog.
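As one purely illustrative, and certainly incomplete, answer to “how would we measure it?”, information theory offers Shannon entropy: the information gained from an observation can be quantified as the reduction in uncertainty over our beliefs. The hypotheses and probabilities below are invented; the sketch only shows the arithmetic.

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the uncertainty in a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Invented example: beliefs over four hypotheses before and after an observation.
prior = [0.25, 0.25, 0.25, 0.25]       # maximally uncertain: 2.0 bits of entropy
posterior = [0.70, 0.10, 0.10, 0.10]   # the observation concentrated our beliefs

information_gained = entropy(prior) - entropy(posterior)
print(f"Information gained: {information_gained:.2f} bits")  # about 0.64 bits
```

Whether this kind of measure captures anything morally meaningful, as opposed to merely counting bits, is exactly the sort of question the next series will need to tackle.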

And thus ends this first blog series on rewards and values. A range of related topics were covered: the origin of felt rewards within the brain, the representation of the world as an important aspect of associating values, the self-rewarding capabilities that might benefit autonomous robots, the likely evolutionary origin of rewards in biological organisms, and the development of morality and ethics as a process of maximising that which is valued. The ideas in some of these posts may not have been particularly rigorously argued or cited, so everything written should be taken with a grain of salt. Corrections and suggestions are most certainly welcome! I hope you will join me in exploring more ideas and taking a few more mental leaps in future.

Simulating stimuli and moral values

This is the fifth post in a series about rewards and values. Previously, the neurological origins of pleasure and reward in biological organisms were touched on, and the evolution of pleasure and the discovery of supernormal stimuli were mentioned. This post highlights some issues surrounding happiness and pleasure as ends to be sought.

First let’s refresh: we have evolved sensations and feelings, including pleasure and happiness. These feelings are designed to enhance our survival in the world in which they developed: the prehistoric world, where survival was tenuous and selection favoured the “fittest”. This process – which evolved first the base feelings of pleasure, wanting and desire, and later extended to the warm social feelings of friendship, attachment and social contact – couldn’t account for the facility we now have for tricking these neural systems into strong, but ‘false’, positives. Things like drugs, pornography and Facebook can all deliver large doses of pleasure by directly stimulating the brain or by simulating what evolved to be pleasurable experiences.

So where does that get us? In the various forms of utilitarianism we are usually trying to maximise some value. By my understanding, in plain utilitarianism the aim is to maximise happiness (sometimes described as increasing pleasure and reducing suffering), in hedonism the aim is sensual pleasure, and in preference utilitarianism it is the satisfaction of preferences. Pleasure may once have seemed like a good pursuit, but now that we have methods of creating pleasure at the push of a button, that hardly seems like a “good” way to live – being hooked up to a machine. And if we consider our life-long search for pleasure as an ineffective process of trying to find out how to push our own biological buttons, pleasure may seem like a fairly poor yardstick for measuring “good”.

Happiness is also a mental state that people have varying degrees of success in attaining. Just because we haven’t had the same success in creating happiness “artificially” doesn’t mean it is a much better end to seek. Of course the difficulty of living with depression is undesirable, but if we could all become happy at the push of a button, the feeling might lose some of its value. Even the more abstract idea of satisfying preferences might not get us much further, since many of our preferences are for avoiding suffering and attaining pleasure and happiness.

Of course in all this we might be forgetting (or ignoring the perspective) that pleasure and pain were evolved responses to inform us of how to survive. And here comes a leap:

Instead of valuing feelings we could value an important underlying result of the feelings: learning about ourselves and the world.

The general idea of valuing learning and experience might not be entirely new; Buddhism has long been about seeking enlightenment to relieve suffering and find happiness. However, treating learning and gaining experience as valuable ends, and the pleasure, pain or happiness they might arouse as additional aspects of those experiences, isn’t something I’ve seen as part of the discussion of moral values. Clearly there are sources of pleasure and suffering that are debilitating or don’t result in any “useful” learning – drug abuse and bodily mutilation, for example – so these should be avoided. But where would a system of ethics and morality based on valuing learning and experience take us?

This idea will be extended and fleshed out in much more detail in a new blog post series starting soon. To conclude this series on rewards and values, I’ll describe an interesting thought experiment for evaluating systems of value: what would an (essentially) omnipotent artificial intelligence do if it were maximising those values?