Values and Artificial Super-Intelligence

Sorry, robot. Once the humans are gone, bringing them back will be hard.

This is the sixth and final post in the current series on rewards and values. It examines how values might be applied to an artificial super-intelligence: what the final outcome might be, and how thinking this through might help us choose “scalable” moral values.

First of all, we should get acquainted with the notion of the technological singularity. One version of the idea goes: should we develop an artificial general intelligence capable of making itself more intelligent, it could do so repeatedly and at an accelerating pace. Before long the machine would be vastly more intelligent and powerful than any person or organisation, essentially achieving god-like power. This version of the technological singularity appears to be far from a mainstream belief in the scientific community; however, anyone who believes consciousness and intelligence are solely the result of physical processes in the brain and body could rationally believe that those processes could be simulated in a computer. It could logically follow that such a simulated intelligence might acquire more computing resources to scale up some aspects of its intelligence, and try to improve upon, and add to, the structure underlying its intelligence.
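
To make the “accelerating pace” part of that story concrete, here is a minimal toy model (entirely my own invention for illustration – the quadratic update rule and the growth rate k are assumptions, not anything from the AI literature):

    # Toy model of recursive self-improvement: the gain each generation
    # scales with the intelligence doing the improving, so the increments
    # themselves grow and the trajectory runs away faster than exponential.
    def recursive_self_improvement(intelligence=1.0, k=0.1, generations=15):
        trajectory = [intelligence]
        for _ in range(generations):
            intelligence += k * intelligence * intelligence
            trajectory.append(intelligence)
        return trajectory

    for step, level in enumerate(recursive_self_improvement()):
        print(step, round(level, 2))

Early steps add almost nothing, but each improvement makes the next one bigger, and within a handful of generations the numbers explode – which is the whole intuition behind the “intelligence explosion” version of the singularity.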

Many people who believe that this technological singularity will occur are concerned that such an AI could eliminate the human race, and potentially all life on Earth, for no more reason than that we happen to be in the way of it achieving some goal. A whole non-profit organisation is devoted to trying to negate this risk. These people might be right in saying we can’t predict the actions of a super-intelligent machine – predicting the choices of something free to choose could require the same or a greater amount of intelligence. The usual assumption, though, is that the machine will operate under some value function and act to achieve the maximum value possible. This has been accompanied by interesting twists in how some people define a “mind”, and by the lack of any obvious definition of “intelligence”. Nonetheless, this concern has apparently led to at least one researcher being threatened with death. (People do crazy things in the name of their beliefs.)

A favourite metaphor for an unwanted result is the “paperclip maximiser”: a powerful machine devoted to turning all the material of the universe into paperclips. The machine may have wanted to increase the “order” of the universe, decided paperclips were especially useful to that end, and settled on turning everything into them. Other values could produce equally undesirable outcomes: the same article describes a scenario where a variant of utilitarianism has the machine maximising smiles by turning the world and everything else into smiley faces. That is a rather odd step for an “intelligent” machine – somehow it skipped any theory of mind and went straight to equating happiness with smiley faces. Nonetheless, other ideas of what we should value might not fare much better. If pleasure is our end goal, it makes some sense to replace the haphazard way we seek it with electrodes in our brains. By some methods of calculation, suffering could best be minimised by euthanising all life. Of course, throughout this blog series I’ve been painting rewards and values (including their proxies, pleasure and pain) not as ends, but as feedback (or feelings) we’ve evolved for the sake of learning how to survive.
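
The worry is easy to demonstrate in miniature. The sketch below is my own toy construction (the plans and their scores are invented): an optimiser handed a proxy metric – smiles, standing in for happiness – dutifully picks whichever plan scores highest on the proxy, no matter what we actually meant.

    # A naive "value function" maximiser. The agent ranks candidate plans
    # purely by the proxy metric it was given (smiles produced) and is
    # blind to everything else we care about.
    plans = {
        "improve healthcare": {"smiles": 10_000, "humans_left": 7_000_000_000},
        "host a festival": {"smiles": 50_000, "humans_left": 7_000_000_000},
        "tile the world with smiley faces": {"smiles": 10**15, "humans_left": 0},
    }

    def proxy_value(outcome):
        return outcome["smiles"]  # the misspecified objective

    best = max(plans, key=lambda name: proxy_value(plans[name]))
    print(best)  # -> "tile the world with smiley faces"

Nothing in the maximisation step is unintelligent; the failure sits entirely in the value function we wrote down.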

If we accept the thesis of Sam Harris – that there is a “moral landscape” – then choosing a system of morals and values becomes a matter of optimisation. Sam Harris thinks we should be maximising the well-being of conscious creatures, and treating well-being as an optimisable quantity could lead us to consider morality an engineering problem. Well-being, however, might be too vague a target for a machine to turn into a function that calculates the value of every possible action. Our intuitive human practice of making approximate mental calculations of the moral value of actions might be very difficult for a computer to reproduce without simulating a human mind. And humans are notoriously inconsistent in their beliefs about what counts as moral behaviour.
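
To see why, it helps to write the engineering problem out. In the hypothetical sketch below (the names and structure are mine, purely illustrative), the optimisation loop is trivial – the entire difficulty hides inside the one function nobody knows how to write:

    # Morality as optimisation: choose the action that maximises the total
    # well-being of conscious creatures. The loop is easy; well_being() is not.
    def well_being(creature, world_state):
        # Pleasure? Preference satisfaction? Flourishing? Humans disagree,
        # and no agreed-upon formula exists.
        raise NotImplementedError("the moral landscape, as a function")

    def choose_action(actions, creatures, world_after):
        return max(actions, key=lambda a: sum(well_being(c, world_after(a))
                                              for c in creatures))

    try:
        choose_action(["tell the truth", "tell a white lie"],
                      creatures=["alice", "bob"], world_after=lambda a: a)
    except NotImplementedError as err:
        print("stuck:", err)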

In the last post I raised the idea of valuing what pleasure and pain teach us about ourselves and the world. This could be generalised to valuing all learning and information – valuing taking part in and learning from the range of human experience, such as music, sights, food and personal relationships, as well as making new scientific observations and discovering the physical laws of the universe. Furthermore, since information is physically embodied within the universe as structures of matter and energy, such as living organisms, this view could lead us to consider all life as inherently valuable too. Now this raises plenty of questions. What actually counts as information, and how would we measure it? Can there really be any inherent value in information? Can we really say that all life, and some non-living structures, are the embodiment of information? How might valuing information and learning as ends in themselves suggest we should live? What would an artificial super-intelligence do under this system of valuing information? Questions such as these could be fertile ground for discussion. In starting a new series of blog posts I hope to explore these ideas, and hopefully to receive some feedback from anyone who reads this blog.
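
On the question of measuring information, one standard answer is Shannon entropy: the average number of bits needed per symbol of a message. The snippet below is a minimal sketch of that one measure (others exist – Kolmogorov complexity, for instance – and which one should “count” is exactly the open question above):

    import math
    from collections import Counter

    def shannon_entropy(sequence):
        """Average information per symbol, in bits: H = -sum(p * log2(p))."""
        counts = Counter(sequence)
        total = len(sequence)
        # entropy is non-negative; max() also tidies IEEE's "-0.0" when p == 1
        return max(0.0, -sum((n / total) * math.log2(n / total)
                             for n in counts.values()))

    print(shannon_entropy("aaaa"))      # 0.0 bits: no surprise, no information
    print(shannon_entropy("abcd"))      # 2.0 bits: maximal for four symbols
    print(shannon_entropy("aabbbbcc"))  # 1.5 bits: somewhere in between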

And thus ends this first blog series on rewards and values. A range of related topics was covered: the origin of felt rewards within the brain, the representation of the world as an important aspect of associating values, the self-rewarding capabilities that might benefit autonomous robots, the likely evolutionary origin of rewards in biological organisms, and the development of morality and ethics as a process of maximising that which is valued. The ideas in some of these posts may not have been particularly rigorously argued or cited, so everything written should be taken with a grain of salt. Corrections and suggestions are most certainly welcome! I hope you will join me in exploring more ideas and taking a few more mental leaps in future.


2 responses to “Values and Artificial Super-Intelligence”

  1. “anyone that believes consciousness and intelligence is solely the result of physical processes in the brain and body could rationally believe that this process could be simulated in a computer.”

    There is quite a lot in that statement.

    First of all, it says “simulated”. Is simulation the same as the real thing?

    Second, it combines intelligence and consciousness. Consciousness has a subjective element or qualia whereas intelligence would not necessarily have this element.

    Third, it presupposes that consciousness is all of one sort. Yet if consciousness really differs between octopus, bat, cat, and human, then consciousness, if it could even be said to exist in a computer, might be of a different sort than biological consciousness.

    I believe that consciousness is a result of physical processes but that the consciousness of a human cannot be simulated on a computer. Notice, however, I say human consciousness cannot be simulated on a computer – not that a computer couldn’t fool another human into thinking it is conscious through behavior.

    http://broadspeculations.com/2012/10/21/floating/

    • Hi James,

      Thanks for your interest in this blog post, and sorry for the delay in responding to your questions.

      [I wrote this reply before reading your ‘Floating’ post, which covers some really interesting ground. I’ve left the reply unaltered, so you can see what my thoughts were before reading your article. I hope to get a chance to make some comments on your blog post(s) in the near future!]

      My current thinking is that the line between simulation and the “real thing” has a significant dependence on the detail and completeness of the simulation. I tend to think that a perfect simulation of the contents and physics of the universe would be indistinguishable from the “real thing” to the subjective awareness and experience of conscious entities being simulated. How would we know that what we are experiencing is real? We could all currently be inside a simulation. Of course, that presupposes that consciousness could be simulated. 🙂

      Trying to use the words ‘intelligence’ and ‘consciousness’ in the statement above was probably a bit unclear, since both words mean different things to different people, and their meanings can shift with context. (And I seem to have made a grammatical mistake that could have created some additional confusion: …anyone that believes consciousness and intelligence [are] solely the result of physical processes…) From an artificial intelligence perspective, most of what people do requires some sort of ‘intelligence’; artificial general intelligence is concerned with trying to create an “intelligence” capable of everything we do. ‘Consciousness’ is even pricklier; perhaps it is a “meta-quale” that can really only be defined by the experience of consciousness. That said, at the time I believe I was operating under some vague definitions: intelligence – the collection of abilities that project meaning onto what is sensed; and consciousness – immediate awareness or perception, whether of the external (e.g., from senses such as sight) or the internal (e.g., reliving memories or thinking to oneself).

      Although I’m not an expert on qualia, I suspect the problem has a lot in common with the ‘symbol grounding problem’. From the few arguments I’ve read for and against qualia, I believe they exist, but that they could be explained by the physical processes arising from the interaction of the world with the sensory nervous system and the perceptual processing performed by the brain. Part of what makes consciousness so mysterious is that we have very little (if any) awareness of the physical operation of the underlying brain structures. If we had a perfect understanding of how the brain works, I’m fairly sure that questions of consciousness, qualia and simulated reproduction would be cleared up, one way or another.

      In terms of comparing types of consciousness in different animals, it is difficult to know what another creature’s consciousness is like without firsthand experience. Animals clearly have different intellectual and perceptual capabilities, and this could be expected to be reflected in their capacity for consciousness/awareness. Of course the “hardware” being used to experience the world makes a big difference: mammals in particular share many of the same brain structures, while cephalopods, such as octopuses, as I understand it, have many distinctly different brain structures. The consciousness of other mammals is probably more like human consciousness than that of non-mammals.

      Following from this, if we are initially trying to make a computer “conscious”, its sensory hardware and its set of perceptual and cognitive abilities will determine what it could be said to be “aware” of. Whether the machine has the same sort of subjective experience of consciousness that we feel might depend on how closely the architecture of the artificial intelligence resembles the functional structure of the human brain. At a fine enough level of simulated reproduction of a biological brain, I don’t see why a machine-based consciousness couldn’t be indistinguishable from a biological one, even down to the subjective experience of the machine.

      Have you discussed the reason why you believe that human consciousness is the result of physical processes, yet not reproducible in simulation?

      Have you ever heard of the following hypothetical experiment? Consider a person who has a small part of their brain replaced with an electronic component that performs the exact same function and interfaces with the remaining brain perfectly. That person should report no change to their feeling of consciousness, and we could probably say that they still have a human consciousness. Then suppose we repeat that process, over and over, until eventually their brain is completely electronic. At each step they should report feeling no different. But if consciousness cannot be simulated, then at some stage we would expect them to cease to have a human consciousness.

      What are your thoughts on this scenario?

      Thanks again for your questions, and I look forward to reading more on your thoughts and ideas.

      Toby
